“We’ve been using compute-intensive machine learning in our products for the past 15 years. We use it so much that we even designed an entirely new class of custom machine learning accelerator, the Tensor Processing Unit. Just how fast is the TPU, actually? Today, in conjunction with a TPU talk for a National Academy of Engineering meeting at the Computer History Museum in Silicon Valley, we’re releasing a study that shares new details on these custom chips, which have been running machine learning applications in our data centers since 2015.”