“We’ve been using compute-intensive machine learning in our products for the past 15 years. We use it so much that we even designed an entirely new class of custom machine learning accelerator, the Tensor Processing Unit. Just how fast is the TPU, actually? Today, in conjunction with a TPU talk for a National Academy of Engineering meeting at the Computer History Museum in Silicon Valley, we’re releasing a study that shares new details on these custom chips, which have been running machine learning applications in our data centers since 2015.”