“We’ve been using compute-intensive machine learning in our products for the past 15 years. We use it so much that we even designed an entirely new class of custom machine learning accelerator, the Tensor Processing Unit. Just how fast is the TPU, actually? Today, in conjunction with a TPU talk for a National Academy of Engineering meeting at the Computer History Museum in Silicon Valley, we’re releasing a study that shares new details on these custom chips, which have been running machine learning applications in our data centers since 2015.”