“Machine learning provides the underlying oomph to many of Google’s most-loved applications. In fact, more than 100 teams are currently using machine learning at Google today, from Street View, to Inbox Smart Reply, to voice search. But one thing we know to be true at Google: great software shines brightest with great hardware underneath. That’s why we started a stealthy project at Google several years ago to see what we could accomplish with our own custom accelerators for machine learning applications.

The result is called a Tensor Processing Unit (TPU), a custom ASIC we built specifically for machine learning — and tailored for TensorFlow. We’ve been running TPUs inside our data centers for more than a year, and have found them to deliver an order of magnitude better-optimized performance per watt for machine learning. This is roughly equivalent to fast-forwarding technology about seven years into the future (three generations of Moore’s Law).

TPU is tailored to machine learning applications, allowing the chip to be more tolerant of reduced computational precision, which means it requires fewer transistors per operation. Because of this, we can squeeze more operations per second into the silicon, use more sophisticated and powerful machine learning models and apply these models more quickly, so users get more intelligent results more rapidly. A board with a TPU fits into a hard disk drive slot in our data center racks.”
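The "reduced computational precision" the quote mentions refers to doing arithmetic on low-bit-width values instead of 32-bit floats. A minimal sketch of the idea in Python/NumPy is shown below: real weights are mapped to 8-bit integers with a shared scale factor, trading a small rounding error for a 4x smaller representation (the function names here are illustrative, not TPU or TensorFlow APIs).

```python
import numpy as np

def quantize_int8(x):
    """Map float32 values to int8 codes with a symmetric per-tensor scale.
    This is a simplified sketch of low-precision inference, not TPU code."""
    scale = np.max(np.abs(x)) / 127.0
    q = np.round(x / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float32 values from the int8 codes."""
    return q.astype(np.float32) * scale

weights = np.array([0.5, -1.2, 0.03, 0.9], dtype=np.float32)
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# Each int8 code occupies a quarter of the storage of a float32,
# and integer multiply-accumulate units need far fewer transistors
# than floating-point ones -- the trade the quote describes.
print(q.dtype, np.max(np.abs(weights - restored)))
```

In hardware terms, an 8-bit integer multiplier is dramatically smaller and cheaper than a 32-bit floating-point one, which is how tolerating lower precision translates into more operations per second on the same silicon.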