Monday, November 18, 2024

Google unveils monster processing unit designed to power next-gen AI models


At its recent I/O event, Google announced Trillium, its sixth-generation tensor processing unit (TPU), which the company says is designed to power its next generation of AI models.

Google initially built TPUs to accelerate the machine learning workloads behind its own products, such as Gmail, Google Maps and YouTube. Six generations on, the company says Trillium will arrive with a 4.7x increase in peak compute performance per chip, along with double the high-bandwidth memory (HBM) capacity, compared with its TPU v5e design.

More specifically, Google's claim of a 4.7x increase in peak compute performance means the new TPU should be capable of pushing roughly 926 teraFLOPS at BF16 and 1,847 teraOPS at INT8, making it about twice as fast as the TPU v5p accelerators Google announced less than six months ago. How did Google achieve this? The company attributes the gain to enlarging the TPU's matrix multiply units (MXUs) and boosting the clock speed.
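For readers who want to trace the arithmetic, the short sketch below reproduces those estimates. The baseline per-chip figures for TPU v5e and v5p are taken from Google's published accelerator specs; the Trillium numbers are back-of-envelope values implied by the stated 4.7x uplift, not an official spec sheet.

```python
# Rough check of the Trillium performance estimates quoted above.
# Baselines (assumed from Google's published TPU specs):
#   TPU v5e: 197 TFLOPS BF16, 393 TOPS INT8 per chip
#   TPU v5p: 459 TFLOPS BF16 per chip
V5E_BF16_TFLOPS = 197
V5E_INT8_TOPS = 393
V5P_BF16_TFLOPS = 459

UPLIFT = 4.7  # Google's claimed peak-compute increase over v5e

trillium_bf16 = V5E_BF16_TFLOPS * UPLIFT          # ~926 TFLOPS BF16
trillium_int8 = V5E_INT8_TOPS * UPLIFT            # ~1,847 TOPS INT8
speedup_vs_v5p = trillium_bf16 / V5P_BF16_TFLOPS  # ~2.0x over TPU v5p

print(f"Trillium BF16: ~{trillium_bf16:.0f} TFLOPS")
print(f"Trillium INT8: ~{trillium_int8:.0f} TOPS")
print(f"Speedup over v5p (BF16): ~{speedup_vs_v5p:.1f}x")
```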

Furthermore, the new TPU is expected to have 32 GB of HBM operating at 1.6 TB/s, along with a chip-to-chip interconnect that can reach 3.2 Tbps.
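Those memory and interconnect figures are likewise consistent with a straight doubling of the TPU v5e's published specs, as the minimal check below illustrates (baseline values assumed from Google's v5e documentation).

```python
# TPU v5e baselines (assumed): 16 GB HBM at ~0.8 TB/s, 1.6 Tbps inter-chip links.
V5E_HBM_GB = 16
V5E_HBM_TBPS = 0.8
V5E_ICI_TBPS = 1.6

print(f"HBM capacity:  {V5E_HBM_GB * 2} GB")      # 32 GB
print(f"HBM bandwidth: {V5E_HBM_TBPS * 2} TB/s")  # 1.6 TB/s
print(f"ICI bandwidth: {V5E_ICI_TBPS * 2} Tbps")  # 3.2 Tbps
```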
