Scientists in China have built a new type of tensor processing unit (TPU) — a special type of computer chip — using carbon nanotubes instead of a traditional silicon semiconductor. They say the new chip could open the door to more energy-efficient artificial intelligence (AI).
AI models are hugely data-intensive and require massive amounts of computational power to run. This presents a significant obstacle to training and scaling up machine learning models, particularly as demand for AI applications grows. That is why scientists are developing new components — from processors to computing memory — designed to consume orders of magnitude less energy while running the necessary computations.
Google scientists created the TPU in 2015 to address this challenge. These specialized chips act as dedicated hardware accelerators for tensor operations — complex mathematical calculations used to train and run AI models. By offloading these tasks from the central processing unit (CPU) and graphics processing unit (GPU), TPUs enable AI models to be trained faster and more efficiently.
Unlike conventional TPUs, however, this new chip is the first to use carbon nanotubes — tiny, cylindrical structures made of carbon atoms arranged in a hexagonal pattern — in place of traditional semiconductor materials like silicon. This structure allows electrons (charged particles) to flow through the nanotubes with minimal resistance, making them excellent conductors of electricity. The scientists published their research on July 22 in the journal Nature Electronics.
According to the scientists, their TPU consumes just 295 microwatts (μW) of power (where 1 W is 1,000,000 μW) while delivering 1 trillion operations per second — an energy efficiency of roughly 3,400 tera-operations per second per watt (TOPS/W). By comparison, Google’s Edge TPU performs 4 trillion operations per second (TOPS) using 2 W of power, or 2 TOPS/W. This makes China’s carbon-based TPU nearly 1,700 times more energy-efficient.
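The comparison can be checked with a few lines of arithmetic. Note that the ~3,400 TOPS/W figure is derived from one consistent reading of the numbers above (1 trillion operations per second drawn at 295 μW), not quoted directly from the paper:

```python
# Figures from the text
cnt_power_w = 295e-6      # carbon nanotube TPU: 295 microwatts
cnt_ops_per_s = 1e12      # 1 trillion operations per second (assumed reading)
edge_ops_per_s = 4e12     # Google Edge TPU: 4 TOPS...
edge_power_w = 2.0        # ...on 2 W of power

# Efficiency = operations per second per watt of power drawn
cnt_efficiency = cnt_ops_per_s / cnt_power_w
edge_efficiency = edge_ops_per_s / edge_power_w

print(cnt_efficiency / 1e12)              # ~3390 TOPS/W
print(edge_efficiency / 1e12)             # 2 TOPS/W
print(cnt_efficiency / edge_efficiency)   # ~1695 — "nearly 1,700 times"
```

The ratio of the two efficiencies lands at about 1,695, matching the "nearly 1,700 times" claim.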
“From ChatGPT to Sora, artificial intelligence is ushering in a new revolution, but traditional silicon-based semiconductor technology is increasingly unable to meet the processing needs of massive amounts of data,” Zhiyong Zhang, co-author of the paper and professor of electronics at Beijing’s Peking University, told TechXplore. “We have found a solution in the face of this global challenge.”
The new TPU is composed of 3,000 carbon nanotube transistors and is built with a systolic array architecture — a network of processors arranged in a grid-like pattern.
Systolic arrays pass data through each processor in a synchronized, step-by-step sequence, similar to items moving along a conveyor belt. This enables the TPU to perform multiple calculations simultaneously by coordinating the flow of data and ensuring that each processor works on a small part of the task at the same time.
This parallel processing enables computations to be performed much more quickly, which is crucial for AI models processing large amounts of data. It also reduces how often the memory — specifically a type called static random-access memory (SRAM) — needs to read and write data, Zhang said. By minimizing these operations, the new TPU can perform calculations faster while using much less energy.
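The conveyor-belt behavior described above can be sketched in a short simulation. This is a generic output-stationary systolic matrix multiply, not the chip's actual circuit design; the skewing scheme is a standard textbook arrangement:

```python
import numpy as np

def systolic_matmul(A, B):
    """Simulate a systolic array multiplying A (n x k) by B (k x m).

    Each grid cell (i, j) accumulates one output element. Rows of A flow
    in from the left and columns of B from the top, skewed in time so
    that matching operands meet in the right cell at the right step —
    the conveyor-belt pattern described in the text.
    """
    n, k = A.shape
    k2, m = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((n, m))
    # Enough clock steps for the last operands to reach cell (n-1, m-1).
    for t in range(n + m + k - 2):
        for i in range(n):
            for j in range(m):
                # Cell (i, j) sees operand pair s at step t = i + j + s.
                s = t - i - j
                if 0 <= s < k:
                    C[i, j] += A[i, s] * B[s, j]
    return C

A = np.random.rand(3, 4)
B = np.random.rand(4, 2)
assert np.allclose(systolic_matmul(A, B), A @ B)
```

At each clock step, every cell in the grid does one multiply-accumulate in parallel, which is why the architecture suits the matrix-heavy workloads of neural networks.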
To test their new chip, the scientists built a five-layer neural network — a collection of machine learning algorithms designed to mimic the structure of the human brain — and used it for image recognition tasks.
The TPU achieved an accuracy rate of 88% while maintaining power consumption of only 295 μW. In the future, similar carbon nanotube-based technology could provide a more energy-efficient alternative to silicon-based chips, the researchers said.
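The article does not give the network's architecture, so as a rough illustration only, a five-layer feedforward network for image recognition might look like the following. The layer widths, activation choice, and 28×28 input size are assumptions, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Standard rectified-linear activation
    return np.maximum(0.0, x)

# Hypothetical layer widths: a 784-pixel input (e.g. a 28x28 image)
# narrowing to 10 class scores across five weight layers.
sizes = [784, 256, 128, 64, 32, 10]
weights = [rng.normal(0, 0.1, (a, b)) for a, b in zip(sizes[:-1], sizes[1:])]

def forward(x):
    for W in weights[:-1]:
        x = relu(x @ W)      # hidden layers
    return x @ weights[-1]   # output layer: class scores

scores = forward(rng.random(784))
print(scores.shape)  # (10,) — one score per class
```

In an accelerator like the one described, the matrix products inside `forward` are exactly the operations the systolic array executes.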
The scientists plan to continue refining the chip to improve its performance and make it more scalable, they said, including by exploring how the TPU could be integrated into silicon CPUs.