Monday, December 16, 2024

Scientific breakthrough gives new hope to building quantum computers

One of the biggest remaining technical hurdles in the race to build practical quantum computers has been cleared, according to experts in the field, potentially opening the way for the first full-scale systems by the end of this decade.

The latest sign of the growing optimism in the decades-long pursuit of practical computers based on the principles of quantum mechanics follows a claim by Google that it had passed an important milestone in overcoming the inherent instability of quantum systems. 

The findings have been attracting attention in the quantum computing world since they were first released as a preprint in August. On Monday, they appeared in the peer-reviewed journal Nature.

Google has also released details of the new, more powerful quantum chip it built to carry out the demonstration, which it said would help it scale up its existing technology to reach practical usefulness.

Experts in the field compared Google’s achievement to another scientific milestone, the first man-made nuclear chain reaction in 1942. That breakthrough had also long been predicted in theory, but it took steady advances in equipment over many years to make a practical demonstration possible.

“This was theoretically proposed back in the 90s,” William Oliver, a physics professor at Massachusetts Institute of Technology, said of Google’s quantum demonstration. “We’ve been waiting for this result for many years.”

“It often takes decades for engineering to catch up with the theory, and that’s what we’re seeing here,” said Scott Aaronson, a computer science professor at the University of Texas at Austin.

Since the idea of quantum computers was first proposed, one of the greatest barriers has been building systems that are stable enough to handle large-scale computing operations.

They rely on quantum effects such as superposition, where particles exist in more than one state at the same time, and entanglement, where the states of two or more particles become linked so that they can no longer be described independently. But the quantum bits, or qubits, on which the systems are built hold their quantum states for only tiny fractions of a second, meaning that any information they hold is quickly lost.
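To make those two effects concrete, here is a toy sketch (our illustration using standard textbook gate matrices, not a model of Google's hardware): a Hadamard gate puts a single qubit into superposition, and a CNOT gate then entangles two qubits into a Bell pair.

```python
import numpy as np

# Toy illustration: a qubit is a 2-component state vector; a Hadamard
# gate puts it into an equal superposition of |0> and |1>.
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)
zero = np.array([1.0, 0.0])        # the |0> state
superposed = H @ zero              # amplitude ~0.707 on both |0> and |1>

# Entanglement: applying a CNOT gate to |+>|0> yields a Bell pair,
# whose two qubits can no longer be described independently.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
bell = CNOT @ np.kron(superposed, zero)
print(bell)                        # ~[0.707, 0, 0, 0.707]: (|00> + |11>)/sqrt(2)
```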

The more qubits involved in a calculation and the more computing operations performed, the greater the “noise” that creeps in as errors compound. Scientists have long hoped to counter this by using a technique known as error correction, which involves encoding the same information in more than one qubit so that the system as a whole retains enough information to carry out a coherent calculation, even as individual qubits “decohere”.
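The underlying idea of redundancy can be seen in a classical toy example, a repetition code decoded by majority vote. Real quantum error correction, including the scheme Google uses, is considerably subtler, since quantum states cannot simply be copied, but the principle of spreading one piece of information across many unreliable carriers is the same.

```python
import random

def encode(bit, copies=5):
    # Encode one logical bit as several identical physical bits.
    return [bit] * copies

def apply_noise(bits, flip_prob=0.1):
    # Flip each physical bit independently with some probability.
    return [b ^ 1 if random.random() < flip_prob else b for b in bits]

def decode(bits):
    # Recover the logical bit by majority vote.
    return 1 if sum(bits) > len(bits) // 2 else 0

logical = 1
noisy = apply_noise(encode(logical))
print(decode(noisy) == logical)   # True unless a majority of bits flipped
```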

For error correction to work, however, the individual qubits still need to be of high enough quality to make their combined output useful, rather than degenerating into “noise”.

In their paper in Nature, Google’s researchers said they had passed this important threshold for the first time. As they moved from a 3×3 grid of qubits to 5×5 and 7×7, the incidence of errors had dropped by a factor of two at each step, they said.
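Read as a back-of-envelope calculation (our extrapolation of the reported trend, not a figure from the paper), a factor-of-two suppression at each step compounds exponentially:

```latex
\varepsilon_{5\times 5} \approx \tfrac{1}{2}\,\varepsilon_{3\times 3}, \qquad
\varepsilon_{7\times 7} \approx \tfrac{1}{2}\,\varepsilon_{5\times 5}
  \approx \tfrac{1}{4}\,\varepsilon_{3\times 3}, \qquad
\varepsilon \approx 2^{-k}\,\varepsilon_{3\times 3} \ \text{after } k \text{ steps.}
```

This is what makes the result significant: so long as each enlargement keeps halving the error rate, making the grid bigger makes the logical qubit better, not worse.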

Google reported early last year that it had taken the first, tentative step towards effective error correction. But its latest findings amount to a far more robust proof that it can overcome the system’s inherent instability as it scales up the technology to the thousands of qubits that will be needed to carry out useful computations, said Julian Kelly, director of quantum hardware at Google.

The next steps would be to reduce the error rate of its interconnected groupings of qubits further and then to show it can link together more than one of these collections of qubits to perform useful computing operations, said Hartmut Neven, head of quantum at Google.

The advances in error correction have come from steady improvements in hardware. In particular, Google said that a switch to manufacturing qubits in its own facilities had brought a step-change in quality. The new qubits maintained their quantum states for nearly 100 microseconds, or one ten-thousandth of a second, the company said — a tiny amount of time, but still five times better than the performance of its previous hardware.

Besides greater stability, bringing new production techniques and larger-scale manufacturing to the field also promises to bring down costs. Google is aiming to cut the cost of components by a factor of 10 by the end of the decade, putting the cost of a fully functional quantum system at that point at around $1bn, Neven said.

Some rivals said that important design considerations could still affect progress and might present problems along the way.

IBM, which is racing Google to build the first full-scale, fault-tolerant quantum system, has questioned the practicality of the type of code that Google is using to handle error correction. Known as the surface code, this involves coordinating information across a large, two-dimensional grid of qubits.

Jay Gambetta, head of quantum computing at IBM, predicted that this type of code would require Google to build systems with billions of qubits to perform a practical computation. IBM had switched to a different, more modular type of code that would work with fewer qubits, he said.
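For a rough sense of where such qubit-count estimates come from, here is a hedged back-of-envelope sketch. A rotated surface code of distance d uses about 2d² physical qubits per logical qubit, and each increase of two in d is assumed to halve the logical error rate, roughly the suppression factor Google reported. All specific numbers below are illustrative assumptions, not figures from Google or IBM.

```python
def surface_code_overhead(target_logical_error, suppression=2.0,
                          base_error=1e-3):
    """Illustrative estimate only: find the code distance d at which an
    assumed base_error (at d=3) is suppressed to the target, halving
    (suppression=2) with each step of two in d, and report the
    physical-qubit cost of one logical qubit, ~2d^2 - 1 for a rotated
    surface code."""
    d, err = 3, base_error
    while err > target_logical_error:
        d += 2
        err /= suppression
    return d, 2 * d * d - 1

# E.g. reaching a one-in-a-trillion logical error rate under these
# assumed numbers:
d, physical = surface_code_overhead(1e-12)
print(d, physical)   # code distance and physical qubits per logical qubit

# A machine with 1,000 such logical qubits would then need roughly
# 1,000 * physical physical qubits, which is how estimates climb into
# the millions, and, with more pessimistic assumptions, far beyond.
```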

However, different design decisions bring their own challenges. The IBM technique involves laying out the qubits in a three-dimensional pattern rather than a flat grid, requiring a new type of connector that IBM said it hoped to produce by 2026.

Neven at Google said the company believed the techniques it had demonstrated in its latest research showed that it could reach practical scale, and that it would need about 1mn qubits to produce a full-scale system.
