Wednesday, December 18, 2024

Google: Record-shattering quantum AI chip accesses ‘parallel universes’


The quantum realm is full of theories and concepts that are impossible for most people to understand, along with numbers so large, or so small, that they are equally difficult for our minds to grasp.

Google’s recent breakthrough announcement touches all of those angles, and does not disappoint in any of these “mind-blowing” categories.

The company announced last week that its new quantum AI chip completed a benchmark computation in just five minutes that would take one of today's fastest supercomputers 10 septillion years of non-stop work to finish.

One septillion equals 1 followed by 24 zeros. So, it looks like this:

1,000,000,000,000,000,000,000,000

To put that into perspective, even counting one number every second, it would take you tens of quadrillions of years to reach a septillion, millions of times longer than the age of the universe.
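A rough back-of-envelope check, assuming the illustrative rate of one number per second, shows where those figures come from:

```python
# Back-of-envelope check: counting to one septillion at one number per second.
# The one-per-second rate is an illustrative assumption, not a figure from Google.

SEPTILLION = 10 ** 24                      # 1 followed by 24 zeros
SECONDS_PER_YEAR = 60 * 60 * 24 * 365.25

years_to_count = SEPTILLION / SECONDS_PER_YEAR
age_of_universe = 13.8e9                   # years, roughly

print(f"Years of counting: {years_to_count:.1e}")                           # ~3.2e16
print(f"Times the universe's age: {years_to_count / age_of_universe:.1e}")  # ~2.3e6
```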

Why is this a big deal?

Quantum computing, as cool and cutting-edge as it sounds, has always struggled with instability.

Tiny particles don't follow the same tidy rules that govern everyday objects, and even the most advanced chip can fail when its fragile quantum states fall apart at the slightest disturbance.

Researchers have tried to harness this wiggly behavior for decades, hoping to use it for computations that regular machines would never finish.

There have been many promises, yet each attempt has run into the same stumbling block: errors pile up faster than anyone can correct them, stalling progress.

Google plays in the quantum world

Quantum error correction offered a possible solution but came with its own complications. It demands spreading information across multiple qubits, which are the fundamental units of quantum data.

That sounds simple in theory, but in practice it creates a juggling act. Every extra qubit is another place where something can go wrong, so the scheme only pays off when the error rate stays below a certain critical threshold.

Until recently, no one had managed to show that error rates could dip beneath that crucial point for a code specifically designed to scale. This changed with a demonstration on a new quantum chip architecture.
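To get a feel for why that threshold matters, here is a minimal toy sketch using a simple repetition code, not the scalable code the new chip actually runs. Below the toy code's threshold, adding qubits drives the logical error rate down; above it, adding qubits only makes failures more likely.

```python
from math import comb

def logical_error_rate(p, d):
    """Chance a distance-d repetition code fails: a majority of its d
    physical copies flip, given a physical error rate p per copy."""
    return sum(comb(d, k) * p**k * (1 - p)**(d - k)
               for k in range((d + 1) // 2, d + 1))

# Below this toy code's threshold, more qubits means fewer logical errors...
print([round(logical_error_rate(0.01, d), 6) for d in (3, 5, 7)])
# ...above it, piling on qubits only makes the failures more likely.
print([round(logical_error_rate(0.60, d), 3) for d in (3, 5, 7)])
```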

This accomplishment was shared by Hartmut Neven, quantum scientist and founder of Google’s Quantum AI lab.

Accessing parallel universes?

Neven called Willow’s performance “astonishing.” He added that its high-speed results “lend credence to the notion that quantum computation occurs in many parallel universes.”

Neven gave a shoutout to Oxford University physicist David Deutsch for coming up with the idea that successfully developing quantum computers could support the “many worlds interpretation” of quantum mechanics and the existence of a multiverse.

Since the 1970s, Deutsch has been a pioneer of quantum computing, drawn to the field not so much for the technology itself as for the chance to test his multiverse theory.

What is a parallel universe?

Also known as alternate or multiple universes, parallel universes are the idea that there could be other realities existing side by side with our own.

Imagine our universe as just one bubble in a vast cosmic foam, where each bubble is a different universe with its own unique laws of physics, histories, and even versions of ourselves.

Scientists explore this concept through theories like the multiverse, which suggest that there might be countless other universes out there, each with its own set of possibilities.

While we haven’t found concrete evidence for parallel universes yet, the idea sparks fascinating discussions about the nature of reality and what might lie beyond what we can currently see and understand.

Dissenting opinion, but praise nonetheless

However, astrophysicist-turned-writer Ethan Siegel didn’t agree with Google’s take. He accused them of “conflating unrelated concepts, which Neven also ought to know.”

Siegel explained that Neven was mixing up the mathematical space where quantum mechanics happens with the idea of parallel universes and a multiverse.

According to Siegel, even if quantum computers succeed, they don’t prove the existence of parallel universes.

Despite his disagreements, Siegel praised Google’s achievement with Willow, calling it “a truly excellent step forward in the world of quantum computation.”

He believes that this breakthrough could help solve some of Earth’s biggest problems, like discovering new medicines, designing better batteries for electric cars, and advancing fusion and new energy sources.

Neven echoed this optimism, saying, “Many of these future game-changing applications won’t be feasible on classical computers; they’re waiting to be unlocked with quantum computing.”

Google’s quantum Willow chip

The Willow chip is the newest superconducting processor designed by a team at Google Quantum AI.

Unlike older devices that struggled to manage errors, Willow pushes performance into a new zone that supports techniques intended to make quantum error correction actually live up to its promise.

This system meets the conditions for a particular approach known as the surface code. Past attempts ran into snags as more qubits were added, but here, Willow moves beyond that barrier.

Code distances and Google’s quantum chip

Quantum error correction frameworks often refer to something called code distance. In simple terms, this represents how many qubits are used to safeguard a chunk of quantum data.

Larger distances, like moving from a code distance of three to five to seven, are supposed to reduce the overall probability of failure, provided the underlying error rate stays below that critical threshold.

On this new device, each step up in distance chops the logical error rate in half. That kind of improvement had long been a major goal for researchers in quantum computing.
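Taken at face value, that halving compounds quickly. A short projection, starting from an assumed placeholder error rate rather than a published figure, shows how the logical error rate would keep shrinking as the code distance grows:

```python
# Illustrative projection: each step up in code distance (3 -> 5 -> 7 -> ...)
# cuts the logical error rate in half. The starting rate is an assumed
# placeholder, not a number reported for the chip.

error_rate = 3e-3        # assumed logical error rate at distance 3
for distance in (3, 5, 7, 9, 11):
    print(f"distance {distance:>2}: logical error rate ~ {error_rate:.1e}")
    error_rate /= 2      # "chops the logical error rate in half" per step
```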

“Willow performed a standard benchmark computation in under five minutes that would take one of today’s fastest supercomputers 10 septillion years,” Neven said in the announcement.

Slow and steady marathon pace

Running a test for only a few cycles might not reveal the full story of a system’s stability. Google’s new quantum chip overcomes this by pushing performance for up to a million cycles.

The device keeps its below-threshold performance intact across a timescale that would normally leave other systems gasping for air. Maintaining real-time decoding accuracy over such an extended period is no small feat.

The team behind Willow arranged their operations so that corrections could be applied on the fly, an approach that keeps the chip from drifting off track.
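In outline, that on-the-fly loop looks something like the sketch below. The functions here are hypothetical placeholders standing in for the real control electronics and decoder, not Google's actual stack.

```python
# Schematic of a repeat-and-correct memory experiment: every cycle the chip's
# check qubits are measured, the results are decoded immediately, and the
# inferred correction is updated before errors can pile up.
# `measure_syndromes` and `decode` are hypothetical placeholders.

def run_memory_experiment(n_cycles, measure_syndromes, decode):
    correction = set()                   # running record of inferred bit flips
    for _ in range(n_cycles):
        syndromes = measure_syndromes()  # stabilizer checks for this cycle
        correction ^= decode(syndromes)  # fold new corrections in right away
    return correction                    # accounted for when the data is read out
```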

“We see Willow as an important step in our journey to build a useful quantum computer,” said Sundar Pichai, CEO of Google.

Looking beyond classical bottlenecks

Traditional supercomputers handle complex tasks using billions of tiny switches working in well-understood ways.

Quantum computers, by contrast, tap into phenomena that cannot be reduced to classical shortcuts. The problem until now has always been keeping the delicate quantum states alive long enough to complete a meaningful calculation.

With Willow, the team shows that qubits can collaborate in such a way that errors do not race out of control. The demonstration suggests that quantum chips can move toward computations that might surpass anything a classical system can handle.
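One way to see why classical shortcuts give out: simply writing down the state of n qubits takes 2^n complex numbers, so the memory a classical machine would need blows up long before reaching the qubit counts on today's chips. The figures below are a standard textbook estimate, not numbers from the Willow results.

```python
# Memory a classical computer would need just to store the full state of
# n qubits: 2**n complex amplitudes at 16 bytes each (a textbook estimate).

BYTES_PER_AMPLITUDE = 16
for n in (10, 30, 50, 70):
    amplitudes = 2.0 ** n
    gigabytes = amplitudes * BYTES_PER_AMPLITUDE / 1e9
    print(f"{n:>2} qubits: {amplitudes:.2e} amplitudes, ~{gigabytes:.2e} GB")
```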

Why does Google want a quantum chip?

Hardware that can pass these tough reliability tests hints that quantum computing will not be stuck in the toy-problem phase forever.

Increasing the code distance without losing the ability to correct errors suggests that large collections of qubits could someday power algorithms relevant to real-world tasks.

Examples include speeding up complex simulations, refining drug discovery pipelines, and exploring new materials for energy storage.

Willow’s success in reaching below-threshold error rates for extended periods might encourage work in industries that have been waiting on solid evidence that quantum hardware will grow into a trustworthy tool.

When error correction becomes routine

Quantum error correction has never been about eliminating mistakes entirely. It is about making them so rare that the machine can run a calculation to its conclusion.

If future designs build on Willow’s stability and scaling characteristics, there might come a day when such corrections occur behind the scenes, invisible to users.

Achieving that level of fault tolerance could let quantum computers crunch through workloads that stretch far beyond the reach of classical hardware. This reveals a practical route for scaling up these incredible machines.
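Rough arithmetic shows why "so rare" has to be very rare indeed: if the chance of a logical error at any one step is p, the chance an N-step calculation finishes cleanly is roughly (1 - p)^N, so a billion-step algorithm needs error rates far below one in a billion. The numbers below are illustrative, not figures from Google's results.

```python
# Probability an N-step computation finishes with no logical error, assuming
# independent errors at rate p per step. Rates and step count are illustrative.

def survival_probability(p, n_steps):
    return (1 - p) ** n_steps

N_STEPS = 1_000_000_000                  # a billion-step algorithm
for p in (1e-6, 1e-9, 1e-12):
    print(f"per-step error {p:.0e}: chance of a clean run = "
          f"{survival_probability(p, N_STEPS):.3f}")
```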

Scaling fault tolerance with quantum chips

The efforts from Google Quantum AI and other groups worldwide are not happening in a vacuum.

The field of quantum error correction has attracted attention from many researchers intent on finding a path to practical devices.

Over the past decade, studies have shown the importance of certain lattice designs and logical qubits arranged in careful layouts. Willow now shows that with the right chip architecture and error correction schemes, thresholds can be crossed.

This brings the entire field closer to building machines that solve useful problems. While the journey is not over, one essential puzzle piece has clicked into place.
