Google’s new chip showcases error correction and performance advancements, paving the way for a practical, large-scale quantum computer.
The Willow chip marks a significant milestone in a journey that started over a decade ago. When I founded Google Quantum AI in 2012, the vision was to create a practical, large-scale quantum computer that could harness quantum mechanics—the “operating system” of nature as we understand it—to benefit society by advancing scientific discovery, developing valuable applications, and addressing some of the world’s biggest challenges. As part of Google Research, our team has followed a long-term roadmap, and Willow takes us a major step closer to realizing commercially viable applications.
Exponential Quantum Error Correction — Below Threshold!
Errors are one of the biggest challenges in quantum computing. Qubits, the basic units of computation in quantum computers, are prone to quickly exchanging information with their environment, making it difficult to preserve the data needed for a computation. Generally, the more qubits you use, the more errors occur, and the system begins to behave classically.
In today’s publication in Nature, we shared groundbreaking results showing that as we increase the number of qubits in Willow, we not only reduce errors but also enhance the quantum nature of the system. We tested progressively larger arrays of physical qubits, scaling up from a 3×3 grid, to a 5×5 grid, and finally to a 7×7 grid, each encoding a single logical qubit. With each increase, we applied our latest advancements in quantum error correction, successfully halving the error rate each time. This means we achieved an exponential reduction in errors.
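The scaling described above can be sketched in a few lines. This is a minimal illustration, not the paper's model: the ~2x suppression per step up in code distance is the result reported here, while the starting error rate is a made-up placeholder.

```python
# Sketch of the "below threshold" scaling: each step up in code distance
# (3 -> 5 -> 7) roughly halves the logical error rate, i.e. errors shrink
# exponentially with distance. The suppression factor of ~2 is the figure
# reported above; the baseline error rate is purely illustrative.

def logical_error_rate(base_rate, suppression, distance):
    """Logical error rate for an odd code distance d, assuming each
    distance step (d -> d+2) divides the rate by the suppression factor."""
    steps = (distance - 3) // 2          # steps beyond the d=3 baseline
    return base_rate / suppression ** steps

BASE = 3e-3        # hypothetical d=3 logical error rate per cycle (assumption)
LAMBDA = 2.0       # ~2x suppression per distance step, as reported

for d in (3, 5, 7, 9, 11):
    print(f"distance {d:2d}: {logical_error_rate(BASE, LAMBDA, d):.2e} per cycle")
```

Extrapolating the same law to larger distances is what makes "below threshold" so important: as long as the suppression factor stays above 1, adding qubits keeps driving errors down.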
This historic achievement is referred to as being “below threshold” — demonstrating that we can drive errors down while scaling up the number of qubits. Achieving this milestone is a key indicator of real progress in quantum error correction, a challenge that has been at the forefront of the field since Peter Shor introduced quantum error correction in 1995.
This result also marks several other scientific “firsts.” For instance, it’s one of the first compelling examples of real-time error correction in a superconducting quantum system — a crucial step for any practical computation. Without the ability to correct errors quickly enough, computations can be ruined before they’re completed. Additionally, this is a “beyond breakeven” demonstration, where our arrays of qubits have longer lifetimes than the individual physical qubits themselves, providing undeniable proof that error correction is enhancing the system as a whole.
As the first system to reach “below threshold,” Willow stands as the most convincing prototype for a scalable logical qubit to date. This achievement is a strong indication that building truly useful, large-scale quantum computers is within reach. Willow moves us closer to running practical, commercially relevant algorithms that are impossible to replicate on classical computers.
10 Septillion Years on One of Today’s Fastest Supercomputers
To measure Willow’s performance, we used the random circuit sampling (RCS) benchmark, which was pioneered by our team and is now a widely accepted standard in the field. RCS is the most computationally demanding benchmark that can be performed on a quantum computer today. Think of it as a litmus test for quantum computing — it verifies whether a quantum computer can do something that classical computers simply cannot. Any team developing a quantum computer should first check if it can surpass classical machines on RCS; otherwise, there is significant doubt that it can handle more complex quantum tasks. We’ve consistently used this benchmark to track progress across different generations of chips, having previously reported Sycamore results in October 2019 and again in October 2024.
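RCS results are typically scored with linear cross-entropy benchmarking (XEB): sample bitstrings from the chip, look up each one's ideal probability, and compute F = 2^n · mean(p(x)) − 1, which lands near 1 for a faithful quantum sampler and near 0 for pure noise. The toy below is only a sketch of that scoring formula, not the team's actual pipeline; the "ideal" distribution is a synthetic Porter–Thomas-like stand-in rather than a simulated circuit.

```python
import random

# Toy illustration of linear cross-entropy benchmarking (XEB), the usual
# score for random circuit sampling. For an n-qubit circuit with ideal
# output probabilities p(x), F_XEB = 2^n * mean(p(x_i)) - 1 over sampled
# bitstrings x_i: near 1 for an ideal sampler, near 0 for uniform noise.
# The "ideal" distribution below is synthetic, not a real circuit.

random.seed(0)
n = 10
dim = 2 ** n

# Porter-Thomas-like ideal probabilities: exponential weights, normalized.
weights = [random.expovariate(1.0) for _ in range(dim)]
total = sum(weights)
p = [w / total for w in weights]

def xeb(samples):
    return dim * sum(p[x] for x in samples) / len(samples) - 1

shots = 20000
ideal_samples = random.choices(range(dim), weights=p, k=shots)   # noiseless sampler
noisy_samples = [random.randrange(dim) for _ in range(shots)]    # fully depolarized

print(f"XEB, ideal sampler: {xeb(ideal_samples):+.3f}")  # close to 1
print(f"XEB, uniform noise: {xeb(noisy_samples):+.3f}")  # close to 0
```

The catch, of course, is that computing p(x) classically is exactly the intractable part at Willow's scale, which is why RCS works as a beyond-classical test at all.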
Willow’s performance on this benchmark is nothing short of astonishing: it completed a computation in under five minutes that would take one of today’s fastest supercomputers 10^25 years — or 10 septillion years — to accomplish. To put that into perspective, that’s 10,000,000,000,000,000,000,000,000 years. This staggering figure far exceeds known timescales in physics and is vastly greater than the age of the universe. This result lends further credence to the idea that quantum computation may involve multiple parallel universes, in line with the multiverse theory originally proposed by David Deutsch.
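A quick back-of-envelope check puts the quoted numbers in perspective; the ~13.8-billion-year age of the universe used below is a standard cosmological figure, not part of the result itself.

```python
# Scale check for the 10^25-year classical estimate versus Willow's
# ~5-minute runtime. The universe's age (~1.38e10 years) is a standard
# cosmological figure; the 5 minutes is the runtime quoted above.

classical_years = 10 ** 25
universe_age_years = 1.38e10

print(f"Universe ages needed: {classical_years / universe_age_years:.1e}")
# about 7e14 ages of the universe

minutes_per_year = 365.25 * 24 * 60
willow_minutes = 5
speedup = classical_years * minutes_per_year / willow_minutes
print(f"Rough speedup factor: {speedup:.1e}")
```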

Our assessment of how Willow outpaces one of the world’s most powerful classical supercomputers, Frontier, was based on conservative assumptions. For instance, we assumed unrestricted access to secondary storage (such as hard drives) without accounting for any bandwidth overhead—an overly generous and unrealistic scenario for Frontier. Naturally, as we saw after announcing the first beyond-classical computation in 2019, we expect classical computers to continue improving on this benchmark. However, the rapidly widening gap illustrates that quantum processors are accelerating at a double exponential rate, and as we scale up, they will continue to vastly outperform classical computers.
State-of-the-Art Performance
Willow was manufactured in our cutting-edge fabrication facility in Santa Barbara—one of the few such facilities worldwide built specifically for this purpose. System engineering is crucial when designing and fabricating quantum chips: every component, from single and two-qubit gates to qubit resets and readouts, must be carefully engineered and integrated. If any component underperforms or if components fail to work well together, it negatively impacts overall system performance. Therefore, optimizing system performance influences every part of our process, including chip architecture, fabrication, gate development, and calibration. The accomplishments we report consider quantum computing systems in their entirety, not just individual factors.
Our focus is on quality, not just quantity, because merely increasing the number of qubits won’t be beneficial if they aren’t of high quality. With 105 qubits, Willow now leads in performance across two key system benchmarks: quantum error correction and random circuit sampling. These algorithmic benchmarks are the best measures of overall chip performance. Other specific performance metrics are also critical. For instance, our T1 times, which indicate how long qubits can retain an excitation—the key quantum computational resource—are now approaching 100 µs (microseconds), a remarkable ~5x improvement over our previous generation of chips. Below is a table with key specifications for evaluating quantum hardware:
Willow System Metrics Table
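The T1 improvement mentioned above can be pictured as simple exponential decay: an excitation survives a time t with probability roughly exp(−t/T1), so a longer T1 buys exponentially more usable circuit depth. The ~100 µs figure and the ~5x improvement (implying roughly 20 µs previously) come from the text above; the 25 ns gate time below is an illustrative assumption, not a published spec.

```python
import math

# Why longer T1 matters: a qubit excitation decays roughly as exp(-t/T1).
# T1 ~ 100 us is the figure quoted above; ~20 us approximates the previous
# generation (a ~5x improvement). The 25-ns gate time is an assumption
# for illustration only.

def retention(t_us, t1_us):
    """Probability the excitation survives for t_us microseconds."""
    return math.exp(-t_us / t1_us)

GATE_US = 0.025   # assumed ~25 ns gate duration (illustrative)
for t1 in (20.0, 100.0):
    per_gate_loss = 1 - retention(GATE_US, t1)
    depth_to_half = math.log(2) * t1 / GATE_US
    print(f"T1 = {t1:5.1f} us: loss per gate ~{per_gate_loss:.2e}, "
          f"~{depth_to_half:.0f} gates before survival drops to 50%")
```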

What’s Next with Willow and Beyond
The next major challenge for the field is to demonstrate the first “useful, beyond-classical” computation on today’s quantum chips, one that has real-world applications. We are optimistic that the Willow generation of chips can help us achieve this milestone. So far, we’ve carried out two types of experiments: First, we’ve used the RCS benchmark, which compares performance against classical computers but doesn’t yet have practical commercial applications. Second, we’ve conducted scientifically interesting simulations of quantum systems that have led to new discoveries, though these simulations remain within the capability of classical computers. Our goal is to achieve both at once—advancing to algorithms that are not only beyond the reach of classical computers but also useful for solving real-world, commercially relevant problems.

Random Circuit Sampling (RCS) Chart
RCS, while extremely challenging for classical computers, has yet to demonstrate practical commercial applications.
We invite researchers, engineers, and developers to join us on this journey by exploring our open-source software and educational resources, including our new course on Coursera, where developers can learn the essentials of quantum error correction and help create the algorithms that will solve the problems of the future.

Quantum Computing Roadmap
This timeline illustrates our progress, from “Beyond Classical” to “Large Error-Corrected Quantum Computer,” highlighting six major milestones.
When my colleagues ask why I transitioned from the rapidly growing field of AI to focus on quantum computing, my answer is simple: both will be the most transformative technologies of our time. However, advanced AI will significantly benefit from access to quantum computing. That’s why I named our lab Quantum AI. Quantum algorithms have inherent scaling advantages, as evidenced by the progress we’re seeing with RCS. These same scaling benefits apply to many foundational computational tasks essential for AI. As a result, quantum computing will be indispensable for collecting training data that classical machines can’t access, optimizing certain learning architectures, and modeling systems where quantum effects play a crucial role. This includes discovering new medicines, designing more efficient batteries for electric vehicles, and accelerating progress in fusion energy and other alternative energy solutions. Many of these game-changing applications simply won’t be possible on classical computers—they’re waiting to be unlocked by quantum computing.
This content is sourced from https://blog.google/technology/research/google-willow-quantum-chip/.