The Quantum Leap We Didn’t See Coming: Shor’s Algorithm on 10,000 Atoms
What if we told you that the machine capable of cracking the encryption protecting your bank account, your emails, and your government’s secrets doesn’t need millions of components — but fewer than the number of people in a small town?
Welcome to FreeAstroScience, where we break down the biggest scientific stories into language that respects your intelligence without punishing your patience. We’re glad you’re here. Whether you’re a physics student, a tech enthusiast, or simply someone who refuses to let curiosity sleep — this article is for you. Stick with us to the end. The story we’re about to tell is one of the most consequential in 21st-century computing, and it was published just days ago.

A team of researchers from Caltech and the startup Oratomic dropped a paper on March 31, 2026, that sent shockwaves through the quantum computing world. Their claim? Shor’s famous algorithm — the one that can break RSA and elliptic-curve encryption — can run on as few as 10,000 reconfigurable atomic qubits. Not a million. Not half a million. Ten thousand.
Let’s talk about why that number matters, what changed, and what it means for all of us.
Table of Contents
- 1. What Is Shor’s Algorithm — and Why Should You Care?
- 2. The “Millions of Qubits” Problem: Why Progress Stalled
- 3. Why Neutral Atoms? The Platform That Changed the Game
- 4. How Did They Shrink the Number? High-Rate Quantum Codes Explained
- 5. The Numbers: How Many Qubits and How Long?
- 6. Is Your Encryption in Danger? The Cryptography Question
- 7. Beyond Code-Breaking: What Else Can These Machines Do?
- 8. What Challenges Still Stand in the Way?
- 9. Conclusion: The Sleep of Reason Breeds Monsters
What Is Shor’s Algorithm — and Why Should You Care?
Back in 1994, mathematician Peter Shor showed something terrifying and beautiful at the same time. He proved that a quantum computer could factor enormous numbers — the kind that take classical computers billions of years — in a manageable timeframe.
Why does that matter to you? Because the security of nearly every encrypted message on the internet depends on the assumption that factoring huge numbers is impossibly hard. RSA encryption, used by banks, governments, and messaging apps worldwide, rests on this assumption. So does elliptic-curve cryptography (ECC), a more modern scheme used in cryptocurrency wallets and secure communications.
Shor’s algorithm shatters that assumption. It provides what computer scientists call a “superpolynomial speedup” — in plain terms, it doesn’t just do the job a little faster. It does it fundamentally faster. The kind of faster that turns “heat-death-of-the-universe” timescales into “over the weekend.”
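If you like to see the machinery, here is a tiny classical sketch of the number theory Shor's algorithm exploits. Factoring reduces to "order finding": given a base a, find the smallest r with a^r ≡ 1 (mod N). The brute-force loop below takes exponential time as N grows, and that single step is exactly what the quantum computer accelerates; everything else stays classical. (The function names are ours, purely for illustration.)

```python
# A toy, classical illustration of the number theory behind Shor's algorithm.
# The quantum computer's only job is the order-finding step; brute-forcing it
# like this takes exponential time for large N.
from math import gcd

def find_order(a: int, N: int) -> int:
    """Smallest r >= 1 with a**r % N == 1 (classical brute force)."""
    r, value = 1, a % N
    while value != 1:
        value = (value * a) % N
        r += 1
    return r

def shor_factor(N: int, a: int) -> tuple[int, int] | None:
    """Try to split N using the order of the base a."""
    if gcd(a, N) != 1:
        return gcd(a, N), N // gcd(a, N)  # lucky guess: a shares a factor
    r = find_order(a, N)
    if r % 2 == 1:
        return None  # odd order: pick a different base and retry
    x = pow(a, r // 2, N)
    if x == N - 1:
        return None  # trivial square root: retry with another base
    p, q = gcd(x - 1, N), gcd(x + 1, N)
    return (p, q) if p * q == N else None

print(shor_factor(15, 7))   # (3, 5): the order of 7 mod 15 is 4
print(shor_factor(21, 2))   # (7, 3): the order of 2 mod 21 is 6
```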
The catch? For over three decades, nobody could build a quantum computer big enough to run it on real-world key sizes. The numbers required seemed absurd — millions upon millions of physical qubits. Until now.
The “Millions of Qubits” Problem: Why Progress Stalled
Here’s the thing about quantum computers that doesn’t get enough attention: qubits are fragile. They lose their quantum state — a process called decoherence — if you so much as look at them funny. A stray photon, a tiny vibration, a thermal fluctuation: any of these can corrupt a quantum calculation.
The solution? Quantum error correction. The idea, first developed by Shor himself in 1995, is to spread a single “logical” qubit across many “physical” qubits. If some of the physical qubits fail, the logical qubit survives. Brilliant in theory. Expensive in practice.
For years, the standard approach used something called the surface code. It works, but it’s a gas-guzzler. You might need around 1,000 physical qubits to protect a single logical qubit. And Shor’s algorithm for RSA-2048 (the most common key size) needs roughly 2,000 logical qubits. Multiply those together and you get about two million physical qubits.
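Here is that back-of-the-envelope math as a two-line sanity check, using the rough figures above:

```python
# Rough surface-code budget for RSA-2048, using the ballpark figures above.
logical_qubits = 2_000            # logical qubits Shor's algorithm needs
physical_per_logical = 1_000      # surface-code overhead per logical qubit
total = logical_qubits * physical_per_logical
print(f"{total:,} physical qubits")  # 2,000,000 physical qubits
```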
Today’s best quantum processors top out at a few hundred physical qubits. The gap between what we have and what we need looked, until very recently, like a canyon.
Why Neutral Atoms? The Platform That Changed the Game
Several technologies compete to build a scalable quantum computer: superconducting circuits (favored by Google and IBM), trapped ions, photonics, and neutral atoms. The Caltech-Oratomic team bet on neutral atoms — and that bet is paying off.
How Does a Neutral-Atom Quantum Computer Work?
Picture a grid of individual atoms — rubidium or ytterbium, for instance — each one held in place by a tiny focused laser beam called an optical tweezer. These atoms sit in long-lived “clock states” that can encode quantum information for relatively long periods.
When you want two qubits to interact, you excite them to high-energy Rydberg states, which create strong interactions over large distances. Between gate operations, you physically move the atoms by steering the laser beams. This gives you something no other platform offers at scale: reconfigurable, nonlocal connectivity.
Think of it like the difference between a printed circuit board (where connections are fixed) and a room full of robots that can rearrange themselves on command. That flexibility turns out to be the key ingredient for the high-rate error-correcting codes that make 10,000 qubits possible.
Already Demonstrated
This isn’t science fiction. In 2025, Manuel Endres’s lab at Caltech demonstrated a tweezer array trapping 6,100 highly coherent atomic qubits. Separately, experiments have shown fault-tolerant quantum operations on up to 500 qubits, with error rates roughly a factor of two below the error-correction threshold. The building blocks already exist.
How Did They Shrink the Number? High-Rate Quantum Codes Explained
This is where the magic happens — if you’ll pardon the pun. (In quantum computing, “magic state” is literally a technical term.)
The old approach — surface codes — encodes one logical qubit per code block, at an encoding rate of roughly 4%. That means 96% of your physical qubits are just there to babysit errors.
The new approach uses quantum low-density parity-check (qLDPC) codes, specifically a family called lifted-product codes. These codes are denser. Instead of one logical qubit per block, they pack hundreds of logical qubits into a single block, achieving encoding rates of approximately 30% — nearly 8 times better.
Let’s put some concrete numbers on the table:
| Code Name | Physical Qubits (n) | Logical Qubits (k) | Distance (d) | Encoding Rate | Role |
|---|---|---|---|---|---|
| lp16_{3,7} | 2,610 | 744 | ≤ 16 | ~29% | Memory |
| lp20_{3,7} | 4,350 | 1,224 | ≤ 20 | ~28% | Memory |
| lp24_{3,7} | 5,278 | 1,480 | ≤ 24 | ~28% | Memory |
| bb18 | 248 | 10 | ≤ 18 | ~4% | Processor / Factory |
| lp20_{3,5} | 1,122 | 148 | ≤ 20 | ~13% | Processor |
The takeaway: that largest memory code (lp24_{3,7}) packs 1,480 logical qubits into just 5,278 physical qubits. With a surface code, you’d need roughly 852,480 physical qubits to protect the same number of logical qubits at the same distance. That’s a 161× reduction.
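A quick sanity check on those numbers. One assumption here is ours: the surface-code figure counts about d² physical qubits per logical qubit at distance d = 24, which is what makes the arithmetic land on 852,480.

```python
# Reproducing the article's comparison at code distance d = 24.
# Assumption (ours): the surface-code figure counts ~d**2 physical
# qubits per logical qubit, matching 852,480 = 1,480 * 24**2.
d = 24
logical = 1_480                      # logical qubits in lp24_{3,7}
lp_physical = 5_278                  # physical qubits in lp24_{3,7}
surface_physical = logical * d**2    # surface-code equivalent
print(f"{surface_physical:,}")                    # 852,480
print(round(surface_physical / lp_physical, 1))   # ~161.5x reduction
```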
This is the breakthrough that makes the 10,000-qubit target possible.
The Four-Zone Architecture
The proposed quantum computer has four functional zones — like departments in a factory, each with a specific job:
Memory Zone — Stores logical quantum information during computation. Uses the high-rate lifted-product codes.
Processor Zone — Where active computation happens. Uses smaller codes (the bb18 or lp20_{3,5}).
Operation Zone — Contains ancillary qubits for performing logical measurements (called Pauli Product Measurements, or PPMs). These are the “hands” that read, write, and edit quantum data.
Resource Zone — Generates “magic states” — special quantum states needed to perform non-Clifford gates like the Toffoli gate, which are essential for universal computation.
Data flows between these zones through a process called state teleportation: logical qubits are teleported from memory to the processor, operated on, and teleported back. The whole sequence is orchestrated by a compilation strategy that breaks the algorithm into small sub-circuits, each small enough to fit inside the processor block.
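To make the choreography concrete, here is a deliberately toy Python sketch of that memory-to-processor loop. Every name and capacity in it is ours, chosen for illustration; the real compilation strategy in the paper is far more sophisticated.

```python
# A toy model of the four-zone flow: sub-circuits are carved off the
# algorithm, the logical qubits they touch are teleported from memory
# into the processor, operated on, and teleported back. All names and
# capacities here are illustrative, not taken from the paper.
from dataclasses import dataclass

@dataclass
class SubCircuit:
    qubits: list[int]   # logical qubits this piece of the algorithm touches
    gates: int          # how many logical operations it contains

PROCESSOR_CAPACITY = 148  # e.g. the logical qubits of one lp20_{3,5} block

def run(program: list[SubCircuit]) -> None:
    for i, sub in enumerate(program):
        assert len(sub.qubits) <= PROCESSOR_CAPACITY, "sub-circuit too wide"
        # 1. Teleport the needed logical qubits out of the memory zone.
        print(f"[{i}] teleport {len(sub.qubits)} qubits memory -> processor")
        # 2. Execute gates; non-Clifford gates consume magic states
        #    produced in the resource zone.
        print(f"[{i}] execute {sub.gates} gates (consuming magic states)")
        # 3. Teleport results back so the memory codes keep protecting them.
        print(f"[{i}] teleport {len(sub.qubits)} qubits processor -> memory")

run([SubCircuit(qubits=list(range(64)), gates=900),
     SubCircuit(qubits=list(range(64, 128)), gates=1_200)])
```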
The Numbers: How Many Qubits and How Long?
Let’s get specific. The paper explores three architecture variants for each cryptographic target. We’ve assembled the key figures below.
| Target | Architecture | Physical Qubits | Runtime |
|---|---|---|---|
| ECC-256 | Space-efficient | ~9,739 | ~1,000 days |
| ECC-256 | Balanced | ~11,961 | ~264 days |
| ECC-256 | Time-efficient (P=130) | ~26,000 | ~10 days |
| RSA-2048 | Space-efficient | ~11,033 | ~43,000 days |
| RSA-2048 | Balanced | ~13,255 | ~10,000 days |
| RSA-2048 | Time-efficient (P=1,160) | ~102,000 | ~97 days |
A few observations jump out.
ECC-256 is far more vulnerable than RSA-2048. Elliptic-curve keys are shorter, which means the quantum circuit needed to crack them is much shallower. With the fastest architecture and 26,000 qubits, an attacker could break a 256-bit elliptic-curve key in about 10 days. That’s the same encryption protecting many cryptocurrency wallets today.
RSA-2048 is harder — but not impossible. The time-efficient architecture with ~102,000 qubits could factor a 2048-bit RSA key in about 97 days. Still a staggering threat, even if it’s slower.
There’s a clear trade-off between space and time. Fewer qubits means longer runtimes. More qubits enable parallel processing, which slashes the computation time.
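One way to quantify the trade-off is "spacetime volume": the number of qubits multiplied by the runtime. Computing it for the RSA-2048 rows above (our calculation, not the paper's) is revealing:

```python
# Qubits x days ("spacetime volume") for the RSA-2048 rows in the table
# above -- a rough way to see what the speed-up actually costs.
variants = {
    "space-efficient": (11_033, 43_000),
    "balanced":        (13_255, 10_000),
    "time-efficient":  (102_000, 97),
}
for name, (qubits, days) in variants.items():
    print(f"{name:>16}: {qubits * days / 1e6:8.1f} million qubit-days")
# space-efficient:    474.4 million qubit-days
#        balanced:    132.6 million qubit-days
#  time-efficient:      9.9 million qubit-days
```

By these estimates, the bigger machine is cheaper even on the combined measure: the extra qubits buy more parallelism than they cost.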
A Five-Order-of-Magnitude Improvement
To put this in perspective: a landmark resource estimate for running Shor’s algorithm, published around 2012, called for roughly one billion physical qubits. In the years since, improvements in error-correcting codes, logical operations, and algorithms have cut that number by five orders of magnitude. The graph of qubit requirements over time looks like a cliff.
⚡ The Encoding Advantage in One Formula
For a lifted-product code built from a seed matrix of dimensions r_A × n_A over a polynomial ring of order ℓ, the code parameters are:
[[n, k, d]] where n = (r_A² + n_A²) · ℓ and k ≥ (n_A − r_A)² · ℓ
For the lp24_{3,7} code: r_A = 3, n_A = 7, ℓ = 91. That gives n = (9 + 49) × 91 = 5,278 physical qubits encoding k ≥ (7 − 3)² × 91 = 1,456 logical qubits (the actual code achieves 1,480). The encoding rate k/n ≈ 28%, radically better than the ~1% of surface codes at equivalent scale.
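You can check those formulas against the code table yourself. In the snippet below, ℓ = 91 comes from the text; the ℓ values for the two smaller codes are our inference from n = (r_A² + n_A²) · ℓ.

```python
# The lifted-product parameter formulas from the box above, checked
# against the code table. ell = 45 and 75 are inferred (ours) from
# n = (rA**2 + nA**2) * ell; ell = 91 is given in the text.
def lifted_product_params(rA: int, nA: int, ell: int) -> tuple[int, int]:
    n = (rA**2 + nA**2) * ell        # physical qubits
    k_min = (nA - rA) ** 2 * ell     # guaranteed logical qubits (k >= this)
    return n, k_min

for name, (rA, nA, ell) in {
    "lp16_{3,7}": (3, 7, 45),
    "lp20_{3,7}": (3, 7, 75),
    "lp24_{3,7}": (3, 7, 91),
}.items():
    n, k_min = lifted_product_params(rA, nA, ell)
    print(f"{name}: n = {n:,}, k >= {k_min:,}")
# lp16_{3,7}: n = 2,610, k >= 720
# lp20_{3,7}: n = 4,350, k >= 1,200
# lp24_{3,7}: n = 5,278, k >= 1,456
```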
Is Your Encryption in Danger? The Cryptography Question
Let’s be direct. If a neutral-atom quantum computer with ~26,000 qubits can break ECC-256 in 10 days, and experimental systems already trap over 6,000 qubits, the window for upgrading our cryptographic infrastructure is narrowing fast.
The paper’s authors themselves stress this point. They write that their analysis “underscores the importance of ongoing efforts to transition widely-deployed cryptographic systems to post-quantum standards designed to be secure against quantum attacks.”
The U.S. National Institute of Standards and Technology (NIST) finalized three post-quantum cryptographic standards in 2024: FIPS 203, 204, and 205. These rest on mathematical problems, such as the lattice problems behind FIPS 203 and 204, that quantum computers are not known to solve efficiently. Governments, banks, and tech companies around the world are already migrating. But migration takes time: years, sometimes decades, when legacy systems are involved.
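If you write software, migration is more approachable than it sounds. Here is a minimal sketch of a post-quantum key exchange using the open-source liboqs-python bindings; we are assuming a recent liboqs build that exposes the NIST name ML-KEM-768 (older builds used Kyber768), so treat this as illustrative rather than production guidance.

```python
# Illustrative post-quantum key encapsulation (KEM) round trip.
# Assumes liboqs-python is installed and the build supports "ML-KEM-768";
# older liboqs versions expose the same scheme as "Kyber768".
import oqs

ALG = "ML-KEM-768"  # FIPS 203 (ML-KEM), the lattice-based NIST standard

with oqs.KeyEncapsulation(ALG) as receiver:
    public_key = receiver.generate_keypair()    # receiver publishes this
    with oqs.KeyEncapsulation(ALG) as sender:
        # Sender derives a shared secret plus a ciphertext for the receiver.
        ciphertext, secret_sender = sender.encap_secret(public_key)
    # Receiver recovers the same shared secret from the ciphertext.
    secret_receiver = receiver.decap_secret(ciphertext)

assert secret_sender == secret_receiver  # both sides now share a key
```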
The honest truth? We don’t yet have a working machine that can break real-world encryption. But the distance between “theoretical possibility” and “engineering reality” just got a lot shorter. And remember: intelligence agencies don’t wait for public breakthroughs. They plan ahead. If you’re responsible for long-term data security — medical records, defense secrets, financial archives — the time to act is now.
Beyond Code-Breaking: What Else Can These Machines Do?
It would be a shame to reduce this breakthrough to a cryptography story. Fault-tolerant quantum computers executing millions of gates on thousands of logical qubits won’t just break codes. They’ll build things.
Drug discovery. Molecules obey quantum mechanics. Simulating how a candidate drug interacts with a protein target on a classical computer is brutally expensive — often impossible for large molecules. A fault-tolerant quantum computer could simulate these interactions directly, accelerating the development of life-saving medications.
Materials science. High-temperature superconductors, better battery materials, efficient catalysts for carbon capture — all of these involve quantum systems that classical computers struggle to model accurately.
Machine learning. Quantum-enhanced optimization and sampling could speed up training for certain classes of models, though the exact advantages are still a matter of active research.
Fundamental physics. Want to understand quantum gravity? Or the behavior of matter at extreme densities inside neutron stars? Quantum computers offer a way to simulate these conditions that classical machines simply can’t match.
Nature operates by quantum rules. We’ve been trying to understand it with classical tools. Building a machine that speaks nature’s own language — that’s the deeper promise here.
What Challenges Still Stand in the Way?
We don’t want to leave you with the impression that a 10,000-qubit machine is around the corner. The paper is clear: substantial engineering challenges remain. Let’s name them honestly.
Scaling laser power. High-fidelity entangling operations require intense laser beams. So far, these have been demonstrated on regions of a few hundred qubits. Reaching thousands will require either more laser power or smarter beam management — like dynamically rastering a single beam to increase its duty cycle from 0.1% to something much higher.
Faster readout. Current measurement speeds are in the millisecond range. The paper assumes a 1 ms stabilizer measurement cycle. Techniques exist to push this down to microseconds, which would slash runtimes by three orders of magnitude — from days to minutes. But integrating those techniques into a working architecture isn’t trivial.
Classical control overhead. Each surgery operation in the architecture requires dynamically constructing and programming a unique ancillary qubit configuration. That’s a serious classical computing challenge.
Continuous operation. Large-scale quantum computations take days. Running a system of tens of thousands of atoms continuously — refilling atoms that escape the tweezers, maintaining coherence, correcting errors in real time — remains an open engineering problem.
None of these are fundamental barriers. They’re engineering problems. Hard ones, yes — but the kind that money, talent, and time can solve. And money is flowing: the founding of Oratomic, a Caltech spinoff, signals that the transition from academic lab to industrial reality has begun.
Conclusion: The Sleep of Reason Breeds Monsters
Let’s step back and appreciate the full picture.
In 1994, Peter Shor proved that quantum computers could, in principle, break the encryption protecting our digital world. For three decades, the engineering requirements seemed so extreme — millions of qubits — that the threat felt abstract, a concern for future generations. This new paper from Cain, Xu, and collaborators at Caltech and Oratomic, published on March 31, 2026, rewrites that timeline. With optimized high-rate quantum error-correcting codes and the reconfigurable architecture of neutral-atom systems, the qubit requirement drops to as few as 10,000 — a number within striking distance of current experimental capabilities.
We’re not there yet. But the path is visible. Experimental systems already trap more than 6,100 atoms. Fault-tolerant operations below the error-correction threshold have been demonstrated on hundreds of qubits. The building blocks are on the table. What remains is assembly — and that work has started.
For cryptography, the message is urgent: post-quantum migration isn’t optional. For science, the message is exhilarating: a new class of machine is coming that can speak nature’s language, simulate molecules, probe the deepest questions in physics, and drive innovation we can barely imagine today.
At FreeAstroScience.com, we believe in one thing above all: never turn off your mind. Keep it awake, keep it questioning, keep it hungry. Because the sleep of reason breeds monsters — and the antidote is knowledge.
Come back often. We’ll keep translating the universe for you.
References & Sources
- Cain, M., Xu, Q., King, R., Picard, L. R. B., Levine, H., Endres, M., Preskill, J., Huang, H.-Y., & Bluvstein, D. (2026). “Shor’s algorithm is possible with as few as 10,000 reconfigurable atomic qubits.” arXiv:2603.28627v1. arxiv.org/abs/2603.28627
- Shor, P. W. (1997). “Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer.” SIAM J. Comput., 26(5), 1484–1509.
- Manetsch, H. J. et al. (2025). “A tweezer array with 6,100 highly coherent atomic qubits.” Nature, 647, 60–67.
- Bluvstein, D. et al. (2026). “A fault-tolerant neutral-atom architecture for universal quantum computation.” Nature, 649, 39–46.
- Gidney, C. (2025). “How to factor 2048 bit RSA integers with less than a million noisy qubits.” arXiv:2505.15917.
- Babbush, R. et al. (2026). “Securing elliptic curve cryptocurrencies against quantum vulnerabilities: Resource estimates and mitigations.”
- NIST (2024). Post-Quantum Cryptography Standards: FIPS 203, FIPS 204, FIPS 205. nist.gov/pqcrypto
- Meloni, D. (2026). “Computer quantistico: addio milioni di qubit, il traguardo è più vicino” [“Quantum computers: goodbye to millions of qubits, the milestone is closer”]. Reccom Network, 10 April 2026.
Written for you by FreeAstroScience.com — where we explain complex scientific principles in simple terms, because an awake mind is the best tool the universe ever built.
