How far away is a quantum computer that can break modern encryption?
A recent flurry of "research announcements" has left people a bit confused

After the 2025 International Year of Quantum, I joked that 2026 would be the year of “post-quantum” - and in particular post-quantum cryptography (PQC). This was because I expected (hoped?) it would be the year that organisations started seriously assessing their quantum risk and planning their migration to PQC. However, it has also turned out to be a year that has already seen a few hyped-up announcements on the topic.
Some announcements and publications from Google and a few quantum startups have generated lots of breathless excitement, lots of FUD (fear, uncertainty and doubt), and lots of bad takes - sadly common in this field. Therefore, in this article I’ll dive into what it might take for a quantum computer to be a real threat, and what’s really behind the various recent headlines.
However, here’s my personal bottom line up front:
The path to a cryptographically relevant quantum computer is uncertain, as it requires many scientific, engineering and mathematical breakthroughs. We are now seeing much of the research work (which by definition has uncertain outcomes) playing out in the media. These results are all theoretical predictions, some of which may eventually prove correct, and some of which will turn out to be wrong.
None of the recent research work meaningfully changes the likely timeline for this threat to manifest. Maybe, in retrospect, we might say one or two had a few months’ impact, but given the uncertainty and long timescales, it’s unlikely. No-one should be basing their response to this threat on a single specific date anyway.
Decision-makers should not let the inevitable noise and hype disturb their strategic planning (and the same applies to anyone using scare tactics to sell you solutions). Even if the threat is unlikely to manifest before 2035, migrating to PQC will be more complex and take longer than you expect - so now is the time to start assessing your risk and planning your response.
Background
For those unfamiliar with the topic, it is expected that sufficiently large and capable quantum computers will be able to break certain types of encryption that we currently rely on. The risk is limited to certain types of encryption, particularly those used for digital signatures and to protect data sent across networks. To address this risk, organisations should ignore other magical solutions and plan how to migrate vulnerable systems to use different encryption - post-quantum cryptography (PQC). PQC isn’t actually quantum at all, and can be run on existing hardware. If you’re looking for more practical advice, check out these articles on planning and implementing PQC.
What exactly is “Q-Day”?
Regular readers will know that my view is that there is no such thing as “Q-Day”. The idea that suddenly one day there will be a capability broadly accessible for anyone to routinely break certain types of encryption is just fanciful nonsense.
At first, someone will probably manage to demonstrate breaking modern day encryption as a one-off in the lab. Over time they will manage to get it running reliably and repeatably. However, it will take a lot of time and electricity for each encryption key that they are able to calculate (as we’ll discuss below).
Such a “cryptographically relevant quantum computer” will be big and expensive, so when people reproduce the capability it won’t be mass production - maybe 5-10 units might be built at first. Don’t forget that each such unit might still only be able to calculate 50-100 encryption keys per year. Given that organisations rotate keys regularly, you might need to run one of these machines for a year in order to be able to decrypt just a day’s worth of communications from one of your targets.
Nonetheless, at this point we might consider that the threat starts to become realistic. Even then, it would apply only to those who are not just the targets of persistent, well-resourced adversaries (such as major nation states), but the highest-value targets to that adversary.
Over time, we would expect further advances to increase the speed of such quantum computers, and reduce their size and cost - maybe a bit like the Moore’s Law advances we’ve seen in conventional computing over the last 30 years. However, it’s likely to be many more years before the capability becomes widely available and used.
What would it actually take?
The most common example of modern-day encryption that is potentially vulnerable to quantum computing is RSA encryption. This has been in common use for over 30 years, although as conventional computers have got better the key lengths have increased to reduce the risk of someone calculating encryption keys by brute-force guessing. Today the standard for “good” encryption is generally considered to be 2048-bit keys, or RSA2048, so this is normally used as the benchmark1.
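To make the target concrete, here is textbook RSA at toy scale. The primes and exponents below are purely illustrative - real RSA uses 2048-bit moduli and padding schemes - but they show the public/private key relationship that a quantum computer would attack:

```python
# Textbook RSA with tiny, illustrative numbers (real keys use
# 2048-bit moduli, i.e. primes hundreds of digits long).
p, q = 61, 53                  # secret primes
n = p * q                      # public modulus (3233)
phi = (p - 1) * (q - 1)        # Euler's totient (3120)
e = 17                         # public exponent, coprime to phi
d = pow(e, -1, phi)            # private exponent (2753), Python 3.8+

message = 65
ciphertext = pow(message, e, n)    # encrypt with the public key
recovered = pow(ciphertext, d, n)  # decrypt with the private key
print(ciphertext, recovered)       # 2790 65
```

The security of the scheme rests on the difficulty of recovering p and q (and hence d) from the public value n alone - which is exactly what a sufficiently capable quantum computer could do efficiently.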
Shor’s algorithm was first proposed in the 1990s as a way to calculate an RSA private key from the public key using a quantum computer. Estimating how good a quantum computer needs to be in order to run this algorithm is difficult, as it’s not just a matter of how many “qubits” you need in the computer. You also need to be able to run “gate operations” on those qubits - individual computation steps. Qubits are, by their nature, susceptible to noise and errors. Ensuring a good chance of getting the correct answer after a large number of operations requires “error correction” to detect and correct errors after each step - and this in turn requires extra qubits. The overall system is fragile, and needs to remain stable and coherent for the duration of the calculation, which as you’ll see can still be quite long.
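The only genuinely quantum step in Shor’s algorithm is finding the “order” (period) of a number modulo the key; the rest is classical post-processing. For a toy modulus we can brute-force that period classically and see how it yields the factors - a sketch for intuition only, since for a 2048-bit modulus this brute-force step is utterly infeasible and is precisely what the quantum computer would replace:

```python
from math import gcd

def classical_order(a: int, n: int) -> int:
    """Smallest r with a^r = 1 (mod n) - the step Shor's algorithm
    performs via quantum period-finding. Brute force works only for
    toy-sized n."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

n = 15          # toy modulus; a real target would be 2048 bits
a = 7           # base chosen coprime to n
r = classical_order(a, n)        # r = 4
# For even r (with a^(r/2) != -1 mod n), gcd reveals the factors:
f1 = gcd(pow(a, r // 2) - 1, n)  # gcd(48, 15) = 3
f2 = gcd(pow(a, r // 2) + 1, n)  # gcd(50, 15) = 5
print(r, f1, f2)                 # 4 3 5
```

Recovering the factors of n breaks RSA, because the private exponent can then be recomputed just as it was generated.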
While some scientists have been working on how to make large quantum computing hardware, others have been working on the best way to actually implement Shor’s algorithm on that hardware, and what this means in terms of how good the quantum computer needs to be. A paper in 2019 by Gidney and Ekerå estimated that it would take 20 million qubits and about eight hours of runtime to recover a single RSA2048 private key. In further work last year, Gidney proposed an alternative approach that reduced the number of qubits required to around 1 million, but increased the runtime to about a week. This estimate has been quite widely accepted, and I often use it as a benchmark in my talks on this topic. It’s also worth noting that if you assume each qubit requires a few watts of power, this implies a likely six-figure electricity bill for each private key calculation.
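As a back-of-envelope check on that electricity claim, using my own illustrative figures (the watts-per-qubit and price per kWh are assumptions, not numbers from the papers):

```python
# Rough energy cost of one RSA2048 key recovery under the
# ~1-million-qubit, ~1-week estimate. Power draw per qubit and
# the electricity price are assumed figures for illustration.
qubits = 1_000_000
watts_per_qubit = 5          # assumed all-in draw, incl. control systems
runtime_hours = 7 * 24       # about one week

energy_kwh = qubits * watts_per_qubit * runtime_hours / 1000
cost_usd = energy_kwh * 0.15          # assumed $0.15 per kWh
print(f"{energy_kwh:,.0f} kWh -> ${cost_usd:,.0f}")
```

Under these assumptions that is roughly 840,000 kWh, or a bill in the low six figures - per key.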
You should realise that these are all estimates based on theoretical work. They rely on various assumptions - in particular the intrinsic error rate of the qubits and gate operations, how well the error correction scheme works, and how fast the quantum computer operates, i.e. how many operations per second it can execute, including the overhead of detecting and correcting errors.

The gap between this scale and today’s quantum computers (often referred to as NISQ - Noisy Intermediate-Scale Quantum - devices) is vast. Today we have hundreds of qubits that can run for maybe a few minutes of total runtime. Different organisations are developing different types of qubits, at different stages of maturity, but no-one really knows the best approach to bridge the scaling gap to a “cryptographically relevant quantum computer” - one that can realistically challenge certain modern encryption methods.
The various hardware companies have roadmaps to scale up their systems, but they all rely, at various points, on major innovations and breakthroughs to overcome key limitations such as power, connectivity, noise, manufacturability etc. Much scientific and engineering research effort is going into developing and testing ideas to achieve this. Meanwhile, mathematicians and computer scientists are doing research into better algorithms and more efficient implementations, to see if they can reduce the demands on the hardware they will run them on.
Not all these research projects will succeed - such is the uncertain nature of research - but given the number of different avenues being pursued, there is a good chance of some sort of successful outcome. Achieving a cryptographically relevant quantum computer will require progress across all of these fields if it is to have any chance of happening in the next 10 years.
Recent (sort of) relevant announcements
This then brings us to the recent announcements, in particular around the estimates of the time and number of qubits required to break certain types of modern commonly used encryption. You may see discussion of various announcements by various companies such as Emergence Quantum, Oratomic, Google and others. Firstly it’s important to note these are all based on theoretical work only. They may have run some example small scale circuits on current hardware and extrapolated results from there, but no-one has come close to running anything at the predicted scale.
Also, all of these have been a combination of a press release and a “preprint” that has not yet been peer-reviewed. However, even if the theory all turns out to be correct, the critical point is that all the approaches involve trade-offs against other dimensions of the challenge. For example, the papers from Emergence and Oratomic rely on more complex hardware architectures that may be more difficult to realise in practice - if indeed it turns out to be possible at all.
In a future article I’ll dive into these papers in more detail for those interested, but for now we’ll concentrate on the overall “so what” from them. Importantly, no-one has actually demonstrated anything yet that comes close to breaking modern encryption, nor do they claim to be able to in the near future (although some of the news outlets were fooled - even some like New Scientist that should know better).
We need to remember what we are seeing is research (which has uncertain outcomes) and the scientific method playing out in public. No doubt commercial pressures are behind much of this - for example, Emergence and Oratomic are new companies building a brand and raising funds.
We will see more and more of this “noise” over the next few years, but the best thing business and security leaders can do is try to ignore it, and concentrate on strategic risk management and planning - and quantum is certainly one of those strategic risks.
Conclusion
There is no such thing as a singular Q-Day when all organisations are suddenly at risk of quantum threats. When the threat manifests depends on whether, and how quickly, various scientific, engineering and mathematical innovations happen to enable quantum computers to scale up to do anything useful - including code-breaking. For many years, such devices will be rare and expensive, so will need to be used sparingly. This means that your risk profile also depends on how high a priority target you might be to someone with such a device - not some magical single day.
The advice for business and security leaders is unchanged - the world isn’t going to end, no-one is going to break your encryption tomorrow. However, there is a long term risk to the core assumptions you rely on for key parts of your cyber security. Right now you probably don’t understand what is at risk and how much work it would take to mitigate it.
Ignore the fear-mongering and the nay-sayers, start now to assess your risks and plan what most urgently needs attention, and when to get started.
MDR Quantum helps organisations to understand and assess their quantum risk and to respond accordingly. Our services include executive briefings, policy development, risk assessment and PQC migration strategy and planning - please reach out if you’d like to learn more about how we may be able to help.
1. Note that the other commonly used form of encryption that could be quantum-vulnerable is elliptic curve cryptography (ECC), and there is some evidence that this could be easier to break as it uses smaller keys. However, most work to date has focussed on RSA, so that’s what we’ll discuss here.


