Quantum computers will need classical supercomputers as well
Recent announcements by NVIDIA and AMD show a focus on scaling up error correction
NVIDIA’s announcement last week of NVQLink, a high-speed connection between GPUs and quantum computers, generated a lot of interest. It came on the back of IBM announcing the previous week that it had interfaced its quantum computer with AMD FPGA chips. Although you can disregard the usual pundits’ claims that it heralded the arrival of some sort of golden quantum age (I even saw one person claim it was a “ChatGPT moment” for quantum computing), it was actually an interesting development.
What it showed is that error correction will be one of the next steps needed to get towards a large-scale, useful quantum computer. First, we saw increasing numbers of qubits taking the headlines. More recently, we have seen announcements that focus on accuracy, with “four 9s” or 99.99% accuracy per operation seen as the new benchmark. However, to run calculations that need trillions or more operations, error rates need to be much, much lower. This is expected to require error correction - combining large numbers of physical qubits, and detecting and correcting errors as they happen, to create a “logical qubit” with much higher accuracy.
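To make the detect-and-correct idea concrete, here is a toy classical simulation of the simplest error-correcting code, the 3-qubit bit-flip repetition code. It is an illustrative sketch only - the error model, code size and decoding rule are chosen for simplicity, and real schemes such as the surface code are far more involved - but it shows how redundancy plus parity checks push the logical error rate well below the physical one.

```python
# Toy classical simulation of a 3-qubit bit-flip repetition code:
# one logical bit is stored redundantly in three physical bits, and
# the decoder uses parity checks ("syndromes") to locate and undo a
# single bit-flip error. Real quantum codes work on the same
# detect-and-correct principle, at far larger scale.
import random

def encode(logical_bit):
    """Copy the logical bit into three physical bits."""
    return [logical_bit] * 3

def apply_noise(physical, flip_prob):
    """Flip each physical bit independently with probability flip_prob."""
    return [b ^ (random.random() < flip_prob) for b in physical]

def syndromes(physical):
    """Parity checks between neighbouring bits; nonzero parity flags an error."""
    return (physical[0] ^ physical[1], physical[1] ^ physical[2])

def correct(physical):
    """Use the syndrome pattern to identify and flip the faulty bit."""
    s01, s12 = syndromes(physical)
    corrected = physical[:]
    if s01 and not s12:
        corrected[0] ^= 1
    elif s01 and s12:
        corrected[1] ^= 1
    elif s12 and not s01:
        corrected[2] ^= 1
    return corrected

def decode(physical):
    """Majority vote recovers the logical bit."""
    return int(sum(physical) >= 2)

# The logical error rate is much lower than the physical error rate,
# because at least two bits must fail in the same round for decoding
# to go wrong.
trials, p = 100_000, 0.01
failures = sum(
    decode(correct(apply_noise(encode(0), p))) != 0 for _ in range(trials)
)
print(f"physical error rate: {p}, logical error rate: {failures / trials:.5f}")
```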
Making this work will require collecting large amounts of data while the qubits are executing an operation, processing that data to identify which errors have occurred, determining how to correct them, and feeding the result back as control signals to be applied before the next clock cycle. This adds up to a very high data rate to be processed - estimated at around 10-100 TB/s for a 1 million qubit quantum computer. For context, 100 TB/s is roughly equivalent to the total Netflix streaming rate for all customers globally.
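As a rough sanity check on that figure, the arithmetic below shows how a million physical qubits can add up to tens of terabytes per second. The cycle time and per-qubit data volume are illustrative assumptions, not published specifications.

```python
# Back-of-envelope check on the quoted 10-100 TB/s figure. The numbers
# below (cycle time, bytes of readout data per qubit per cycle) are
# assumed for illustration only.
qubits = 1_000_000          # physical qubits in the target machine
cycle_time_s = 1e-6         # ~1 microsecond per syndrome-extraction cycle
bytes_per_qubit = 10        # assumed readout data per qubit per cycle
                            # (raw digitised signals, before discrimination,
                            #  carry far more than the final syndrome bit)

rate_bytes_per_s = qubits * bytes_per_qubit / cycle_time_s
print(f"{rate_bytes_per_s / 1e12:.0f} TB/s")   # -> 10 TB/s with these inputs
# Scaling bytes_per_qubit from 10 to 100 spans the quoted 10-100 TB/s range.
```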
It is processing this data that will need a large amount of classical compute power. One option is to build dedicated hardware to achieve this, which is what a company like Riverlane is doing. This is likely to be the most efficient in resource usage, but custom hardware will be expensive. Therefore, what we are now seeing is NVIDIA and AMD positioning to show that their off-the-shelf chips are up to the job of meeting the demanding latency and throughput requirements.
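The sketch below illustrates one of those requirements: a decoder that runs even slightly slower than the syndrome-extraction cycle accumulates a growing backlog, so its corrections arrive too late to be useful. The cycle and decode times are arbitrary assumed values chosen to make the point.

```python
# Minimal sketch of why decoder throughput matters: if the classical
# decoder processes syndrome rounds more slowly than the quantum
# hardware produces them, unprocessed rounds pile up and the feedback
# falls ever further behind. Timing values are assumptions.
def backlog_after(rounds, cycle_time_us, decode_time_us):
    """Queue length after `rounds` syndrome rounds for a single decoder."""
    backlog = 0.0
    for _ in range(rounds):
        backlog += 1                               # one new round per cycle
        backlog -= cycle_time_us / decode_time_us  # rounds decoded per cycle
        backlog = max(backlog, 0.0)
    return backlog

# Decoder keeps up: backlog stays at zero.
print(backlog_after(rounds=10_000, cycle_time_us=1.0, decode_time_us=0.8))
# Decoder 25% too slow: ~2,000 rounds queue up within 10 ms.
print(backlog_after(rounds=10_000, cycle_time_us=1.0, decode_time_us=1.25))
```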
So, far from quantum computers making classical computers obsolete, it turns out that a large quantum computer is likely to need its own large companion classical computer, just to keep it running with error rates low enough to be useful. And that’s before considering the classical computing needed to run the “hybrid algorithms” that will likely be required to get useful results from a quantum computer…
Finally, it’s also notable that NVIDIA have set up an open interface to work with a large group of labs and quantum hardware builders - recognising that there is no obvious leading qubit hardware modality - but hopefully making it easy for whoever does manage to successfully scale their system to base it around NVIDIA GPUs for the classical data processing that will be needed.

AMD's FPGA integration with IBM's quantum systems is a strategically smart play that most people are overlooking. While everyone focuses on NVIDIA's GPU dominance in AI, AMD is quietly positioning itself in quantum error correction, where ultra-low latency matters more than raw compute. The fact that quantum computers will need classical supercomputers for error correction opens up a massive adjacent market that AMD can address with its existing FPGA and Instinct portfolios.