History of quantum computing: 12 key moments that shaped the future of computers

[Image: A close-up of a quantum computer. Credit: Getty Images/Bartlomiej Wroblewski]

Computers that exploit the weird rules of quantum mechanics may soon crack problems that are unsolvable using existing technology. Today’s machines are still far from achieving that, but the field of quantum computing has made dramatic progress since its inception.

Quantum computing has gone from an academic curiosity to a multi-billion-dollar industry in less than half a century and shows no signs of stopping. Here are 12 of the most important milestones on that journey.

1980: The quantum computer is born

By the 1970s, scientists had begun thinking about potential crossovers between quantum mechanics and the younger field of information theory. But it was American physicist Paul Benioff who crystallized many of these ideas when he published the first-ever description of a quantum computer. He proposed a quantum version of a "Turing machine" — a theoretical model of a computer, devised by renowned British computer scientist Alan Turing, that is capable of implementing any algorithm. By showing that such a device could be described using the equations of quantum mechanics, Benioff laid the foundations for the new field of quantum computing.

1981: Richard Feynman popularizes quantum computing

Both Benioff and legendary physicist Richard Feynman gave talks on quantum computing at the first Physics of Computation Conference in 1981. Feynman’s keynote speech was on the topic of using computers to simulate physics. He pointed out that because the physical world is quantum in nature, simulating it exactly requires computers that similarly operate based on the rules of quantum mechanics. He introduced the concept of a "quantum simulator," which, unlike a Turing machine, cannot run arbitrary programs but can be used to simulate quantum mechanical phenomena. The talk is often credited with kick-starting interest in quantum computing as a discipline.

1985: The "universal quantum computer"

One of the foundational concepts in computer science is the idea of the universal Turing machine. Introduced by its namesake in 1936, this is a particular kind of Turing machine that can simulate the behavior of any other Turing machine, allowing it to solve any problem that is computable. However, British physicist David Deutsch, a pioneer of the quantum theory of computation, pointed out in a 1985 paper that because the universal computer described by Turing relied on classical physics, it would be unable to simulate a quantum computer. He reformulated Turing’s work using quantum mechanics to devise a "universal quantum computer," which is capable of simulating any physical process.

1994: First killer use case for quantum computers

Despite the theoretical promise of quantum computers, researchers had yet to find clear practical applications for the technology. American mathematician Peter Shor became the first to do so when he introduced a quantum algorithm that could efficiently factorize large numbers. Factorization is the process of breaking a number down into the smaller prime numbers that multiply together to produce it; 15, for example, factors into 3 and 5. Finding the factors becomes increasingly difficult as numbers get larger, and that difficulty is the basis for many leading encryption schemes. Shor’s algorithm, however, can factor numbers exponentially faster than the best known classical methods, raising fears that quantum computers could be used to crack modern encryption and spurring the development of post-quantum cryptography.
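
To get a feel for why factoring protects encryption, note that multiplying two primes is trivial but recovering them from their product is slow for classical methods. Below is a minimal, purely classical Python sketch of trial division (illustrative only, and not Shor’s algorithm, which requires quantum hardware); its running time grows rapidly with the size of the number being factored:

```python
def trial_division(n):
    """Return the prime factors of n by testing divisors up to sqrt(n).

    The number of candidate divisors grows exponentially with the number
    of digits in n, which is why factoring very large numbers is
    impractical for classical computers -- and why RSA-style encryption
    resists classical attacks but not Shor's algorithm.
    """
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(trial_division(15))    # [3, 5] -- the first number factored on quantum hardware
print(trial_division(2021))  # [43, 47]
```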

1996: Quantum computing takes on search

It didn’t take long for another promising application to appear. Bell Labs computer scientist Lov Grover proposed a quantum algorithm for unstructured search, which refers to looking for information in databases with no obvious system of organization. This is like looking for the proverbial needle in a haystack and is a common problem in computer science, but even the best classical search algorithms can be slow when faced with large amounts of data. The Grover algorithm, as it has become known, exploits the quantum phenomenon of superposition to find an item in roughly the square root of the number of steps a classical search needs — a quadratic speedup.
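
As a rough illustration of that speedup, the short classical simulation below (a sketch of the amplitude-amplification idea behind Grover’s algorithm, not code from his paper; the database size and marked index are made up) finds one marked entry among 64 after about six iterations, where a naive linear search would need roughly 32 checks on average:

```python
import numpy as np

# Classical simulation of Grover-style amplitude amplification for a
# "database" of N items containing one marked entry. About pi/4 * sqrt(N)
# iterations suffice, versus ~N/2 checks for a classical linear search.
N = 64                      # database size (the states of 6 qubits)
marked = 42                 # index of the "needle"

state = np.full(N, 1 / np.sqrt(N))   # uniform superposition over all indices

iterations = int(round(np.pi / 4 * np.sqrt(N)))
for _ in range(iterations):
    state[marked] *= -1                  # oracle: flip the sign of the marked item
    state = 2 * state.mean() - state     # diffusion: reflect amplitudes about their mean

print(f"{iterations} iterations, probability of measuring the marked item: "
      f"{state[marked] ** 2:.3f}")
```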

1998: First demonstration of a quantum algorithm

Dreaming up quantum algorithms on a blackboard is one thing, but actually implementing them on hardware had proved much harder. In 1998, a team led by IBM researcher Isaac Chuang made a breakthrough when they showed that they could run Grover’s algorithm on a computer featuring two qubits — the quantum equivalent of bits. Just three years later, Chuang also led the first implementation of Shor’s algorithm on quantum hardware, factoring the number 15 using a seven-qubit processor.

1999: The birth of the superconducting quantum computer

The fundamental building blocks of a quantum computer, known as qubits, can be implemented on a wide range of different physical systems. But in 1999, physicists at Japanese technology company NEC hit upon a design that would go on to become the most popular approach to building quantum computers today. In a paper in Nature, they showed that they could use superconducting circuits to create qubits, and that they could control these qubits electronically. Superconducting qubits are now used by many of the leading quantum computing companies, including Google and IBM.

2011: First commercial quantum computer released

Despite considerable progress, quantum computing was still primarily an academic discipline. The launch of the first commercially available quantum computer by Canadian company D-Wave in May 2011 heralded the start of the quantum computing industry. The start-up’s D-Wave One featured 128 superconducting qubits and cost roughly $10 million. However, the device wasn’t a universal quantum computer. It used an approach known as quantum annealing to solve a specific kind of optimization problem, and there was little evidence it provided any speed boost compared to classical approaches.

2016: IBM makes quantum computer available over the cloud

While several large technology companies were developing universal quantum computers in-house, most academics and aspiring quantum developers had no way to experiment with the technology. In May 2016, IBM made its five-qubit processor available over the cloud for the first time, allowing people from outside the company to run quantum computing jobs on its hardware. Within two weeks, more than 17,000 people had registered for the company’s IBM Quantum Experience service, giving many their first hands-on experience with a quantum computer.

2019: Google claims "quantum supremacy"

Despite theoretical promises of massive "speedup," nobody had yet demonstrated that a quantum processor could solve a problem faster than a classical computer. But in September 2019, news emerged that Google had used 53 qubits to perform a calculation in 200 seconds that it claimed would take a supercomputer roughly 10,000 years to complete. The problem in question had no practical use: Google’s processor simply performed random operations and then researchers calculated how long it would take to simulate this on a classical computer. But the result was hailed as the first example of "quantum supremacy," now more commonly referred to as "quantum advantage."

2022: A classical algorithm punctures supremacy claim

Google’s claim of quantum supremacy was met with skepticism from some corners, in particular from arch-rival IBM, which argued the speedup was overstated. A group from the Chinese Academy of Sciences and other institutions eventually showed that the claim was indeed inflated, devising a classical algorithm that could simulate Google’s quantum operations in just 15 hours on 512 GPU chips. They claimed that, with access to one of the world’s largest supercomputers, they could have done it in seconds. The news was a reminder that classical computing still has plenty of room for improvement, so quantum advantage is likely to remain a moving target.

2023: QuEra smashes record for most logical qubits

One of the biggest barriers for today’s quantum computers is that the underlying hardware is highly error-prone. Due to the quirks of quantum mechanics, fixing those errors is tricky, and it has long been known that it will take many physical qubits to create so-called "logical qubits" that are protected against errors and able to carry out operations reliably. In December 2023, Harvard researchers working with start-up QuEra smashed records by generating 48 logical qubits at once – 10 times more than anyone had previously achieved. The team was able to run algorithms on these logical qubits, marking a major milestone on the road to fault-tolerant quantum computing.
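
The sketch below is a loose classical analogy for how logical qubits gain their reliability: a 3-bit repetition code in which one logical bit is stored redundantly across three physical bits and recovered by majority vote. The function names and error rate are illustrative; real quantum error correction is far more involved, since quantum states cannot simply be copied and phase errors must also be handled.

```python
import random

# Classical 3-bit repetition code: store each logical bit in three physical
# bits and correct any single bit flip by majority vote. This mirrors (very
# loosely) how many noisy physical qubits combine into one reliable logical qubit.
def encode(bit):
    return [bit, bit, bit]

def apply_noise(bits, flip_prob=0.1):
    return [b ^ 1 if random.random() < flip_prob else b for b in bits]

def decode(bits):
    return 1 if sum(bits) >= 2 else 0   # majority vote

random.seed(0)
trials = 10_000
raw_errors = sum(apply_noise([0])[0] for _ in range(trials))
encoded_errors = sum(decode(apply_noise(encode(0))) for _ in range(trials))
print(f"error rate without encoding: {raw_errors / trials:.3f}")
print(f"error rate with 3-bit code:  {encoded_errors / trials:.3f}")
```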

Edd Gent
Live Science Contributor
Edd Gent is a British freelance science writer now living in India. His main interests are the wackier fringes of computer science, engineering, bioscience and science policy. Edd has a Bachelor of Arts degree in Politics and International Relations and is an NCTJ-qualified senior reporter. In his spare time he likes to go rock climbing and explore his newly adopted home.