Like the obscure mathematics behind quantum computing, some of the expectations surrounding this still-impractical technology can make you dizzy. Squint out the window of a flight into SFO right now and you’ll see a haze of quantum hype floating over Silicon Valley. But the enormous potential of quantum computing is undeniable, and the hardware needed to harness it is advancing rapidly. If there was ever a perfect time to get a grip on quantum computing, it’s now. Say “Schrödinger superposition” three times fast and we can dig in.

Explaining the history of quantum computing

The history of quantum computing begins in the early 20th century, when physicists began to feel they had lost their grip on reality.

First, conventional explanations of the subatomic world turned out to be incomplete. Electrons and other particles didn’t just spin around neatly, like Newtonian billiard balls; sometimes they acted like waves instead. Quantum mechanics emerged to explain such oddities, but it introduced troubling questions of its own. To take just one eyebrow-raising example, this new mathematics implied that the physical properties of the subatomic world, such as the position of an electron, existed only as *probabilities* before they were observed. Before you measure an electron’s location, it is neither here nor there, but has some probability of being everywhere. You can think of it like flipping a quarter through the air. Before landing, the quarter is neither heads nor tails, but some probability of both.

If you find this confusing, you’re in good company. A year before receiving the Nobel Prize for contributions to quantum theory, Caltech’s Richard Feynman remarked that “nobody understands quantum mechanics.” The way we experience the world simply isn’t compatible with it. Yet some people have understood it well enough to redefine our understanding of the universe. And in the 1980s, a few of them, including Feynman, began to wonder whether quantum phenomena, such as the probabilistic existence of subatomic particles, could be used to process information. The basic theory, or blueprint, for quantum computers that took shape in the 1980s and 1990s still guides Google and other companies working on the technology.

Before we plunge into the murky depths of quantum computing 101, we should refresh our understanding of plain old computers. As you know, smartwatches, iPhones, and the world’s fastest supercomputer all do fundamentally the same thing: they perform calculations by encoding information into digital bits, aka 0s and 1s. A computer might turn a voltage in a circuit on and off to represent, say, 1s and 0s.
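The idea that everything a classical computer handles boils down to 0s and 1s can be made concrete in a few lines of Python. This is just a toy illustration of bit encoding, not how hardware physically stores data:

```python
# Every character a classical computer stores is ultimately a pattern
# of bits. Here we print the 8-bit binary code for each letter.
for ch in "qubit":
    bits = format(ord(ch), "08b")  # 8-bit binary for the character's code point
    print(ch, "->", bits)
```

Running this shows, for instance, that the letter `q` is stored as the bit pattern `01110001`.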

Quantum computers also do calculations using bits. After all, we want them to connect to our existing data and computers. But quantum bits, or qubits, have unique and powerful properties that allow a group of them to do much more than an equivalent number of ordinary bits.

Qubits can be built in a variety of ways, but they all represent digital 0s and 1s using the quantum properties of something that can be controlled electronically. Popular examples – at least among a very select portion of humanity – include superconducting circuits and individual atoms suspended in electromagnetic fields. The magic of quantum computing is that this arrangement lets qubits do more than just switch between 0 and 1. Treat them right and they can flip into a mysterious extra mode called superposition.

You may have heard that a qubit in superposition is *both* 0 and 1 at the same time. That’s not entirely true, and also not entirely false. A qubit in superposition has some *probability* of being 1 or 0, but it does not represent either state, just as our quarter spinning through the air is neither heads nor tails, but some probability of both. In the simplified and, dare we say, idealized world of this explainer, the important thing to know is that the mathematics of superposition describes the probability of discovering a 0 or a 1 when a qubit is read out. The operation of reading a qubit’s value knocks it out of that mix of probabilities into a single, definite state, analogous to the quarter landing on a table with one side facing up. A quantum computer can use a collection of qubits in superposition to play out different possible paths through a calculation. If done correctly, the pathways leading to wrong answers cancel out, leaving the correct answer when the qubits are read out as 0s and 1s.
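The probability-then-collapse behavior described above can be sketched in a few lines of Python. This is a deliberately minimal model: real qubits carry complex-valued amplitudes, and the probability of each readout is the squared magnitude of the corresponding amplitude (the Born rule); here we keep the amplitudes real for simplicity:

```python
import random

# A toy qubit: two amplitudes whose squared magnitudes give the
# probabilities of reading out 0 or 1. An equal superposition is
# the "spinning quarter" from the analogy above.
amp0, amp1 = 2 ** -0.5, 2 ** -0.5

p0, p1 = amp0 ** 2, amp1 ** 2        # probability = |amplitude|^2
assert abs(p0 + p1 - 1.0) < 1e-9     # probabilities must sum to 1

# "Reading" the qubit collapses it into a single definite state.
outcome = random.choices([0, 1], weights=[p0, p1])[0]
amp0, amp1 = (1.0, 0.0) if outcome == 0 else (0.0, 1.0)
print("measured:", outcome)          # now definitely 0 or 1, never "both"
```

Each run prints 0 or 1 with equal probability, and afterward the amplitudes describe a single definite state, mirroring how measurement knocks a qubit out of superposition.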

For some problems that take conventional computers a very long time, this allows a quantum computer to find a solution in far fewer steps than a conventional computer would need. Grover’s algorithm, a famous quantum search algorithm, could find you in a phone book with 100 million names in just 10,000 operations. If a classical search algorithm simply plowed through all the listings to find you, it would take an average of 50 million operations. For Grover’s and certain other quantum algorithms, the larger the initial problem, or phone book, the further behind in the digital dust a conventional computer is left.

The reason we don’t have useful quantum computers today is that qubits are extremely finicky. The quantum effects they must harness are very delicate, and stray heat or noise can flip 0s and 1s or destroy a crucial superposition. Qubits must be carefully shielded and operated at very cold temperatures – sometimes just a fraction of a degree above absolute zero. A major area of research involves developing algorithms that let a quantum computer correct its own errors caused by qubit perturbations. So far, these algorithms have been difficult to implement because they consume so much of a quantum processor’s power that virtually nothing is left over for solving problems. Some researchers, particularly at Microsoft, hope to sidestep this problem by developing a type of qubit made from clusters of electrons, known as a topological qubit. Physicists predict that topological qubits will be more robust to environmental noise and thus less error-prone, but so far they have struggled to make even one. After claiming a hardware breakthrough in 2018, Microsoft researchers retracted their work in 2021 after other scientists discovered experimental errors.

Still, companies have shown promising capabilities with their limited machines. In 2019, Google used a 53-qubit quantum computer to generate numbers that follow a particular mathematical pattern faster than a supercomputer could. The demonstration kicked off a series of so-called quantum advantage experiments, with an academic group in China announcing its own demonstration in 2020 and the Canadian startup Xanadu another in 2022. (These experiments were initially dubbed “quantum supremacy,” but many researchers now prefer “quantum advantage” to avoid echoes of “white supremacy.”) Each claim of quantum advantage has been challenged by researchers who develop better classical algorithms that let ordinary computers work the problems faster, in a race that moves both quantum and classical computing forward.

Meanwhile, researchers have successfully modeled small molecules using a handful of qubits. Those simulations don’t yet do anything beyond the reach of classical computers, but they could if scaled up, which might help discover new chemicals and materials. None of these demonstrations has immediate commercial value, but they have strengthened confidence and investment in quantum computing. After tantalizing computer scientists for 30 years, practical quantum computing may not be exactly close, but it has begun to feel a lot closer.