
Quantum Supremacy Window Closed by a New Algorithm

In a random circuit sampling experiment, researchers start with an array of qubits, or quantum bits, and manipulate them with randomly chosen quantum gates. Some gates act on pairs of qubits and can entangle them, meaning the pair shares a single quantum state that cannot be described separately. Applying layer after layer of gates drives the qubits into an increasingly complex entangled state.
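The process described above can be sketched with a toy statevector simulation. This is an illustrative sketch, not Google's actual circuit: the gate choices (random single-qubit rotations plus CNOTs) and the layer structure are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3  # number of qubits; the statevector holds 2**n complex amplitudes

def random_1q():
    """A randomly chosen single-qubit rotation (illustrative, not Haar-exact)."""
    theta, phi = rng.uniform(0, 2 * np.pi, size=2)
    return np.array([[np.cos(theta), -np.exp(1j * phi) * np.sin(theta)],
                     [np.exp(-1j * phi) * np.sin(theta), np.cos(theta)]])

def apply_1q(state, gate, q):
    """Apply a 2x2 gate to qubit q of an n-qubit statevector."""
    psi = np.moveaxis(state.reshape([2] * n), q, 0)
    psi = np.tensordot(gate, psi, axes=([1], [0]))
    return np.moveaxis(psi, 0, q).reshape(-1)

def apply_cnot(state, control, target):
    """A CNOT gate: flips the target when the control is 1; can entangle the pair."""
    new = state.copy()
    cmask, tmask = 1 << (n - 1 - control), 1 << (n - 1 - target)
    for i in range(2 ** n):
        if i & cmask:
            new[i] = state[i ^ tmask]
    return new

def entropy_of_qubit0(state):
    """Entanglement entropy between qubit 0 and the rest; 0 for a product state."""
    s = np.linalg.svd(state.reshape(2, -1), compute_uv=False)
    p = s[s > 1e-12] ** 2
    return float(-(p * np.log2(p)).sum() + 0.0)

state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0  # all qubits start in |0>, an unentangled product state

for layer in range(4):  # a few layers of random gates
    for q in range(n):
        state = apply_1q(state, random_1q(), q)
    for q in range(n - 1):
        state = apply_cnot(state, q, q + 1)

print("entropy of qubit 0:", entropy_of_qubit0(state))  # positive once entangled
```

The entanglement entropy starts at zero and becomes positive once the two-qubit gates have acted, which is the sense in which layers of gates build up entanglement. Note that the statevector's memory cost doubles with each added qubit, which is exactly why this brute-force approach fails at 53 qubits.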


Researchers then measure all the qubits in the array.

Measurement collapses their collective quantum state into a random string of ordinary bits, 0s and 1s. The number of possible strings grows rapidly with the number of qubits: Google's experiment used 53 qubits, giving 2^53 possibilities, nearly 10 quadrillion. Not all strings are equally likely, so building up a good picture of the probability distribution over outputs requires repeating the measurement many times.
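The numbers above, and the need for repeated sampling, can be checked directly. The 2-qubit output probabilities in the second half are invented for the illustration, not taken from any experiment.

```python
import collections
import random

# Each measurement collapses the n-qubit state to one n-bit string.
# With Google's 53 qubits there are 2**53 possible strings:
n = 53
print(f"{2**n:,} possible output strings")  # 9,007,199,254,740,992

# Estimating the output distribution takes many repeated runs.
# Toy version: a made-up 2-qubit distribution, sampled 10,000 times.
random.seed(1)
probs = {"00": 0.5, "01": 0.1, "10": 0.1, "11": 0.3}  # invented numbers
samples = random.choices(list(probs), weights=list(probs.values()), k=10_000)
counts = collections.Counter(samples)
for bits in probs:
    print(bits, "true:", probs[bits], "estimated:", counts[bits] / 10_000)
```

With 10,000 samples the empirical frequencies land close to the true probabilities; with 2^53 possible outcomes, any feasible number of runs can only ever probe a tiny corner of the full distribution.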

The question of quantum advantage then becomes: Can you reproduce that probability distribution with a classical algorithm, which has no entanglement at its disposal?

In 2019, researchers proved that classically simulating a random circuit sampling experiment free of errors is hard. They drew on computational complexity theory, the field that classifies the difficulty of different problems. Researchers in this field don't treat the number of qubits as a fixed number like 53. "Think of it as n, some number that is going to increase," said Aram Harrow, a physicist at the Massachusetts Institute of Technology. "Then you can ask: Are we doing things where the effort is exponential in n, or polynomial in n?" This is the standard way to classify an algorithm's runtime, because once n grows large enough, any algorithm whose runtime is exponential in n falls hopelessly behind one whose runtime is polynomial. This distinction is what theorists mean when they call a problem hard for classical computers but easy for quantum computers: the best classical algorithm takes exponential time, while a quantum computer can solve the problem in polynomial time.
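Harrow's distinction can be made concrete with a toy comparison. The step counts n**3 and 2**n below are stand-ins chosen for illustration, not the runtimes of any actual algorithm discussed in the article.

```python
def poly_steps(n):
    """A hypothetical polynomial-time algorithm: about n**3 steps."""
    return n ** 3

def exp_steps(n):
    """A hypothetical exponential-time algorithm: about 2**n steps."""
    return 2 ** n

# For small n the exponential algorithm can even be cheaper, but past
# some point it falls behind for good: 2**n > n**3 for every n >= 10.
crossover = next(n for n in range(2, 100) if exp_steps(n) > poly_steps(n))
print("crossover at n =", crossover)  # 10

# At 53 qubits the gap between the two is already enormous.
print("ratio at n = 53:", exp_steps(53) // poly_steps(53))
```

The exact crossover point depends on the constants, but the qualitative lesson doesn't: exponential growth always overtakes polynomial growth eventually, which is why theorists classify algorithms this way.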

The 2019 paper, however, did not account for the effects of imperfect gates. This left open the possibility of a quantum advantage in random circuit sampling without error correction.

When complexity theorists imagine increasing the number of qubits, accounting for errors forces them to make another choice: how the number of gate layers, known as the circuit depth, grows alongside it. Suppose you keep the circuit very shallow, say three layers, while increasing the number of qubits. The qubits won't become very entangled, and the output will remain easy to simulate classically. If instead you increase the circuit depth to keep pace with the growing number of qubits, the cumulative effect of gate errors will wash out the entanglement, and the output will again become easy to simulate classically.

In between lies a Goldilocks zone. Before the new paper was published, it remained possible that quantum advantage could survive there. In this intermediate-depth regime, you increase the circuit depth only slowly as the number of qubits grows: even though errors steadily degrade the output, it might still be hard to simulate classically at every step.

The new paper closes this loophole. Its authors developed a classical algorithm for simulating random circuit sampling and proved that its runtime is a polynomial function of the time needed to run the corresponding quantum experiment. The result forges a tight theoretical connection between the classical and quantum approaches to random circuit sampling.

Although the new algorithm works for most intermediate-depth circuits, it breaks down for some shallower ones, leaving a narrow gap where efficient classical simulation methods are unknown. But few researchers expect random circuit sampling to prove hard to simulate classically in that remaining window. "I give it very small odds," said Bill Fefferman, a computer scientist at the University of Chicago and one of the authors of the 2019 theory paper.

The result indicates that random circuit sampling will not yield a quantum advantage by the rigorous standards of computational complexity theory. At the same time, it illustrates that polynomial algorithms, which complexity theorists routinely call efficient, are not necessarily fast in practice. The new classical algorithm slows down as the error rate drops, and at the error rates achieved in quantum supremacy experiments it is far too slow for practical use. With no errors at all it breaks down entirely, which is consistent with what researchers already knew about the hardness of classically simulating error-free random circuit sampling. Sergio Boixo, the physicist who leads Google's quantum supremacy research, said he regards the paper more as a confirmation of random circuit sampling than as a critique of it.
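To see why "polynomial" need not mean "practical," compare two made-up polynomial step counts. The exponents 12 and 3 below are illustrative assumptions, not figures from the paper.

```python
n = 53  # qubit count from Google's experiment

classical_steps = n ** 12  # hypothetical: polynomial in n, but high degree
quantum_steps = n ** 3     # hypothetical: polynomial in n, low degree

# Both counts are polynomial, yet the ratio is n**9 -- about 3.3e15 at n = 53.
ratio = classical_steps // quantum_steps
print("ratio of step counts:", ratio)
```

Complexity theory calls both algorithms efficient, because each is polynomial in n; in practice, a fifteen-orders-of-magnitude gap in step counts is the difference between feasible and hopeless.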

All researchers agree on one thing: the new algorithm underscores how crucial quantum error correction will be to the long-term success of quantum computing. "That's the solution, at the end of the day," Fefferman said.

