The Quest for Randomness

Determining whether numbers truly can have no pattern has implications for quantum mechanics, not to mention the stock market and data security.

Scott Aaronson

Sampling Versus Search

Over the past five years, my student Alex Arkhipov and I have been studying a rudimentary type of quantum computer, called a BosonSampler. Such a device’s entire computation consists of generating a stream of identical photons, sending the photons through fiber-optic cables and a network of devices that split the photon beam, and then measuring the photons to see where they ended up. This being quantum mechanics, where the photons land will in general be random—but if you know how the beamsplitter network was set up, then at least in principle you can calculate the probability of any given outcome (for example, “one photon at location A, three photons at location B, no photons at location C...”).
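The article doesn't spell out the formula, but in the standard BosonSampling setup the probability of a collision-free outcome (one photon per output mode) is the squared absolute value of the *permanent* of a submatrix of the beamsplitter network's transfer matrix. Here is a minimal sketch in Python; the particular 3-mode interferometer (a discrete Fourier transform) is an illustrative choice, not one from the article.

```python
import itertools
import numpy as np

def permanent(M):
    """Naive permanent: a sum over all permutations, like the
    determinant but with no alternating signs.  Fine for the tiny
    matrices in a few-photon example (it costs n! time in general)."""
    n = M.shape[0]
    return sum(
        np.prod([M[i, p[i]] for i in range(n)])
        for p in itertools.permutations(range(n))
    )

# Toy 3-mode interferometer: the 3-point discrete Fourier transform,
# which is unitary.  This specific matrix is just an illustration.
w = np.exp(2j * np.pi / 3)
U = np.array([
    [1, 1,    1],
    [1, w,    w**2],
    [1, w**2, w],
]) / np.sqrt(3)

# With one photon entering each of the 3 input modes, the probability
# of detecting one photon in each output mode is |Perm(U)|^2.
p = abs(permanent(U)) ** 2
print(f"P(one photon per output mode) = {p:.4f}")
```

For larger networks one would take the submatrix of rows and columns picked out by the occupied input and output modes; the hardness Arkhipov and I rely on comes precisely from the permanent's resistance to efficient classical computation.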

The problem is that we don’t know how to do anything useful with a BosonSampler—not even anything nefariously useful, such as breaking cryptographic codes. Indeed, the only thing BosonSamplers seem to be good for is the tautological task of simulating themselves!

So why bother to study them? The reason is that, even if BosonSamplers are useless, Arkhipov and I were able to give strong evidence that simulating them with a classical computer is hard! More precisely, we showed that under widely believed hypotheses in theoretical computer science, no efficient classical algorithm could sample from the same probability distribution that a BosonSampler does. So if we merely wanted to give direct experimental evidence that quantum computing is possible— that is, that the physical world can do something exponentially faster than our existing computers—then BosonSampling might offer a more direct path than building a full-fledged universal quantum computer. For that and other reasons, BosonSampling could be a stepping-stone toward more useful forms of quantum computing.

In the last year, the first BosonSampling experiments were actually done, by groups in Rome, Oxford, Vienna, and Brisbane. The current experiments involve only three photons, so they can easily be simulated using a classical computer. To exceed the capabilities of classical computers, one would probably need to use 20 to 30 photons, but getting that many photons to arrive at the detectors simultaneously presents major experimental challenges.

What does any of this have to do with recognizing randomness? Well, BosonSampling is an unusual problem, one where the goal is not to calculate some specific answer, but to sample according to a certain probability distribution. So a question of principle arises: Given a purported BosonSampler, how could a skeptic ever know whether the machine worked?

Sure, we could always run the machine over and over, collect a massive amount of statistics, and then check whether the statistics matched the theoretical predictions. The problem is that we’d have to run the machine a number of times that grew exponentially with the number of photons. Even with just 8 or 10 photons, we would already be pushing the limits of the experimenters’ patience. So we might wonder: Is there any way to check whether a BosonSampler works using only a small number of measurements (say, 100)?
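To see why exhaustive statistics-gathering is hopeless, it helps to count the possible outcomes: placing n identical photons among m output modes gives C(m+n-1, n) distinct configurations. The quadratic mode count m ≈ n² used below is a common assumption in discussions of BosonSampling, not a figure from the article.

```python
from math import comb

def num_outcomes(n_photons, n_modes):
    """Number of distinct photon-count configurations: ways to place
    n identical photons in m modes, i.e. C(m + n - 1, n)."""
    return comb(n_modes + n_photons - 1, n_photons)

# Assume roughly m ~ n^2 modes (a standard choice, not from the text).
for n in [3, 8, 10, 20, 30]:
    m = n * n
    print(f"{n:2d} photons, {m:4d} modes: {num_outcomes(n, m):.3e} outcomes")
```

Already at 10 photons the outcome space has well over 10^13 configurations, so pinning down the full distribution empirically would require astronomically many runs.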

For this purpose, it would be useful to have a search problem, in which the computer’s goal is to output any string of bits with specified properties. The answers to search problems are easier to check than sampling problems. Indeed, a BosonSampler already solves the search problem of finding photon configurations that appear in the distribution with nonzero probability. However, these two problems still aren’t equivalent to each other, because a search problem only requires finding some valid solution, whereas a sampling problem must output configurations with roughly the right probabilities—typical configurations, not merely possible ones. But how do we formalize the concept of a photon configuration being likely or typical?

It probably won’t surprise you that this is where Kolmogorov complexity comes to the rescue. Building on work from the 1970s, in 2011 I showed that, given any sampling problem, one can define a search problem that’s computationally equivalent to it. This means that if one problem is efficiently solvable by classical computers then both of them are, if one is efficiently solvable by quantum computers then both are, and so on, for every other reasonable model of computation. The trick is the following: Given a probability distribution, we consider a search problem that amounts to “Find a string that occurs with large probability in the distribution and also has large Kolmogorov complexity.”

Using tools developed in the 1970s, one can show that, if a computer program manages to find a string that satisfies both of the above properties, then the program can only have done so by sampling from the distribution. In this way, Kolmogorov complexity lets us switch the goal from sampling a distribution to outputting a fixed string with specified properties.
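Kolmogorov complexity itself is uncomputable, so no program can evaluate it exactly; the length of a compressed encoding, however, gives a computable upper bound and conveys the intuition. The sketch below (not the actual proof tool) uses zlib compression as that stand-in: a string drawn at random is essentially incompressible, while a patterned string of the same length is not.

```python
import random
import zlib

def compressed_length(s: bytes) -> int:
    """Computable stand-in for Kolmogorov complexity: the compressed
    length upper-bounds the true (uncomputable) K(s)."""
    return len(zlib.compress(s, 9))

random.seed(0)
random_bytes = bytes(random.randrange(256) for _ in range(1000))
patterned = b"ab" * 500  # same length, but with an obvious pattern

print(compressed_length(random_bytes))  # close to 1000 bytes
print(compressed_length(patterned))     # far smaller
```

In the same spirit, a verifier can demand a string that both occurs with large probability in the target distribution and resists compression: producing such a string is strong evidence that the machine really sampled rather than outputting some precomputed, patterned answer.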
