
TECHNOLOGUE

The Quest for Randomness

Determining whether numbers truly can have no pattern has implications for quantum mechanics, not to mention the stock market and data security.

Scott Aaronson

Saved by Complexity

The situation is not quite as hopeless as it sounds. In the 1960s, a beautiful field called algorithmic information theory was developed. This field almost (but not quite) lets us cut through the arguments, by giving a test for individual sequences such that, if a sequence passes, we have strong grounds for treating it as randomly generated.

The starting point is a paradox that goes back to the beginnings of probability. Consider a coin that landed heads every time you flipped it. Would you eventually conclude the coin was crooked? Of course you would—just as Dilbert suspected the random number–generating lizard that always output nine! On the other hand, you’d probably find nothing wrong if you saw the following sequence of 30 heads and tails: T H T H H H T H H H H T T H H T H H T H T H T H T H T T H T.

The paradox is that a fair coin is exactly as likely to produce this sequence as it is to produce 30 heads in a row. Both sequences have the same probability, namely 2⁻³⁰, or about one in a billion (each flip has two equally likely outcomes, so the probability of any particular 30-flip sequence is 1/2 multiplied by itself 30 times).
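For concreteness, here is a small Python check (my own illustration, not part of the column) confirming that the two probabilities really are equal:

# Each fair flip contributes a factor of 1/2, whatever its outcome,
# so every specific 30-flip sequence has probability (1/2)**30.
p_thirty_heads = (1 / 2) ** 30
p_random_looking = (1 / 2) ** 30

print(p_thirty_heads == p_random_looking)   # True
print(p_thirty_heads)                        # roughly 9.3e-10, that is, 2 to the power -30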

French mathematician Pierre-Simon Laplace, considering this puzzle in the early 1800s, realized that a run of 30 heads is special or meaningful in a way that a typical jumbled sequence is not. If we instead asked for the probability that we'd get some sequence just as random-looking and patternless as the one above, that probability would be almost 1.

The trouble is, how do we formalize the concept of a sequence of numbers being random-looking and patternless? This is where three mathematicians—Andrei Kolmogorov, Ray Solomonoff, and Gregory Chaitin—come in. In the 1960s, they independently proposed a measure of the inherent randomness of a sequence, which today we call Kolmogorov complexity (no, the name isn’t historically fair, but “Kolmogorov-Solomonoff-Chaitin complexity” is an unwieldy mouthful). Informally, the Kolmogorov complexity of a sequence x, abbreviated K(x), is defined to be the number of bits in the shortest computer program whose output is x. To illustrate, a sequence such as 00000000000000000000000000000000000000000000000000000000000000000000000000000000 can be described by a computer program much shorter than itself, namely:

print 80 zeros

Thus the Kolmogorov complexity of the sequence is much smaller than the number of bits in the sequence, denoted |x|. By contrast, a random sequence of 1s and 0s, such as 10110011011010011101010000100110111110111001011011100011010011000110010011111011, will almost always have a Kolmogorov complexity close to the length of the sequence itself. This complexity arises because, if you try to write a program to output the sequence, you’ll find that the program has to contain the whole sequence (or something equally long) as part of its code:

print “10110011011010011101010000100110111110111001011011100011010011000110010011111011”
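To make the length comparison concrete, here is a small Python sketch (mine, not the column's; the variable names are invented) that measures the two "programs" against the 80-bit sequence they print:

# The program for the all-zeros sequence is far shorter than its output.
zeros_program = 'print("0" * 80)'
print(len(zeros_program))        # 15 characters, versus 80 bits of output

# The program for the patternless sequence must carry the sequence inside it.
random_sequence = ("10110011011010011101010000100110"
                   "11111011100101101110001101001100"
                   "0110010011111011")
literal_program = 'print("' + random_sequence + '")'
print(len(random_sequence))      # 80
print(len(literal_program))      # 89: slightly longer than the sequence itself
# (We are counting characters rather than bits here, but the point is the same.)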

The notion of Kolmogorov complexity lets us neatly resolve the paradox from before. When should you suspect a coin of being crooked? When a sequence x of flips of the coin satisfies K(x) ≪ |x|. In other words, you should suspect a coin if and only if there exists a pattern to the coin flips that can be described by a computer program using substantially fewer bits than the sequence of coin flips itself. And crucially, at least in cases with many coin flips, it doesn't matter which programming language you use, because the choice of language will affect the program's length only by an additive constant that doesn't depend on x.
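Kolmogorov complexity itself resists direct calculation, as we'll see in a moment, but any lossless compressor gives a computable stand-in for the same test: if a compressor can describe a sequence in far fewer bits than the sequence occupies, then certainly K(x) ≪ |x|. Here is a rough sketch in Python (my illustration, not the column's; zlib gives only an upper bound on the true complexity, and the figures in the comments are approximate):

import random
import zlib

random.seed(0)
all_heads = "H" * 10_000                                      # 10,000 heads in a row
fair_looking = "".join(random.choice("HT") for _ in range(10_000))

for name, flips in [("all heads", all_heads), ("fair-looking", fair_looking)]:
    compressed = zlib.compress(flips.encode(), 9)
    print(name, len(flips), "flips compress to", len(compressed), "bytes")

# The repetitive sequence shrinks to a few dozen bytes; the patternless one
# needs on the order of a thousand bytes, close to its true information content.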

The one serious problem with this neat resolution is that Kolmogorov complexity is impossible to compute. To be clear, you can certainly write a program that searches for short programs that output x. But how do you know when to stop searching? What if there's an even shorter program you haven't found? It turns out that we can never be sure we've found the shortest one.

The proof of that fact is inspired by a famous riddle called the Berry paradox, named after a librarian who apparently told it to the mathematician Bertrand Russell. Consider asking for "the first number that can't be described using 12 words or fewer." Such a number must exist, because only finitely many numbers can be described in 12 words or fewer. But we just described that number, and we did so using only 12 words!

In our case, suppose, for the sake of contradiction, that there were an algorithm that computed the Kolmogorov complexity K(x) of any string x, and suppose that algorithm required n bits to write down.

If we took all possible strings of bits arranged in lexicographic order (0, 1, 00, 01, 10, 11, 000, 001, etc.), I claim we could write a program to find the first string y such that K(y) ≥ 2n. The program will simply run the algorithm on each string in succession, and stop when it finds the first string y that satisfies the requirements. Furthermore, the program will require slightly more than n bits to write down: n bits for the algorithm, plus a few additional bits to control the “repeatedly calling the algorithm until we find y” part. However, this means we just described a program that computes y, and that (for large enough n) is less than 2n bits long! It follows that K(y) < 2n, which contradicts the fact that K(y) ≥ 2n. We conclude that the program to compute K(x) could never have existed in the first place.
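To see the shape of the argument in code, here is a hypothetical Python sketch of the program the proof describes. The function K below is assumed to exist only for the sake of the contradiction; no such function can actually be written, which is exactly the conclusion of the proof.

from itertools import count, product

def K(x):
    """Hypothetical n-bit algorithm that returns the exact Kolmogorov
    complexity of the string x. Assumed to exist only for the argument."""
    raise NotImplementedError("No such algorithm can exist.")

def bit_strings():
    """Yield all bit strings in lexicographic order: 0, 1, 00, 01, 10, 11, ..."""
    for length in count(1):
        for bits in product("01", repeat=length):
            yield "".join(bits)

def first_complex_string(n):
    """Return the first string y with K(y) >= 2*n.

    The whole program is only about n bits (for K) plus a small constant,
    so for large n it is itself shorter than 2*n bits, yet it outputs y.
    That gives K(y) < 2*n, contradicting K(y) >= 2*n."""
    for y in bit_strings():
        if K(y) >= 2 * n:
            return y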



