COMPUTING SCIENCE

Randomness as a Resource

Brian Hayes

The Randomness Industry

To appreciate the value of randomness, just imagine a world without it. What would replace the referee’s coin flip at the start of a football game? How would a political poll-taker select an unbiased sample of the electorate? Then of course there’s the Las Vegas problem. Slot machines devour even more randomness than they do silver dollars. Inside each machine an electronic device spews out random numbers 24 hours a day, whether or not anyone is playing.

There’s also a Monte Carlo problem. I speak not of the Mediterranean principality but of the simulation technique named for that place. The Monte Carlo method got its start in the 1940s at Los Alamos, where physicists were struggling to predict the fate of neutrons moving through uranium and other materials. The Monte Carlo approach to this problem is to trace thousands of simulated neutron paths. Whenever a neutron strikes a nucleus, a random number determines the outcome of the event—reflection, absorption or fission. Today the Monte Carlo method is a major industry not only in physics but also in economics and some areas of the life sciences, not to mention hundreds of rotisserie baseball leagues.
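As a toy illustration of the technique (the event probabilities below are invented, and a real neutron-transport calculation is far more elaborate), one can follow each simulated neutron step by step and let a single random number decide the outcome of every collision:

    import random

    # Toy Monte Carlo: follow simulated neutron histories through a
    # one-dimensional slab. The probabilities are made up for illustration.
    P_REFLECT, P_ABSORB = 0.3, 0.5      # remaining 0.2 is taken to be fission

    def follow_neutron(slab_length=10):
        """Return the fate of one simulated neutron."""
        position, direction = 0, +1
        while True:
            position += direction
            if position < 0 or position >= slab_length:
                return "escaped"
            r = random.random()         # one random number decides each collision
            if r < P_REFLECT:
                direction = -direction  # reflection: the neutron reverses course
            elif r < P_REFLECT + P_ABSORB:
                return "absorbed"
            else:
                return "fission"

    trials = 100_000
    counts = {}
    for _ in range(trials):
        fate = follow_neutron()
        counts[fate] = counts.get(fate, 0) + 1
    print({fate: n / trials for fate, n in counts.items()})

Tracing many such histories and tallying the outcomes is the essence of the method: the averages converge on answers that would be hard to compute directly.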

Many computer networks would be deadlocked without access to randomness. When two nodes on a network try to speak at once, politeness is not enough to break the impasse. Each computer might be programmed to wait a certain interval and then try again, but if all computers followed the same rule, they’d keep knocking heads repeatedly until the lights went out. The Ethernet protocol solves this problem by deliberately avoiding any fixed rule: each machine picks a random number between 1 and n and waits that many time units before retransmitting, so the probability that the two machines collide again is only 1/n.
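Here is a minimal sketch of that backoff idea; it is a simplification of the real protocol, which doubles the range of delays after each successive collision:

    import random

    def delays_after_collision(n=16):
        """Two stations each pick a random delay of 1..n slots before retrying."""
        return random.randint(1, n), random.randint(1, n)

    # If the delays differ, the earlier station transmits alone and the impasse
    # is broken; the stations clash again only when both draw the same number,
    # which happens with probability 1/n.
    n, trials = 16, 100_000
    repeats = sum(1 for _ in range(trials)
                  if len(set(delays_after_collision(n))) == 1)
    print(repeats / trials, "vs expected", 1 / n)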

Figure 1. Eight specimens of randomness . . .

Computer science has a whole technology of "randomized algorithms." On first acquaintance the very idea of a randomized algorithm may seem slightly peculiar: An algorithm is supposed to be a deterministic procedure—one that allows no scope for arbitrary choice or caprice—so how can it be randomized? The contradiction is resolved by making the randomness a resource external to the algorithm itself. Where an ordinary algorithm is a black box receiving a stream of bits as input and producing another stream of bits as output, a randomized algorithm has a second input stream made up of random bits.

Sometimes the advantage of a randomized algorithm is clearest when you take an adversarial view of the world. Randomness is what you need to foil an adversary who wants to guess your intentions or predict your behavior. Suppose you are writing a program to search a list of items for some specified target. Given any predetermined search strategy—left to right, right to left, middle outward—an adversary can arrange the list so that the target item is always in the last place you look. But a randomized version of the procedure can’t be outguessed so easily; the adversary can’t know where to hide the target because the program doesn’t decide where to search until it begins reading random bits. In spite of the adversary’s best efforts, you can expect to find the target after sifting through half the list.
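One way to put this into practice, sketched below, is simply to probe the list positions in a random order; the shuffle plays the role of the algorithm’s second input stream of random bits, and the adversary cannot arrange for the target to sit in the last place probed.

    import random

    def randomized_search(items, target):
        """Probe positions in a random order; return the number of probes used."""
        order = list(range(len(items)))
        random.shuffle(order)           # consumes the stream of random bits
        for probes, i in enumerate(order, start=1):
            if items[i] == target:
                return probes
        return None                     # target absent

    # However the adversary places the target, the expected number of probes
    # is about (len(items) + 1) / 2.
    lst = list(range(1000))
    print(sum(randomized_search(lst, 999) for _ in range(10_000)) / 10_000)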

Still another field that can’t do without randomness is cryptography, where calculated disorder is the secret to secrecy. The strongest of all cipher systems require a random key as long as the message that’s being sent. The late Claude E. Shannon proved that such a cipher is absolutely secure. That is, if the key is truly random, and if it is used only once, an eavesdropper who intercepts an encrypted message can learn nothing about the original text, no matter how much time and effort and computational horsepower are brought to bear on the task. Shannon also showed that no cipher with a key shorter than the message can offer the same degree of security. But a long key is a considerable inconvenience—hard to generate, hard to distribute.
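The cipher Shannon analyzed is the one-time pad: combine each byte of the message with a corresponding byte of random key, and never reuse the key. A minimal sketch:

    import secrets

    def one_time_pad(message: bytes, key: bytes) -> bytes:
        """XOR the message with a key of equal length; the same call decrypts."""
        assert len(key) == len(message), "the pad must be as long as the message"
        return bytes(m ^ k for m, k in zip(message, key))

    plaintext = b"ATTACK AT DAWN"
    key = secrets.token_bytes(len(plaintext))   # fresh random key, used only once
    ciphertext = one_time_pad(plaintext, key)
    assert one_time_pad(ciphertext, key) == plaintext

Without the key, every possible plaintext of the same length is equally consistent with the ciphertext, which is why the eavesdropper learns nothing.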

Much of the emphasis in recent cryptological research has been on ways to get by with less randomness, but a recent proposal takes a step in the other direction. The idea is to drown an adversary in a deluge of random bits. The first version of the scheme was put forward in 1992 by Ueli M. Maurer of the Swiss Federal Institute of Technology; more recent refinements (not yet published) have come from Michael O. Rabin of Harvard University and his student Yan Zong Ding.

The heart of the plan is to set up a public beacon—perhaps a satellite—continually broadcasting random bits at a rate so high that no one could store more than a small fraction of them. Parties who want to communicate in privacy share a relatively short key that they both use to select a sequence of random bits from the public broadcast; the selected bits serve as an enciphering key for their messages. An eavesdropper cannot decrypt an intercepted message without a record of the random broadcasts, and cannot keep such a record because it would be too voluminous.
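In schematic form the selection might work as follows. This is only an illustrative sketch, not Rabin and Ding’s actual protocol; the key value, stream size and selection rule here are invented. Both parties use the short shared key to pick out the same positions in the broadcast and so extract identical pads without ever transmitting the pad itself.

    import random

    def select_pad(shared_key: int, broadcast: bytes, pad_length: int) -> bytes:
        """Pick pad bytes from the public random broadcast using a short key.

        Schematic only: the real proposal draws bits from a live stream far
        too large for any eavesdropper to record.
        """
        rng = random.Random(shared_key)     # both parties derive the same positions
        positions = rng.sample(range(len(broadcast)), pad_length)
        return bytes(broadcast[i] for i in positions)

    broadcast = bytes(random.randrange(256) for _ in range(1_000_000))
    alice_pad = select_pad(20010704, broadcast, 14)
    bob_pad   = select_pad(20010704, broadcast, 14)
    assert alice_pad == bob_pad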

How much randomness would the beacon have to broadcast? Rabin and Ding mention a rate of 50 gigabits per second, which would fill up some 800,000 CD-ROMs per day.
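A rough check of that figure, assuming about 650 megabytes per CD-ROM:

    bits_per_second = 50e9
    seconds_per_day = 86_400
    bytes_per_day = bits_per_second * seconds_per_day / 8
    cd_rom_bytes = 650e6                    # assumed CD-ROM capacity
    print(bytes_per_day / cd_rom_bytes)     # roughly 830,000 CD-ROMs per day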




