COMPUTING SCIENCE

Programs and Probability

Computer programs must cope with chance and uncertainty, just as people do. One solution is to build probabilistic reasoning into the programming language.

Brian Hayes

Randomness and probability are deeply rooted in modern habits of thought. We meet probabilities in the daily weather forecast and measures of uncertainty in opinion polls; statistical inference is central to all the sciences. Then there’s the ineluctable randomness of quantum physics. We live in the Age of Stochasticity, says David Mumford, a mathematician at Brown University.

Ours is also an age dominated by deterministic machines—namely, digital computers—whose logic and arithmetic leave nothing to chance. In digital circuitry strict causality is the rule: Given the same initial state and the same inputs, the machine will always produce the same outputs. As Einstein might have said, computers don’t play dice.

But in fact they do! Probabilistic algorithms, which make random choices at various points in their execution, have long been essential tools in simulation, optimization, cryptography, number theory, and statistics. How is randomness smuggled into a deterministic device? Although computers cannot create randomness de novo, they can take a smidgen of disorder from an external source and amplify it to produce copious streams of pseudorandom numbers. As the name suggests, these numbers are not truly random, but they work well enough to fool most probabilistic algorithms. (In other words, computers not only play dice, they also cheat.)
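As a concrete, deliberately simple illustration of that amplification step (a sketch of my own, not an example from this column), the snippet below seeds a linear congruential generator with a few bytes drawn from the operating system's entropy pool and then spins out a much longer, fully deterministic stream. The multiplier and increment are the widely used Numerical Recipes constants.

```python
import os

# Sketch: seed a simple linear congruential generator (LCG) from a few
# bytes of external entropy, then expand that seed into an arbitrarily
# long pseudorandom stream. Constants are the classic Numerical Recipes
# LCG parameters.

MODULUS = 2**32
MULTIPLIER = 1664525
INCREMENT = 1013904223

def seeded_stream(n):
    """Yield n pseudorandom floats in [0, 1) grown from one external seed."""
    state = int.from_bytes(os.urandom(4), "big")  # the "smidgen of disorder"
    for _ in range(n):
        state = (MULTIPLIER * state + INCREMENT) % MODULUS  # deterministic update
        yield state / MODULUS

print(list(seeded_stream(5)))
```

Given the same seed, the loop always produces the same sequence; the only genuine randomness is the handful of bytes taken from outside the computation.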

A recent innovation weaves randomness even more deeply into the fabric of computer programming. The idea is to create a probabilistic programming language (often abbreviated PPL and sometimes pronounced “people”). In a language of this kind, random variables and probability distributions are first-class citizens, with the same rights and privileges as other data types. Furthermore, statistical inference—the essential step in teasing meaning out of data—is a basic, built-in operation.
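Syntax varies from one probabilistic language to the next, so the sketch below is written in ordinary Python rather than in any actual PPL; it is only meant to suggest the style. A random variable (a coin's unknown bias) is generated inside a model, data are conditioned on, and a posterior is recovered by the most naive inference method, rejection sampling. In a true probabilistic language the conditioning and inference steps would be built-in primitives rather than a hand-written loop.

```python
import random

# Hypothetical sketch of the PPL idea in plain Python: a generative model
# with an unknown coin bias, conditioned on observed flips, with inference
# done by crude rejection sampling.

def model():
    bias = random.random()                       # prior: bias ~ Uniform(0, 1)
    flips = [random.random() < bias for _ in range(5)]
    return bias, flips

observed = [True, True, True, True, False]       # the data we condition on

accepted = []
while len(accepted) < 2000:
    bias, flips = model()
    if flips == observed:                        # keep only runs matching the data
        accepted.append(bias)

print("posterior mean of coin bias:", sum(accepted) / len(accepted))
```

The point is not the (hopelessly inefficient) sampler but the division of labor: the programmer writes down the generative story and the observations, and the language supplies the inference machinery.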

Most of the probabilistic languages are still experimental, and it’s unclear whether they will be widely adopted and prove effective in handling large-scale problems. But they have already provided an intriguing new medium for expressing probabilistic ideas and algorithms.
