Random numbers have a peculiar power, even when they are only pseudo- or quasirandom
The Monte Carlo method is not a new idea, and neither is the quasi–Monte Carlo variation. Simulations based on random sampling were attempted more than a century ago by Francis Galton, Lord Kelvin and others. In those days they worked with true random numbers (and had a fine time generating them!).
The Monte Carlo method per se was invented and given its name at the Los Alamos Laboratory in the years after World War II. It’s no surprise that the idea emerged there: They had big problems (even more frightening than collateralized mortgage obligations); they had access to early digital computers (ENIAC, MANIAC); and they had a community of creative problem-solvers (Stanislaw Ulam, John von Neumann, Nicholas Metropolis, Marshall Rosenbluth). From the outset, the Los Alamos group relied on pseudorandom numbers. At the first conference on Monte Carlo methods, in 1949, von Neumann delivered his famous quip: “Anyone who considers arithmetic methods of producing random digits is, of course, in a state of sin.” Then he proceeded to sin.
Quasi–Monte Carlo was not far behind. The first publication on the subject was a 1951 report by Robert D. Richtmyer, who was head of the theoretical division at Los Alamos. The paper is an impressive debut. It sets forth the motivation for quasirandom sampling, introduces much of the terminology, and explains the mathematics. But it was also presented as an account of a failed experiment: Richtmyer had hoped to demonstrate faster convergence for quasi–Monte Carlo computations, but his results were negative. I am a fervent believer in reporting negative results, but I have to concede that the paper's discouraging outcome may have inhibited further investigation.
In 1968 S. K. Zaremba, then at the University of Wisconsin in Madison, wrote a strident defense of quasirandom sampling (and a diatribe against pseudorandom numbers). As far as I can tell, he won few converts.
Work on the underlying mathematics of low-discrepancy sequences has gone on steadily through the decades (most notably, perhaps, by Klaus Friedrich Roth, I. M. Sobol, Harald Niederreiter and Ian H. Sloan). Now there is renewed interest in applications, and not just among the Wall Street quants. It’s catching on in physics and other sciences as well. Ray-tracing in computer graphics is another promising area.
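Remarkably little machinery is needed to generate a low-discrepancy sequence. As an illustration (a minimal sketch, not the construction of any particular author mentioned above), here is the one-dimensional van der Corput sequence, the building block of the multidimensional Halton sequence: the nth point is obtained by writing n in base b and reflecting its digits about the radix point.

```python
def van_der_corput(n, base=2):
    """Return the n-th van der Corput number in the given base.

    Writing n in base b as d_k ... d_1 d_0, the result is the
    fraction 0.d_0 d_1 ... d_k, i.e. the digits mirrored across
    the radix point. Successive points always land in the largest
    remaining gap of [0, 1), which keeps the discrepancy low.
    """
    result, denom = 0.0, 1.0
    while n > 0:
        n, digit = divmod(n, base)
        denom *= base
        result += digit / denom
    return result

# The first few base-2 terms, which subdivide the unit interval evenly:
points = [van_der_corput(n) for n in range(1, 9)]
# [0.5, 0.25, 0.75, 0.125, 0.625, 0.375, 0.875, 0.0625]
```

Unlike a pseudorandom stream, the sequence never clusters or leaves large voids, which is precisely the property that speeds convergence of quasi–Monte Carlo integration.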
The shifting fortunes of pseudo- and quasirandomness might be viewed in the context of larger trends. In the 19th century, randomness of any kind was an unwelcome intruder, reluctantly tolerated only where necessary (thermodynamics, Darwinian evolution). The 20th century, in contrast, fell madly in love with all things random. Monte Carlo models were a part of this movement; the quantum theory was a bigger part, with its insistence on divine dice games. Paul Erdős introduced random choice into the methodology of mathematical proof, which is perhaps the unlikeliest place to look for it. In computing, randomized algorithms became a major topic of research. The concept leaked into the arts, too, with aleatoric music and the spatter paintings of Jackson Pollock. Then there was chaos theory. A 1967 essay by Alfred Bork called randomness “a cardinal feature of our century.”
By now, though, the pendulum may be swinging the other way, most of all in computer science. Randomized algorithms are still of great practical importance, but the intellectual excitement is on the side of derandomization, showing that the same task can be done by a deterministic program. An open question is whether every algorithm can be derandomized. Deep thinkers believe the answer will turn out to be yes. If they’re right, randomness confers no essential advantage in computing, although it may still be a convenience.
Quasirandomness seems to be pushing in the same direction, with its preference for taking our drams of randomness in the smallest possible doses. What's ahead may be a kind of homeopathic doctrine, where the greater the dilution of the active agent, the greater its potency. That's nonsense in medicine, but perhaps it works in mathematics and computation.
- Bork, Alfred M. 1967. Randomness and the twentieth century. The Antioch Review 27(1):40–61.
- Braverman, Mark. 2011. Poly-logarithmic independence fools bounded-depth boolean circuits. Communications of the ACM 54(4):108–115.
- Caflisch, R. E. 1998. Monte Carlo and quasi-Monte Carlo methods. Acta Numerica 7:1–49.
- Chazelle, Bernard. 2000. The Discrepancy Method: Randomness and Complexity. Cambridge: Cambridge University Press.
- Dyer, Martin, and Alan Frieze. 1991. Computing the volume of convex bodies: A case where randomness provably helps. In Probabilistic Combinatorics and Its Applications, edited by Béla Bollobás, pp. 123–169. Providence: American Mathematical Society.
- Galanti, Silvio, and Alan Jung. 1997. Low-discrepancy sequences: Monte Carlo simulation of option prices. The Journal of Derivatives 5(1):63–83.
- Householder, A. S., G. E. Forsythe and H. H. Germond (eds.). 1951. Monte Carlo Method: Proceedings of a Symposium. National Bureau of Standards Applied Mathematics Series, Vol. 12. Washington: U.S. Government Printing Office.
- Kac, Mark. 1983. What is random? American Scientist 71:405–406.
- Karp, Richard M. 1986. Combinatorics, complexity, and randomness. Communications of the ACM 29(2):98–109.
- Karp, Richard M. 1991. An introduction to randomized algorithms. Discrete Applied Mathematics 34:165–201.
- Kuipers, L., and H. Niederreiter. 1974. Uniform Distribution of Sequences. New York: Dover Publications.
- Kuo, Frances Y., and Ian H. Sloan. 2005. Lifting the curse of dimensionality. Notices of the American Mathematical Society 52:1320–1328.
- Matoušek, Jiří. 1999, 2010. Geometric Discrepancy: An Illustrated Guide. Heidelberg: Springer.
- Metropolis, Nicholas, and S. Ulam. 1949. The Monte Carlo method. Journal of the American Statistical Association 44(247):335–341.
- Metropolis, N. 1987. The beginning of the Monte Carlo method. Los Alamos Science 15:125–130.
- Motwani, Rajeev, and Prabhakar Raghavan. 1995. Randomized Algorithms. Cambridge: Cambridge University Press.
- Niederreiter, Harald G. 1978. Quasi-Monte Carlo methods and pseudo-random numbers. Bulletin of the American Mathematical Society 84(6):957–1041.
- Niederreiter, Harald. 1992. Random Number Generation and Quasi–Monte Carlo Methods. Philadelphia: SIAM.
- Owen, Art B. 2002 preprint. Necessity of low effective dimension. Stanford University. http://www-stat.stanford.edu/~owen/reports/necessity.pdf
- Paskov, Spassimir H., and Joseph E. Traub. 1995. Faster valuation of financial derivatives. Journal of Portfolio Management 22(1):113–121.
- Richtmyer, R. D. 1951. The evaluation of definite integrals, and a quasi-Monte Carlo method based on the properties of algebraic numbers. Report LA-1342, Los Alamos Scientific Laboratory. http://www.osti.gov/bridge/product.biblio.jsp?osti_id=4405295
- Roth, K. F. 1954. On irregularities of distribution. Mathematika: Journal of Pure and Applied Mathematics 1:73–79.
- Sloan, I. H., and H. Woźniakowski. 1997. An intractability result for multiple integration. Mathematics of Computation 66:1119–1124.
- Sloan, Ian H., and Henryk Woźniakowski. 1998. When are quasi–Monte Carlo algorithms efficient for high dimensional integrals? Journal of Complexity 14:1–33.
- Stigler, Stephen M. 1991. Stochastic simulation in the nineteenth century. Statistical Science 6:89–97.
- von Neumann, John. 1951. Various techniques used in connection with random digits. (Summary written by George E. Forsythe.) In Monte Carlo Method: Proceedings of a Symposium. National Bureau of Standards Applied Mathematics Series, vol. 12, pages 36–38. Washington: U.S. Government Printing Office.
- Zaremba, S. K. 1968. The mathematical basis of Monte Carlo and quasi–Monte Carlo methods. SIAM Review 10:303–314.