
PERSPECTIVE

The Nature of Scientific Proof in the Age of Simulations

Is numerical mimicry a third way of establishing truth?

Kevin Heng

Reproducibility and Falsifiability

With increasingly complex simulations, there are also questions surrounding the practice of science. It is not unheard of to encounter published papers in astrophysics where insufficient information is provided for the reproduction of simulated results. Frequently, the computer codes used to perform these simulations are proprietary and complex enough that it would take years and the dedicated efforts of a research team to completely re-create one of them. Scientific truth is monopolized by a few and dictated to the rest. Is it still science if the results are not readily reproducible? (Admittedly, “readily” has a subjective meaning.)

There are also groups and individuals who take the more modern approach of making their codes open source. This has the tremendous advantage that the task of scrutinizing, testing, validating, and debugging the code no longer rests on the shoulders of an individual but on those of the entire community. Some believe this amounts to giving away trade secrets, but there are notable examples of researchers whose careers have blossomed partly because of influential computer codes they made freely available.

A pioneer in this regard is Sverre Aarseth, a Cambridge astrophysicist who wrote and gave away codes that compute the evolution of astronomical objects (planets, stars, and so on) under the influence of gravity. Jim Stone of Princeton and Romain Teyssier of Zurich are known for authoring a series of codes that solve the equations of magnetized fluids, which have been used to study a wide variety of problems in astrophysics. Volker Springel of Heidelberg made his mark via the Millennium Simulation Project. In all of these cases, the publicly available computer codes became influential because other researchers incorporated them into their repertoire.

A related issue is falsifiability. If a physical system is perfectly understood, its description leaves no freedom in specifying model inputs; astrophysicists call such adjustable inputs free parameters. Quantifying how the sodium atom absorbs light provides a fine example: it is a triumph of quantum physics that such a calculation requires no free parameters. In large-scale simulations, there are always physical aspects that are poorly or incompletely understood and must be mimicked by approximate models that introduce free parameters. Often, these pseudomodels are not based on fundamental laws of physics but consist of ad hoc functions calibrated on experimental data or smaller-scale simulations, which may not be valid in all physical regimes.
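
To make the distinction concrete, here is a minimal, purely illustrative Python sketch (my own, not from the article): an ad hoc power-law prescription with two free parameters is calibrated against reference data, in contrast to a parameter-free calculation derived from first principles. The function, the data values, and the numbers are all hypothetical.

    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical ad hoc prescription: some sub-grid quantity modeled as a
    # power law of a resolved variable x, with two free parameters a and b.
    # The functional form is assumed, not derived from fundamental physics.
    def prescription(x, a, b):
        return a * x**b

    # Synthetic "calibration data" standing in for a laboratory experiment or
    # a smaller-scale simulation (values invented purely for illustration).
    x_ref = np.array([0.1, 0.3, 1.0, 3.0, 10.0])
    y_ref = np.array([0.02, 0.05, 0.11, 0.24, 0.52])

    # Calibrate the free parameters against the reference data.
    (a_fit, b_fit), _ = curve_fit(prescription, x_ref, y_ref, p0=(0.1, 0.5))
    print(f"calibrated free parameters: a = {a_fit:.3f}, b = {b_fit:.3f}")

    # Applying the calibrated prescription far outside the range it was fit
    # on is exactly the kind of risky extrapolation discussed below.
    print("extrapolated value at x = 100:", prescription(100.0, a_fit, b_fit))

The calibrated values of a and b are set entirely by the reference data, not by first principles; they are what would have to be re-examined before such a prescription is carried over to a different physical regime.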

An example is the planetary boundary layer on Earth, which arises from the friction between the atmospheric flow and the terrestrial surface and is an integral part of the climatic energy budget. The exact thickness of the planetary boundary layer depends on the nature of the surface; whether it is an urban area, grasslands, or ocean matters. Such complexity cannot feasibly be computed directly in a large-scale climate simulation. Hence, one needs experimentally measured prescriptions for the thickness of this layer as inputs for the simulation. To unabashedly apply these prescriptions to other planets (or exoplanets) is to stand on thin ice. Worryingly, there is an emerging subcommunity of researchers switching over to exoplanet science from the Earth sciences who are bringing with them such Earth-centric approaches.

To simulate the formation of galaxies, one needs prescriptions for star formation and for how supernovas feed energy back into their environments. To simulate the climate, one needs prescriptions for turbulence and precipitation. Such prescriptions often employ a slew of free parameters that are either inadequately informed by data or involve poorly known physics.

As the number of free parameters in a simulation increases, so does the diversity of simulated results. In the most extreme limit, the simulation predicts everything: it is consistent with every anticipated outcome. A quote attributed to John von Neumann describes it best: “With four parameters, I can fit an elephant and with five I can make him wiggle his trunk.” Such inattention to falsifiability recalls Wolfgang Pauli’s rebuke: “It is not only incorrect, it is not even wrong.” A simulation that cannot be falsified can hardly be considered science.
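
The elephant quip can be made quantitative with a small numerical sketch (again my own illustration, not the author's): with enough free parameters, a polynomial threads exactly through any handful of points, which is precisely why such a fit constrains nothing.

    import numpy as np

    rng = np.random.default_rng(0)

    # Five noisy points standing in for any set of observed or simulated
    # outcomes (values are arbitrary and used only for illustration).
    x = np.linspace(0.0, 1.0, 5)
    y = np.sin(2.0 * np.pi * x) + 0.1 * rng.standard_normal(5)

    # A two-parameter model (a straight line) can be falsified by the data...
    line = np.polyfit(x, y, deg=1)
    # ...whereas a five-parameter model (a quartic) passes through all five
    # points exactly, wiggling its "trunk" through whatever it is handed.
    quartic = np.polyfit(x, y, deg=4)

    for deg, coeffs in [(1, line), (4, quartic)]:
        residuals = y - np.polyval(coeffs, x)
        print(f"degree {deg}: {deg + 1} free parameters, "
              f"max residual = {np.abs(residuals).max():.2e}")

The five-parameter fit reproduces the points perfectly no matter what they are, so agreement with the data says nothing about whether the underlying model is right.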

Simulations as a third way of establishing scientific truth are here to stay. The challenge is for the astrophysical community to wield them as transparent, reproducible tools, thereby placing them on an equally credible footing with theory and experiment.

The author is grateful to Scott Tremaine and Justin Read for feedback on a draft version of the article.

Bibliography

  • Dyson, F. 2004. A meeting with Enrico Fermi. Nature 427:297.
  • Held, I.M. 2005. The gap between simulations and understanding in climate modeling. Bulletin of the American Meteorological Society 86:1609–1614.
  • Ostriker, J.P. 1997. Obituary: Martin Schwarzschild (1912–97). Nature 388:430.
  • Poincaré, H. 2001. The Value of Science: Essential Writings of Henri Poincaré, edited by Stephen Jay Gould. New York: Modern Library.







