The Nature of Scientific Proof in the Age of Simulations
Is numerical mimicry a third way of establishing truth?
Computational Astrophysics Grows
During the 1940s through the 1980s, the late, distinguished Princeton astrophysicist Martin Schwarzschild was one of the first to use simulations to gain insights into astronomy, harnessing them to understand the evolution of stars and galaxies. Schwarzschild realized that the physical processes governing stellar structure are nonlinear and not amenable to analytical (pencil-and-paper) solutions: modeling stellar evolution requires an understanding of the physics of nuclear burning, and galaxies are hardly perfect spheres. He proceeded to investigate both problems using numerical solutions generated by what were, at the time, large computers.
Both lines of inquiry have since blossomed into respected, full-fledged subdisciplines of astrophysics. Nowadays, an astrophysicist is as likely to be found puzzling over the engineering of complex computer code as fiddling with mathematical equations on paper or chalkboard.
From the 1990s to the present, the approach of using computer simulations to test hypotheses has flourished. As technology advanced, astronomical data sets became richer, creating demand for more detailed theoretical predictions and interpretations. Computers became more prevalent and faster, alongside rapid advances in algorithmic techniques from computational science. Inexorably, the output of large simulations evolved to resemble experimental data sets in size, detail, and complexity.
Computational astrophysicists now come in three variants: engineers who build the code, researchers who formulate hypotheses and design numerical experiments, and others who process and interpret the resulting massive output. Supercomputing centers function almost like astronomical observatories. For better or worse, this third way of establishing scientific truth appears to be here to stay.