AMERICAN SCIENTIST

COMPUTING SCIENCE

# A Box of Universe

Watch the cosmos evolve in a cube one billion light-years wide

# 2-Body, 3-Body, N-Body

Even though cosmology is rooted in the general theory of relativity, the simulated universes of Millennium and Bolshoi operate on Newtonian principles. Here gravity is not a warping of space-time; it’s a force between particles. How can we get away with this retro vision of physics? Relativistic methods would be crucial if objects were moving near the speed of light, but galaxies travel at a stately few hundred kilometers per second. Newton’s equations are perfectly adequate.

Newton solved the two-body problem: Given the positions, velocities and masses of two pointlike particles interacting only with each other, he could describe the particles’ trajectories for all times, both past and future. But the three-body problem was beyond the reach of Newton’s methods. As for the n-body problem (with n larger than three), he speculated that “to define these motions by exact laws allowing of convenient calculation exceeds, unless I am mistaken, the forces of the entire human intellect.”

Newton was not mistaken: For the n-body problem, exact solutions are indeed unthinkable. No mathematical expression defines the positions and velocities of the particles for all time. However, with enough computer power we can have a “marching” solution, which takes the state of the system at one moment and walks forward a short distance along all the trajectories to yield a new state a moment later. The result is an approximation, but if the time steps are short enough, it can be highly accurate.

The inputs to an n-body calculation are the masses of the n particles (which never change) and their spatial coordinates and velocities at some initial time t. Each particle feels a gravitational attraction to every other particle. For particles i and j the magnitude of this force is given by Newton’s equation F = Gmᵢmⱼ/r², where mᵢ and mⱼ are the particle masses and r is the distance between them. (The constant G merely reconciles units of mass and length.) The total force acting on a given particle is the vector sum of all the pairwise forces.

Another Newtonian equation relates force to acceleration: F=ma. Once we know a particle’s acceleration, we can assign it a new velocity and then calculate its motion over some brief interval Δt. This process yields a new set of particle positions and velocities at time t + Δt, which serve as inputs to the next round of computations.
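The two Newtonian equations and the marching procedure can be sketched in a few lines of Python. This is only an illustration of the direct particle-particle scheme described above, not the integrator a production cosmology code would use: the function name is invented, G is set to 1 in arbitrary simulation units, and the update is a simple one-step velocity-then-position rule.

```python
import math

G = 1.0  # gravitational constant in simulation units (illustrative assumption)

def nbody_step(masses, positions, velocities, dt):
    """Advance all particles by one time step dt using direct summation.

    positions and velocities are lists of [x, y, z] vectors. This is the
    straightforward O(n^2) particle-particle method from the text.
    """
    n = len(masses)
    # Accumulate the vector sum of all pairwise forces on each particle.
    forces = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            dx = [positions[j][k] - positions[i][k] for k in range(3)]
            r = math.sqrt(sum(d * d for d in dx))
            f = G * masses[i] * masses[j] / (r * r)  # F = G m_i m_j / r^2
            for k in range(3):
                forces[i][k] += f * dx[k] / r   # attraction of i toward j
                forces[j][k] -= f * dx[k] / r   # equal and opposite on j
    # F = ma gives each acceleration; update velocity, then position.
    for i in range(n):
        for k in range(3):
            velocities[i][k] += forces[i][k] / masses[i] * dt
            positions[i][k] += velocities[i][k] * dt
    return positions, velocities
```

For two equal unit masses released from rest, a single call draws the pair together symmetrically, and repeated calls with a small dt walk the whole system forward along its trajectories, one moment at a time.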

In cosmological n-body simulations, n can be a number in the billions, which imposes a formidable computational burden. There are n(n–1)/2 pairs of particles requiring a force calculation at each time step. With 10⁹ particles, that’s 5×10¹⁷ evaluations of the force equation, far beyond the limits of computer capacity, not to mention human patience. Thus the straightforward algorithm described above is not much use in practice.

The key to solving this problem is to lump together groups of nearby particles, summing their contribution to the gravitational field. This coarsening of the system has a small cost in accuracy but a major benefit in speed. Whereas the direct particle-particle method requires on the order of n² calculations, a hierarchy of lumped masses offers hope of reducing the computation time to n log n. For the billion-particle case, that’s the difference between 10¹⁷ evaluations and 10¹⁰.
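The arithmetic behind those figures is easy to check directly. The snippet below counts the pairs exactly and uses a base-2 logarithm for the n log n estimate; the choice of logarithm base is an illustrative assumption, since the text quotes only order-of-magnitude values.

```python
import math

n = 10**9  # a billion particles, as in the simulations discussed here

# Direct particle-particle method: one force evaluation per pair.
pairwise = n * (n - 1) // 2        # about 5×10^17

# Hierarchical (tree) method: roughly n log n evaluations.
tree = int(n * math.log2(n))       # about 3×10^10

print(pairwise, tree)
```

The seven-orders-of-magnitude gap between the two counts is what makes billion-particle runs such as Bolshoi feasible at all.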

The Bolshoi simulation employed an algorithm called the adaptive refinement tree, devised in the 1990s by Andrey V. Kravtsov, now of the University of Chicago, Klypin and Alexei M. Khokhlov of the U.S. Naval Research Laboratory. The procedure starts by dividing the cubical simulation volume into a grid of smaller cubical cells. Each of these level-0 cells can be further divided into eight level-1 cells, each of which can be split into eight level-2 cells, and so on. The splitting process continues until the number of particles in a cell falls below a threshold. The subdivision level can vary across the grid, so that costly high-resolution computations are done only where needed; but adjacent cells are allowed to differ by no more than one level, to avoid sharp discontinuities.
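The recursive splitting rule can be sketched as follows. This is only a cartoon of the refinement idea, not the adaptive refinement tree itself: the real code also enforces the one-level limit between neighboring cells and builds the gravitational field on the resulting mesh, both of which this sketch omits.

```python
def subdivide(cell, particles, threshold, level=0, max_level=10):
    """Recursively split a cubical cell into eight octants until each
    holds fewer than `threshold` particles (or max_level is reached).

    `cell` is (corner, size): the low corner [x, y, z] and edge length.
    Returns a list of (cell, particles, level) leaf records. Particles
    exactly on a cell's far boundary are dropped in this sketch.
    """
    corner, size = cell
    if len(particles) < threshold or level == max_level:
        return [(cell, particles, level)]
    half = size / 2.0
    leaves = []
    # Eight octants, indexed by which half of each axis they occupy.
    for ix in (0, 1):
        for iy in (0, 1):
            for iz in (0, 1):
                sub_corner = [corner[0] + ix * half,
                              corner[1] + iy * half,
                              corner[2] + iz * half]
                inside = [p for p in particles
                          if all(sub_corner[k] <= p[k] < sub_corner[k] + half
                                 for k in range(3))]
                leaves += subdivide((sub_corner, half), inside,
                                    threshold, level + 1, max_level)
    return leaves
```

Run on a unit cube with a few particles crowded into one corner and a stray particle elsewhere, the crowded corner ends up several levels deep while the sparse region stays at level 1, which is exactly the point: resolution is spent only where the particles are.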