
COMPUTING SCIENCE

Getting Your Quarks in a Row

A tidy lattice is the key to computing with quantum fields

Brian Hayes


QCD on a Chip

Going beyond toy programs to research models is clearly a big step. Lepage writes of the lattice method:

Early enthusiasm for such an approach to QCD, back when QCD was first invented, quickly gave way to the grim realization that very large computers would be needed....

It's not hard to see where the computational demand comes from. A lattice for a typical experiment might have 32 nodes along each of the three spatial dimensions and 128 nodes along the time dimension. That's roughly 4 million nodes altogether, and 16 million links between nodes. Gathering a statistically valid sample of random configurations from such a lattice is an arduous process.
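
For the record, here is the arithmetic behind those counts, sketched in a few lines of Python; the lattice dimensions are the ones quoted above:

    # Lattice of 32 sites in each spatial direction and 128 in time.
    spatial, temporal = 32, 128

    nodes = spatial**3 * temporal   # 4,194,304 sites -- "roughly 4 million"
    links = 4 * nodes               # each site owns one link in each of the
                                    # four positive directions: ~16.8 million

    print(f"nodes: {nodes:,}")      # nodes: 4,194,304
    print(f"links: {links:,}")      # links: 16,777,216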

Some lattice QCD simulations are run on "commodity clusters"—machines assembled out of hundreds or thousands of off-the-shelf computers. But there is also a long tradition of building computers designed explicitly for lattice computations. The task is one that lends itself to highly parallel architectures; indeed, one obvious approach is to build a network of processors that mirrors the structure of the lattice itself.
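
To make that idea concrete, here is a minimal sketch (in Python) of how such a lattice might be carved up among processors. The 4 x 4 x 4 x 8 processor grid is purely illustrative, not the layout of any actual machine:

    # Hypothetical domain decomposition: divide the 32 x 32 x 32 x 128 lattice
    # into equal sublattices, one per processor, so that the processor network
    # mirrors the geometry of the lattice itself.
    lattice = (32, 32, 32, 128)
    proc_grid = (4, 4, 4, 8)        # 512 processors in this illustration

    local = tuple(L // P for L, P in zip(lattice, proc_grid))
    print("sites per processor:", local)   # (8, 8, 8, 16) -> 8,192 local sites

    # Each processor then exchanges only the boundary layer of its sublattice
    # with its nearest neighbors in the processor grid, just as each lattice
    # site couples only to its nearest-neighbor sites.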

Image: Two QCDOC machines at Brookhaven National Laboratory.

One series of dedicated machines is known as QCDOC, for QCD on a chip. The chip in question is a customized version of the IBM PowerPC microprocessor, with specialized hardware for interprocessor communication. Some 12,288 processors are organized in a six-dimensional mesh, so that each processor communicates directly with 12 nearest neighbors. Three such machines have been built, two at Brookhaven National Laboratory and the third at the University of Edinburgh.
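
The communication pattern of such a mesh is easy to sketch. In the toy Python code below, each node has a coordinate in a six-dimensional torus, and its 12 neighbors are found by stepping one unit forward or backward along each axis; the particular mesh dimensions are an assumption, chosen only so that the node count comes out to 12,288:

    # A node in a six-dimensional torus talks to 2 * 6 = 12 nearest neighbors:
    # one step forward and one back along each axis, wrapping at the edges.
    def neighbors(coord, dims):
        result = []
        for axis in range(len(dims)):
            for step in (+1, -1):
                n = list(coord)
                n[axis] = (n[axis] + step) % dims[axis]
                result.append(tuple(n))
        return result

    dims = (4, 4, 4, 4, 4, 12)      # hypothetical mesh: 4^5 * 12 = 12,288 nodes
    print(len(neighbors((0, 0, 0, 0, 0, 0), dims)))   # 12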

The QCDOC machines were completed in 2005, and attention is now turning to a new generation of special-purpose processors. Ideas under study include chips with multiple "cores," or subprocessors, and harnessing graphics chips for lattice calculations.

Meanwhile, algorithmic improvements may be just as important as faster hardware. The computational cost of a lattice QCD simulation depends critically on the lattice spacing a; specifically, the cost scales as 1/a^6. For a long time the conventional wisdom held that a must be less than about 0.1 fermi for accurate results. Algorithmic refinements that allow a to be increased to 0.3 or 0.4 fermi have a tremendous payoff in efficiency. If a simulation at a = 0.1 fermi has a cost of 1,000,000 (in some arbitrary units), the same simulation at a = 0.4 fermi costs less than 250.
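
Those figures follow directly from the 1/a^6 scaling; a short calculation makes the payoff explicit:

    # Cost of a simulation relative to a reference run at a = 0.1 fermi,
    # which is assigned 1,000,000 arbitrary units, assuming cost ~ 1/a^6.
    def relative_cost(a, a_ref=0.1, cost_ref=1_000_000):
        return cost_ref * (a_ref / a) ** 6

    print(round(relative_cost(0.1)))   # 1000000
    print(round(relative_cost(0.4)))   # 244 -- "less than 250"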

