
COMPUTING SCIENCE

Hip-Hop Physics

Electrons dance to a quantum beat in the Hubbard model of solid-state physics

Brian Hayes

Why Is Hubbard So Hard?

[Figure: spin configurations]
The long-range correlations that make the Hubbard model interesting also make it hard to solve. One aspect of this difficulty is the vast number of possible configurations. On a 20-site lattice, 10 spin-up electrons and 10 spin-down electrons can be arranged in more than 34 billion ways. Yet this combinatorial explosion is not the main impediment to working out the fate of a Hubbard system. Other models in statistical mechanics also have huge numbers of configurations, yet various methods of analysis or simulation yield accurate results in those cases.
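As a check on that count, a few lines of Python (the function name is my own) reproduce the figures cited in this section: each spin species independently chooses which sites to occupy, so the two binomial coefficients multiply.

    from math import comb

    # Number of ways to place n_up spin-up and n_dn spin-down electrons
    # on a lattice of n_sites sites. The two spin species choose their
    # sites independently, so the counts multiply.
    def hubbard_configurations(n_sites, n_up, n_dn):
        return comb(n_sites, n_up) * comb(n_sites, n_dn)

    print(hubbard_configurations(20, 10, 10))  # 34,134,779,536 -- over 34 billion
    print(hubbard_configurations(16, 8, 8))    # 165,636,900, the 16-site count mentioned below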

The real source of difficulty in the Hubbard model is the quantum entanglement of all the electrons. In the Ising model we can flip a spin at one site and consider only the local consequences—the change in energy between that spin and its immediate neighbors. In the quantum treatment of the Hubbard model, shifting an electron from one site to another can alter the state of the system arbitrarily far away. (The quantum state is defined by a wave function that can be smeared out over the entire lattice.)
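To make the contrast concrete, here is a sketch in Python of the local bookkeeping that suffices for an Ising update (the names are illustrative, and the coupling constant is set to 1). Nothing so cheap exists for the Hubbard model, where a single move can shift the amplitude of every configuration in the wave function.

    import numpy as np

    # Energy cost of flipping the Ising spin at site (i, j) on a periodic
    # lattice of +1/-1 spins. Only the four bonds touching that site
    # change, so the update is purely local.
    def ising_flip_cost(spins, i, j):
        n, m = spins.shape
        neighbors = (spins[(i - 1) % n, j] + spins[(i + 1) % n, j]
                     + spins[i, (j - 1) % m] + spins[i, (j + 1) % m])
        # With E = -sum over bonds of s_a * s_b, flipping the spin changes
        # the energy by 2 * s * (sum of its neighbors).
        return 2 * spins[i, j] * neighbors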

Following the evolution of the Hubbard system by means of exact computations is all but unthinkable except for the tiniest models. It involves working with a matrix of size M × M, where M is the number of possible configurations. Michael Creutz of Brookhaven National Laboratory has experimented with a few such computations, reducing the storage requirement by tricks such as encoding lattice configurations in the bits of an integer. He was able to study a one-dimensional lattice of six sites—essentially a benzene molecule—with a matrix of 400 × 400 elements. But he points out that a two-dimensional lattice of just 16 sites would have some 165,636,900 configurations.
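Creutz's encoding trick is easy to sketch: give each spin species one bit per site, so that a whole configuration packs into a single small integer. The Python below is my own illustration of the idea, not his program; enumerating the half-filled states of the six-site chain recovers the 400-configuration basis.

    from itertools import combinations

    N_SITES = 6  # the one-dimensional "benzene" lattice

    def occupations(n_electrons):
        """All bit patterns placing n_electrons of one spin on N_SITES sites."""
        states = []
        for sites in combinations(range(N_SITES), n_electrons):
            bits = 0
            for s in sites:
                bits |= 1 << s  # bit s set means site s is occupied
            states.append(bits)
        return states

    # A full configuration pairs an up pattern with a down pattern,
    # packed into the low and high bits of one integer.
    up_states = occupations(3)
    dn_states = occupations(3)
    basis = [(up << N_SITES) | dn for up in up_states for dn in dn_states]
    print(len(basis))  # 400 -- hence Creutz's 400 x 400 matrix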

The problems of working with these large matrices go beyond mere demands for storage space and computing time. It turns out that the matrix is often “ill conditioned”; when the algorithm calls for computing a small difference between two large numbers, all accuracy is lost and the matrix fills with useless numerical noise.
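The failure mode is ordinary floating-point cancellation, which a toy example shows in miniature (this illustrates the arithmetic hazard, not the Hubbard algorithm itself):

    # Subtracting two nearly equal large numbers leaves only rounding noise.
    a = 1.0e16 + 1.0   # the added 1.0 falls below double precision's resolution here
    b = 1.0e16
    print(a - b)       # prints 0.0, not 1.0 -- the difference has been lost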

But I should not leave the impression that computation is a lost cause in the Hubbard model. On the contrary, it has been the main source of insight over the past 40 years. No one algorithm has conquered the problem overall, but a toolkit of specialized methods has tamed many corners of it. For example, there are algorithms that perform well for values of N near half-filling, and other methods that work best when U is large. One strategy, pioneered by a group headquartered at the University of California, Santa Barbara, takes the counterintuitive step of replacing a large matrix with an even larger one—but the replacement matrix ameliorates the problem of lost numerical accuracy.



