
COMPUTING SCIENCE

g-OLOGY

Brian Hayes

QED

The naive mental picture of an electron is a blob of mass and electric charge, spinning on its axis like a tiny planet. If we take this image seriously, the moving charge on the spinning particle's surface has to be regarded as an electric current, which ought to generate a magnetic field. The g factor (also known as the gyromagnetic ratio) is the constant that determines how much magnetic field arises from a given amount of charge, mass and spin. The formula is:

µ = g (e / 2m) s

where µ is the magnetic moment, e the electric charge, m the mass and s the spin angular momentum (all expressed in appropriate units). Early experimental evidence suggested that the numerical value of g is approximately 2.
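
As a rough illustration (a small Python sketch, with standard CODATA values assumed rather than taken from the article), plugging g = 2 and s = ħ/2 into the formula gives a magnetic moment of about 9.27 × 10⁻²⁴ joules per tesla, the Bohr magneton:

```python
# Rough sketch: evaluate mu = g * (e / 2m) * s for a single electron,
# assuming g = 2 and spin s = hbar/2, with standard CODATA constants.
e    = 1.602176634e-19    # electron charge, coulombs
m    = 9.1093837015e-31   # electron mass, kilograms
hbar = 1.054571817e-34    # reduced Planck constant, joule-seconds

g = 2.0                   # Dirac's value for an isolated electron
s = hbar / 2              # spin angular momentum of the electron

mu = g * (e / (2 * m)) * s
print(f"magnetic moment: {mu:.4e} J/T")   # about 9.274e-24 J/T, the Bohr magneton
```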

In the 1920s P. A. M. Dirac created a new and not-so-naive theory of electrons in which g was no longer just an arbitrary constant to be measured experimentally; instead, the value of g was specified directly by the theory. For an electron in total isolation, Dirac calculated that g is exactly 2. We now know that this result was slightly off the mark; g is greater than 2 by roughly one part in a thousand. And yet Dirac's mathematics was not wrong. The source of the error is that no electron is ever truly alone; even in a perfect vacuum, an electron is wrapped in a halo of particles and antiparticles, which are continually being emitted and absorbed, created and annihilated. Interactions with these "virtual" particles alter various properties of the electron, including the g factor.
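
The leading correction is well known from QED (Schwinger's α/π term, a standard result assumed here rather than derived in the passage above), and a quick computation shows that it does amount to roughly one part in a thousand:

```python
import math

# Leading QED correction to g: g is approximately 2 + alpha/pi + ...,
# assuming the standard value of the fine-structure constant alpha.
alpha = 1 / 137.035999084              # fine-structure constant
g = 2 + alpha / math.pi                # first-order QED estimate

print(f"g       ~ {g:.7f}")            # about 2.0023228
print(f"(g-2)/g ~ {(g - 2) / g:.6f}")  # about 0.00116 -- one part in a thousand
```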

Methods for accurately calculating g were devised in the 1940s as part of a thorough overhaul of the theory of electrons—a theory called quantum electrodynamics, or QED. That the calculation of g can be honed to such a razor edge of precision is something of a fluke. The mass, charge and magnetic moment of the electron are known only to much lower accuracy; so how can g, which is defined in terms of these quantities, be pinned down more closely? The answer is that g is a dimensionless ratio, calculated and measured in such a way that uncertainties in all those other factors cancel out.

Experimental measurements of g benefit from another fortunate circumstance. The experiments can be arranged to determine not g itself but the difference between g and 2; thus the measurements have come to be known as "g minus 2 experiments." Because g–2 is only about a thousandth of g, the measurement gains three decimal places of precision for free.
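
To see the bookkeeping behind that claim, suppose an experiment pins down g–2 to one part in a million (a level of precision chosen only for illustration); because g–2 is roughly 0.002, the same absolute uncertainty amounts to about one part in a billion on g itself:

```python
# Why measuring g-2 instead of g gains about three decimal places:
# a given relative precision on the small quantity g-2 translates into a
# relative precision on g that is roughly a thousand times better.
g_minus_2 = 2.3e-3            # rough size of g - 2
rel_err_g_minus_2 = 1e-6      # suppose g-2 is measured to 1 part in 10^6 (illustrative)

abs_err = g_minus_2 * rel_err_g_minus_2     # absolute uncertainty carried over to g
rel_err_g = abs_err / 2.0                   # relative to g itself, which is about 2
print(f"relative error on g: {rel_err_g:.1e}")   # about 1e-9 -- three extra digits
```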




