COMPUTING SCIENCE

The Higher Arithmetic

How to count to a zillion without falling off the end of the number line

Brian Hayes

Safety in Numbers

[Figure: The magnitude of a number increases…]

It’s tempting to pretend that floating-point arithmetic is simply real-number arithmetic in silicon. This attitude is encouraged by programming languages that use the label real for floating-point variables. But of course floating-point numbers are not real numbers; at best they provide a finite model of the infinite real number line.

Unlike the real numbers, the floating-point universe is not a closed system. When you multiply two floating-point numbers, there’s a good chance that the product—the real product, as calculated in real arithmetic—will not be a floating-point number. This leads to three kinds of problems.
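To see this openness concretely, here is a minimal sketch in Python, whose float type is (on virtually all platforms) an IEEE 754 double. The two factors below are exactly representable, but their true product is not, so the result must be rounded:

    # Two exactly representable doubles whose real product is not representable.
    x = 1.0 + 2.0**-52                 # 1 plus one unit in the last place
    product = x * x                    # true value: 1 + 2^-51 + 2^-104
    print(product == 1.0 + 2.0**-51)   # True: the 2^-104 term was rounded away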

The first problem is rounding error. A number that falls between two floating-point values has to be rounded by shifting it to one or the other of the nearest representable numbers. The resulting loss of accuracy is usually small and inconsequential, but circumstances can conspire to produce numerical disasters. A notably risky operation is subtracting one large quantity from another, which can wipe out all the significant digits in the small difference. Textbooks on numerical analysis are heavy with advice on how to guard against such events; mostly it comes down to “Don’t do that.”
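A small Python sketch of such a disaster, again assuming IEEE double precision: adding 1 to 10^16 produces no change at all, so the subtraction that ought to recover the 1 yields zero instead:

    # Catastrophic cancellation: the small addend vanishes in the rounding.
    big = 1.0e16
    small = 1.0
    difference = (big + small) - big   # mathematically equal to 1.0
    print(difference)                  # prints 0.0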

The second problem is overflow, when a number goes off the scale. The IEEE standard allows two responses to this situation. The computer can halt the computation and report an error, or it can substitute a special marker, “∞,” for the oversize number. The latter option is designed to mimic the properties of mathematical infinities; for example, ∞ + 1 = ∞. Because of this behavior, floating-point infinity is a black hole: Once you get into it, there is no way out, and all information about where you came from is annihilated.
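Python’s float takes the second response by default, which makes the black hole easy to demonstrate in a brief sketch:

    # Overflow produces inf, and inf swallows all further information.
    x = 1.0e308 * 10.0     # exceeds the largest double (about 1.8e308)
    print(x)               # inf
    print(x + 1.0 == x)    # True: inf + 1 = inf
    print(x - x)           # nan: no way to recover where we came from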

The third hazard is underflow, where a number too small to represent collapses to zero. In real arithmetic, a sequence like 1/2, 1/4, 1/8,… can go on indefinitely, but in a finite floating-point system there must be a smallest nonzero number. On the surface, underflow looks much less serious than overflow. After all, if a number is so small that the computer can’t distinguish it from zero, what’s the harm of making it exactly zero? But this reasoning is misleading. In the exponential space of floating-point numbers, the distance from, say, 2^−127 to zero is exactly the same as the distance from 2^127 to infinity. As a practical matter, underflow is a frequent cause of failure in numerical computations.
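The halving sequence makes a nice experiment. In the sketch below (assuming IEEE doubles, which underflow gradually through subnormal numbers before reaching zero), the loop terminates after 1,075 steps:

    # In real arithmetic this loop would never terminate.
    x = 1.0
    steps = 0
    while x > 0.0:
        x /= 2.0
        steps += 1
    print(steps)   # 1075: one step past 2^-1074, the smallest subnormal double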

Problems of rounding, overflow and underflow cannot be entirely avoided in any finite number system. They can be ameliorated, however, by adopting a format with higher precision and a wider range—by throwing more bits at the problem. This is one approach taken in a recent revision of the IEEE standard, approved in June 2008. It includes a new 128-bit floating-point format, supporting numbers as large as 2^16,383 (or about 10^4,932).
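Python has no native 128-bit binary format, but its decimal module can play the same throw-more-bits game: with the working precision raised, the cancellation example above comes out right. (This is decimal rather than binary floating point, so it illustrates the strategy, not the IEEE format itself.)

    from decimal import Decimal, getcontext

    getcontext().prec = 50                # carry 50 significant digits
    big, small = Decimal("1e16"), Decimal("1")
    print((big + small) - big)            # prints 1, as real arithmetic demands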



