COMPUTING SCIENCE

A Lucid Interval

Brian Hayes

Perils of Precision

Why should we have to put up with errors and approximations in computing? Why can't the computer just give the right answer?

Sometimes it can. Calculations done entirely with integers yield exact results as long as the numbers are not too big for the space allotted. Often the allotted space is quite scanty—as little as 16 bits—but this is an artificial constraint; in principle a computer can store the exact value of any integer that will fit in its memory.
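To see this in practice, here is a minimal sketch in Python, whose built-in integers happen to be of the unlimited kind; any language with arbitrary-precision integers would behave the same way:

```python
# Python integers are exact "bignums," limited only by available memory --
# far beyond the scanty 16 bits of many built-in integer types.
n = 2**100
print(n)                          # 1267650600228229401496703205376

# Adding, subtracting and multiplying such integers still gives exact results.
print(n * n - (n - 1) * (n + 1))  # 1, computed without any rounding
```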

Integers have the pleasant property that they form a closed set under addition, subtraction and multiplication; in other words, when you add, subtract or multiply any two integers, you always get another integer. Absent from this list of operations is division, because the quotient of two integers is not always an integer. If we allow numbers to be divided, we must go beyond the integers to the rational numbers, such as 2/3 or 3/2. But rational values can also be represented exactly in the computer; all that's needed is to keep track of a pair of integers, which are interpreted as the numerator and the denominator. Thus the constant 1/10, which caused such havoc in the Patriot software, could have been encoded in the two binary integers 1 and 1010. A few programming languages—notably Lisp and its offspring—provide integers of unlimited size ("bignums") and exact rationals as built-in data types. Similar facilities can be added to other languages.
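The same idea can be tried directly in Python, whose standard fractions module supplies exact rationals built on those unlimited integers (a small sketch, not the Patriot code itself):

```python
from fractions import Fraction

# The troublesome constant 1/10, stored exactly as the integer pair (1, 10).
tenth = Fraction(1, 10)

# Ten exact tenths make exactly one: no drift, no roundoff.
print(sum([tenth] * 10) == 1)     # True

# The same sum in binary floating point falls just short of 1.
print(sum([0.1] * 10) == 1.0)     # False
```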

Figure 2. Numerically unstable formulas

If we can have exact numerical computation, why would anyone choose approximate arithmetic? One reason is that there are numbers beyond the rationals: No ratio of finite integers gives the exact value of √2 or π or log₂3. Perhaps more important, exact computations tend to become hopelessly unwieldy. Consider the series 1 + 1/2 + 1/4 + 1/8 + 1/16 + .... If you sum a thousand terms, the result is vanishingly close to 2, but the exact rational representation fills two thousand binary digits. Doing arithmetic with such obese numbers is slow and cumbersome. And outside the realm of pure mathematics the cost of maintaining exactness is seldom justified. Nothing in the physical world can be measured with such precision anyway.
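The bloat is easy to measure with the same exact-rational machinery; in this Python sketch, the sum of a thousand terms differs from 2 by only 1/2^999, yet its numerator and denominator together occupy roughly two thousand bits:

```python
from fractions import Fraction

# Sum the first thousand terms 1 + 1/2 + 1/4 + ... + 1/2^999 exactly.
total = sum(Fraction(1, 2**k) for k in range(1000))

print(float(total))                     # 2.0 to double precision
print(total.numerator.bit_length())     # 1000 bits in the numerator
print(total.denominator.bit_length())   # 1000 bits in the denominator
```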

Figure 3. Comparisons between intervals

The usual alternative to exact rational computations is floating-point arithmetic, a scheme that resembles scientific notation. A number takes the form D × β^E, where D is called the significand, E is the exponent, and β is the base (which in modern computer systems is always 2). For example, the decimal number 6.0 can be expressed as 0.75 × 2^3, with significand 0.75 and exponent 3. In this case the representation is exact, in binary as well as in decimal (the binary significand is 0.11). Other numbers are not so lucky. As noted above, no finite significand corresponds exactly to the decimal fraction 1/10. Furthermore, it's obvious that some numbers must be missing from the system simply because it has only a finite capacity. In one common floating-point format, the total space available for storing the significand and the exponent is 32 bits, and so the system cannot possibly hold more than 2^32 distinct numbers, or about 4 billion of them. If you need a number that is not a member of this finite set, the best you can do is choose the nearest member as an approximation. The difference between the true value and the approximation is the roundoff error.
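The gap between 1/10 and its nearest floating-point neighbor can be inspected directly. The sketch below again uses Python, whose floats are 64-bit doubles rather than the 32-bit format described above, but the principle is the same:

```python
from decimal import Decimal
from fractions import Fraction

# The double-precision value nearest to 1/10; printing its exact decimal
# expansion reveals what is actually stored behind the friendly "0.1".
x = 0.1
print(Decimal(x))
# 0.1000000000000000055511151231257827021181583404541015625

# The roundoff error: the distance between the stored value and the true 1/10.
print(float(Fraction(x) - Fraction(1, 10)))   # roughly 5.6e-18
```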

Interval arithmetic cannot eliminate roundoff error, but it can fence it in. When a result x falls between two floating-point values, those nearest representable numbers become the lower and upper bounds of the enclosing interval. But this is not the end of the story. Subsequent computations could yield a new interval whose end points are themselves numbers with no exact floating-point representation. In this situation, where even the interval has to be approximated, rounding must be done with care. To preserve the guarantee that the true value always lies within the interval, the end points must be rounded "outward": the lower bound is rounded down and the upper bound is rounded up.
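As a rough illustration of outward rounding (a toy sketch, not a real interval library), the Python code below nudges each end point one representable number outward with math.nextafter, available in Python 3.9 and later; serious implementations instead switch the processor's rounding direction, which yields tighter enclosures:

```python
import math

def add_intervals(a, b):
    """Add two intervals, rounding outward so the true sum cannot escape."""
    lo = math.nextafter(a[0] + b[0], -math.inf)   # push the lower end down
    hi = math.nextafter(a[1] + b[1], math.inf)    # push the upper end up
    return (lo, hi)

# An interval guaranteed to enclose the unrepresentable value 1/10.
tenth = (math.nextafter(0.1, 0.0), math.nextafter(0.1, math.inf))

total = (0.0, 0.0)
for _ in range(10):
    total = add_intervals(total, tenth)

print(total)                          # a narrow interval straddling 1
print(total[0] <= 1 <= total[1])      # True -- the true sum is fenced in
```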
