

The Higher Arithmetic

How to count to a zillion without falling off the end of the number line

Brian Hayes

Tapering Off, or Rolling Off a Log

By now, IEEE floating-point methods are so firmly established that they often seem like the only way to do arithmetic with a computer. But many alternatives have been discussed over the years. Here I shall describe two of them briefly and take a somewhat closer look at a third idea.

The first family of proposals might be viewed more as an enhancement of floating point than as a replacement. The idea is to make the trade-off between precision and range an adjustable parameter. If a calculation does not require very large or very small numbers, then the format can devote more bits to the significand. Other programs might want to sacrifice precision in order to gain wider scope for the exponent. To make such flexibility possible, it’s necessary to set aside a few bits to keep track of how the other bits are allocated. (Of course those bookkeeping bits are thereby made unavailable for either the exponent or the significand.)
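The bookkeeping arithmetic can be made concrete with a small sketch. This is a hypothetical toy format, not any of the published proposals: a 32-bit word spends 3 bits recording a field w, the exponent then gets w+1 bits, and the significand gets whatever remains.

```python
WORD = 32     # total bits in the toy format (an assumption for illustration)
POINTER = 3   # bookkeeping bits that record the allocation

def allocation(w):
    """Bits left for exponent and significand when the bookkeeping
    field holds w (0..7), making the exponent w+1 bits wide."""
    exp_bits = w + 1
    sig_bits = WORD - POINTER - exp_bits
    return exp_bits, sig_bits

# Each setting trades precision for range; log10(2) ~ 0.30103
# converts significand bits to rough decimal digits.
for w in range(8):
    e, s = allocation(w)
    print(f"exponent {e} bits, significand {s} bits "
          f"(~{s * 0.30103:.1f} decimal digits)")
```

A wide-range setting (w = 7) still keeps about 21 bits of significand, while a high-precision setting (w = 0) keeps 28, which is the whole trade-off in miniature.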

A scheme of this kind, called tapered floating point, was proposed as early as 1971 by Robert Morris, who was then at Bell Laboratories. A decade later, more elaborate plans were published by Shouichi Matsui and Masao Iri of the University of Tokyo and by Hozumi Hamada of Hitachi, Ltd. More recently, Alan Feldstein of Arizona State University and Peter R. Turner of Clarkson University have described a tapered scheme that works exactly like a conventional floating-point system except when overflow or underflow threaten.

The second alternative would replace numbers by their logarithms. For example, in a decimal version of the plan the number 751 would be stored as 2.87564, since 10^2.87564 ≈ 751. This plan is not as radical a departure as it might seem, because floating point is already a semi-logarithmic notation: The exponent of a floating-point number is the integer part of a logarithm. Thus the two formats record essentially the same information.

If the systems are so similar, what’s gained by the logarithmic alternative? The motive is the same as that for developing logarithms in the first place: They facilitate multiplication and division, reducing those operations to addition and subtraction. For positive numbers a and b, log(ab)=log(a)+log(b). In general, multiplying takes more work than adding, so this substitution is a net gain. But there’s another side to the coin: Although logarithms make multiplying easy, they make adding hard. Computing a+b when you have only log(a) and log(b) is not straightforward. For this reason logarithmic arithmetic is attractive mainly in specialized areas such as image processing where multiplications tend to outnumber additions.
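Both faces of the coin can be shown in a few lines. Multiplication in the log domain is a single addition; addition requires leaving the log domain, or at least an identity such as log(a+b) = log(a) + log(1 + b/a). The function names here are my own illustrative choices, not from any particular logarithmic-arithmetic library.

```python
import math

def log_mul(la, lb):
    """Multiply two numbers stored as base-10 logs: just add the logs."""
    return la + lb

def log_add(la, lb):
    """Add two numbers stored as base-10 logs -- the hard operation.
    Uses log(a+b) = log(a) + log(1 + b/a), with b/a = 10**(lb - la)."""
    if la < lb:                      # keep the larger log first
        la, lb = lb, la              # so 10**(lb - la) cannot overflow
    return la + math.log10(1 + 10 ** (lb - la))

la, lb = math.log10(751), math.log10(2)
print(10 ** log_mul(la, lb))   # ~1502 (751 * 2)
print(10 ** log_add(la, lb))   # ~753  (751 + 2)
```

Note that log_add needs an exponentiation and a logarithm of its own, which is exactly why additions are costly in this representation.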


