
Quite a Spread

To the Editors:

Brian Hayes's wonderful article about interval arithmetic (Computing Science, November–December) presents a very good introduction to interval computations, especially from the viewpoint of handling round-off errors.

However, in this otherwise excellent article there is one minor inaccuracy. The article states that for multiplication of two intervals, "in the worst case, there is no choice but to compute all four of the combinations and select the extrema." Although it may not be common knowledge (even in our interval-computations community), there is a way to find the product of two intervals by using no more than three multiplications of real numbers. Gerhard Heindl first presented the corresponding algorithm in "An improved algorithm for computing the product of two machine intervals" (Technical Report IAGMPI-9304, Department of Mathematics, University of Wuppertal, Germany, 1993). Several other 3-multiplication algorithms were later presented by Evgenija D. Popova.

It is also worth mentioning that whereas the worst-case number of multiplications is three, the average number can be reduced to two (see Hamzo, C., and V. Kreinovich, 1999, On Average Bit Complexity of Interval Arithmetic, Bulletin of the European Association for Theoretical Computer Science 68:153–156). Readers may want to look into the Hamzo article—even if they are not interested in average-case complexity—because it starts by explaining Heindl's 3-multiplication algorithm.
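For readers curious where these multiplication counts come from, the sketch below (a Python illustration of our own, not the algorithm from either cited paper) contrasts the naive four-product rule that Hayes describes with the standard sign-based case analysis: every case except "both intervals straddle zero" needs only two real multiplications, and that remaining case is the one Heindl's algorithm reduces from four products to three.

```python
# Illustrative sketch only (not Heindl's algorithm): interval
# multiplication, with intervals represented as (lo, hi) pairs.

def interval_mul_naive(x, y):
    """The rule quoted in the article: form all four products, take extrema."""
    a, b = x
    c, d = y
    products = (a * c, a * d, b * c, b * d)
    return (min(products), max(products))

def interval_mul_cases(x, y):
    """Sign-based case analysis: two multiplications in every case except
    when both intervals contain zero (the case Heindl reduces to three)."""
    a, b = x
    c, d = y
    if a >= 0:                                # x is nonnegative
        if c >= 0:
            return (a * c, b * d)
        elif d <= 0:
            return (b * c, a * d)
        else:                                 # y straddles zero
            return (b * c, b * d)
    elif b <= 0:                              # x is nonpositive
        if c >= 0:
            return (a * d, b * c)
        elif d <= 0:
            return (b * d, a * c)
        else:                                 # y straddles zero
            return (a * d, a * c)
    else:                                     # x straddles zero
        if c >= 0:
            return (a * d, b * d)
        elif d <= 0:
            return (b * c, a * c)
        # both straddle zero: the naive version needs all four products here
        return (min(a * d, b * c), max(a * c, b * d))
```

On typical inputs, the "both intervals straddle zero" branch is reached only a fraction of the time, which is the intuition behind the average-case count of two multiplications.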

Vladik Kreinovich
University of Texas at El Paso

To the Editors:

Unfortunately, Hayes's interesting article on interval arithmetic is marred by an absolute misstatement of fact. Hayes states "Hardware support for floating-point arithmetic came only after the IEEE published a standard for the format." The article indicates that this claim comes from G. William Walster's 1996 paper. Either Walster was wrong, or he has been misquoted. IEEE standard 754 "IEEE Standard for Binary Floating-Point Arithmetic" was first issued in 1985, but hardware implementations of floating point predate this standard by at least two decades. Indeed, by 1962, when I was first exposed to the computer industry, hardware floating point was available on most large-scale computer systems designed for scientific computation.

Frederick N. Webb
Littleton, Massachusetts

Brian Hayes replies:

Dr. Webb is correct; the first computer with hardware for floating-point arithmetic was the IBM 704, a vacuum-tube, punch-card machine from the 1950s. The error was mine, not Walster's.

