COMPUTING SCIENCE

# A Lucid Interval

## Historical Intervals

Interval arithmetic is not a new idea. Invented and reinvented several times, it has never quite made it into the mainstream of numerical computing, and yet it has never been abandoned or forgotten either.

In 1931 Rosalind Cicely Young, a recent Cambridge Ph.D., published an "algebra of many-valued quantities" that gives rules for calculating with intervals and other sets of real numbers. Of course Young and others writing in that era did not see intervals as an aid to improving the reliability of machine computation. By 1951, however, in a textbook on linear algebra, Paul S. Dwyer of the University of Michigan was describing arithmetic with intervals (he called them "range numbers") in a way clearly directed toward the needs of computation on digital devices.

A few years later, the essential ideas of interval arithmetic were set forth independently and almost simultaneously by three mathematicians: Mieczyslaw Warmus in Poland, Teruo Sunaga in Japan and Ramon E. Moore in the United States. Moore's version has been the most influential, in part because he emphasized solutions to problems of machine computation but also because he has continued for more than four decades to publish on interval methods and to promote their use.

Today the interval-methods community includes active research groups at a few dozen universities. A web site at the University of Texas at El Paso (www.cs.utep.edu/interval-comp) provides links to these groups as well as a useful archive of historical documents. The journal *Reliable Computing* (formerly *Interval Computations*) is the main publication for the field; there are also mailing lists and annual conferences. Implementations of interval arithmetic are available both as specialized programming languages and as libraries that can be linked to a program written in a standard language. There are even interval spreadsheet programs and interval calculators.

One thing the interval community has been ardently seeking—so far without success—is support for interval algorithms in standard computer hardware. Most modern processor chips come equipped with circuitry for floating-point arithmetic, which reduces the process of manipulating significands and exponents to a single machine-language instruction. In this way floating-point calculations become part of the infrastructure, available to everyone as a common resource. Analogous built-in facilities for interval computations are technologically feasible, but manufacturers have not chosen to provide them. A 1996 article by G. William Walster of Sun Microsystems asks why. Uncertainty of demand is surely one reason; chipmakers are wary of devoting resources to facilities no one might use. But Walster cites other factors as well. Hardware support for floating-point arithmetic came only after the IEEE published a standard for the format. There have been drafts of standards for interval arithmetic (the latest written by Dmitri Chiriaev and Walster in 1998), but none of the drafts has been adopted by any standards-setting body.
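In the absence of hardware support, interval arithmetic is typically done in software. As a minimal sketch (not any particular library's API), the basic rules are simple: intervals add endpoint by endpoint, and a product interval spans the minimum and maximum of the four endpoint products. A production implementation would also round each lower bound down and each upper bound up ("outward rounding") so that roundoff error never shrinks the interval; that detail is omitted here for clarity.

```python
# Illustrative interval operations on pairs (lo, hi).
# Real implementations add outward rounding at every step.

def iv_add(a, b):
    """Sum of intervals [a0, a1] and [b0, b1]: add endpoints."""
    return (a[0] + b[0], a[1] + b[1])

def iv_mul(a, b):
    """Product interval: min and max of the four endpoint products,
    since the sign of each endpoint can flip the ordering."""
    products = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
    return (min(products), max(products))

print(iv_add((1, 2), (3, 4)))   # (4, 6)
print(iv_mul((-1, 2), (3, 4)))  # (-4, 8)
```

The four-product rule in `iv_mul` is why hardware support would help: a single interval multiplication costs four floating-point multiplications plus comparisons, and a software loop pays that overhead on every operation.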
