A Lucid Interval
Although the principles of interval computing may seem obvious or even trivial, getting the algorithms right is not easy. There are subtleties. There are gotchas. The pitfalls of division by an interval that includes zero have already been mentioned. Here are a few more trouble spots.
In doing arithmetic, we often rely on mathematical laws or truths such as x + (–x) = 0 and (a + b)x = ax + bx. With intervals, some of these rules fail to hold. In general, an interval has no additive inverse; that is, given a nondegenerate interval [a,b], there is no interval [c,d] for which [a,b] + [c,d] = [0,0]. There is no multiplicative inverse either—no pair of nondegenerate intervals for which [a,b] × [c,d] = [1,1]. The reason is clear and fundamental: No valid operation can ever diminish the width of an interval, and [0,0] and [1,1] are intervals of zero width.
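The width-accumulation argument is easy to see in action. Here is a minimal sketch (the helper functions `add` and `width` are hypothetical, not from any particular interval library), showing that adding an interval to its own negation does not produce [0,0]:

```python
# An interval is represented as a pair (lo, hi) with lo <= hi.

def add(x, y):
    """Interval addition: the result's width is the sum of the inputs' widths."""
    return (x[0] + y[0], x[1] + y[1])

def width(x):
    return x[1] - x[0]

a = (1.0, 2.0)
neg_a = (-2.0, -1.0)              # the negation of a, the natural candidate "inverse"
s = add(a, neg_a)
print(s)                          # (-1.0, 1.0), not (0.0, 0.0)
print(width(s) == width(a) + width(neg_a))   # True: widths only accumulate
```

Since addition can only add widths, no nondegenerate interval can ever sum with another to the zero-width interval [0,0].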
The distributive law also fails for intervals. In an expression such as [1,2] × ([–3,–2] + [3,4]), it makes a difference whether you do the addition first and then multiply, or do two multiplications and then add. One sequence of operations gives the result [0,4], the other [–3,6]. Strictly speaking, either of these results is correct—both of them bound any true value of the original expression—but the narrower interval is surely preferable.
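Both evaluation orders can be checked directly. This sketch uses the standard definition of interval multiplication (take the min and max over the four end-point products); the helper names are my own:

```python
def add(x, y):
    return (x[0] + y[0], x[1] + y[1])

def mul(x, y):
    """Interval multiplication: bound all four end-point products."""
    products = [x[0]*y[0], x[0]*y[1], x[1]*y[0], x[1]*y[1]]
    return (min(products), max(products))

a, b, c = (1, 2), (-3, -2), (3, 4)
print(mul(a, add(b, c)))           # addition first: (0, 4)
print(add(mul(a, b), mul(a, c)))   # distribute first: (-3, 6)
```

The first ordering wins here, but in general neither order dominates; careful interval code is written to exploit whichever form gives tighter bounds.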
Another example: squaring an interval. The obvious definition [a,b]² = [a,b] × [a,b] seems to work in some cases, such as [1,2]² = [1,4]. But what about [–2,2]² = [–4,4]? Whoops! The square of a real number cannot be negative. The source of the error is treating the two appearances of [a,b] in the right-hand side of the formula as if they were independent variables; in fact, whatever value x assumes in one instance, it must be the same in the other. The same phenomenon can arise in expressions such as 2x/x. Suppose x is the interval [2,4]; then naïvely following the rules of interval arithmetic yields the answer [1,4]. But of course the correct value is 2 (or [2,2]) for any nonzero value of x.
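A dedicated squaring operation avoids the dependency problem by treating the two factors as the same variable. This sketch (hypothetical helpers, following the standard case analysis) contrasts the naïve and correct results:

```python
def mul(x, y):
    """Naive interval multiplication: treats the factors as independent."""
    products = [x[0]*y[0], x[0]*y[1], x[1]*y[0], x[1]*y[1]]
    return (min(products), max(products))

def square(x):
    """Dependency-aware square: the result can never be negative."""
    lo, hi = x
    if lo >= 0:
        return (lo*lo, hi*hi)
    if hi <= 0:
        return (hi*hi, lo*lo)
    return (0, max(lo*lo, hi*hi))   # interval straddles zero

print(mul((-2, 2), (-2, 2)))   # (-4, 4): spuriously negative
print(square((-2, 2)))         # (0, 4): correct
```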
Comparisons are yet another murky area. Computer programs rely heavily on conditional expressions such as "if (x is less than y) then...." When x and y are intervals, the comparison gets tricky. Is [1,3] less than [2,4], or not? Whereas there are just three elementary comparisons for pointlike numbers (less than, equal to and greater than), there are as many as 18 well-defined relations for intervals. It's not always obvious which one to choose, or even how to name them. (Chiriaev and Walster refer to "certainly relations" and "possibly relations.")
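The "certainly" and "possibly" flavors of less-than are easy to state precisely. In this sketch (function names are my own invention, not Chiriaev and Walster's notation), [1,3] is possibly—but not certainly—less than [2,4]:

```python
def certainly_lt(x, y):
    """Every point of x is below every point of y."""
    return x[1] < y[0]

def possibly_lt(x, y):
    """At least one point of x is below at least one point of y."""
    return x[0] < y[1]

a, b = (1, 3), (2, 4)
print(certainly_lt(a, b))   # False: a's upper end, 3, is not below b's lower end, 2
print(possibly_lt(a, b))    # True: a's lower end, 1, is below b's upper end, 4
```

Neither answer is the "right" one; a program branching on an interval comparison must decide which semantics its logic actually requires.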
Finally, look at what happens if a naïve implementation of the sine function is given an interval argument. Sometimes there is no problem: sin([30°,60°]) yields the correct interval [0.5,0.866]. But sin([30°,150°]) returns [0.5,0.5], which is an error; the right answer is [0.5,1.0]. What leads us astray is the assumption that interval calculations can be based on end points alone, which is true only for monotonic functions (those that never "change direction"). For other functions it is necessary to examine the interior of an interval for minima and maxima.
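A correct interval sine must therefore ask whether a crest (90° + 360°k) or a trough (270° + 360°k) falls inside the argument. The following sketch, working in degrees to match the example, is one way to do that check (this is an illustration, not a production routine—it ignores directed rounding of the end-point values):

```python
import math

def interval_sin_deg(lo, hi):
    """Interval sine over [lo, hi] degrees: end points alone are not enough;
    also check for interior maxima (90 + 360k) and minima (270 + 360k)."""
    candidates = [math.sin(math.radians(lo)), math.sin(math.radians(hi))]
    # Is there an integer k with lo <= 90 + 360k <= hi?
    if math.floor((hi - 90) / 360) >= math.ceil((lo - 90) / 360):
        candidates.append(1.0)
    # Is there an integer k with lo <= 270 + 360k <= hi?
    if math.floor((hi - 270) / 360) >= math.ceil((lo - 270) / 360):
        candidates.append(-1.0)
    return (min(candidates), max(candidates))

print(interval_sin_deg(30, 60))    # approximately (0.5, 0.866): monotonic case
print(interval_sin_deg(30, 150))   # (0.5, 1.0): the crest at 90 degrees lies inside
```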
In fairness, it should be noted that many cherished mathematical truths fail even in ordinary (noninterval) floating-point arithmetic. An identity such as x = √x² is not to be trusted in floating point. And there are remedies for all the interval gotchas mentioned above—or at least strategies for coping with them. M. H. van Emden has shown that by building on the existing IEEE floating-point standard (including its facilities for representing infinity), it would be possible to create a system of interval arithmetic that would never fall into an error state, not even as a result of division by zero. (Of course the system would sometimes return results such as [–∞, +∞], which may be of questionable utility.)
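The failure of x = √x² in floating point takes only a line to witness—squaring a large enough number overflows to infinity, and the square root cannot undo that:

```python
import math

x = 1e200
print(x * x)                   # inf: the square overflows the double-precision range
print(math.sqrt(x * x) == x)   # False: sqrt(inf) is inf, not 1e200
```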