COMPUTING SCIENCE

The Higher Arithmetic

How to count to a zillion without falling off the end of the number line

Brian Hayes

[Figure: The National Debt Clock adds a 14th digit.]

Last year the National Debt Clock in New York City ran out of digits. The billboard-size electronic counter, mounted on a wall near Times Square, overflowed when the public debt reached $10 trillion, or 10¹³ dollars. The crisis was resolved by squeezing another digit into the space occupied by the dollar sign. Now a new clock is on order, with room for growth; it won’t fill up until the debt reaches a quadrillion (10¹⁵) dollars.

The incident of the Debt Clock brings to mind a comment made by Richard Feynman in the 1980s—back when mere billions still had the power to impress:

There are 10¹¹ stars in the galaxy. That used to be a huge number. But it’s only a hundred billion. It’s less than the national deficit! We used to call them astronomical numbers. Now we should call them economical numbers.

The important point here is not that high finance is catching up with the sciences; it’s that the numbers we encounter everywhere in daily life are growing steadily larger. Computer technology is another area of rapid numeric inflation. Data storage capacity has gone from kilobytes to megabytes to gigabytes, and the latest disk drives hold a terabyte (10¹² bytes). In the world of supercomputers, the current state of the art is called petascale computing (10¹⁵ operations per second), and there is talk of a coming transition to exascale (10¹⁸). After that, we can await the arrival of zettascale (10²¹) and yottascale (10²⁴) machines—and then we run out of prefixes!
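
For a concrete feel of that prefix ladder, here is a minimal Python sketch; the function name and the decimal (base-1000) convention are illustrative choices of mine, not anything prescribed elsewhere:

    # Walk a quantity up the SI prefix ladder, kilo (10^3) through yotta (10^24).
    # A minimal sketch; the name and base-1000 convention are illustrative choices.
    PREFIXES = ["", "kilo", "mega", "giga", "tera", "peta", "exa", "zetta", "yotta"]

    def si_prefix(value):
        """Rescale value to the largest prefix that keeps it at or above 1."""
        index = 0
        while value >= 1000 and index < len(PREFIXES) - 1:
            value /= 1000
            index += 1
        return value, PREFIXES[index]

    print(si_prefix(1e12))    # (1.0, 'tera')   -- a one-terabyte disk drive
    print(si_prefix(2**80))   # (~1.2, 'yotta') -- after this, the prefixes run out

Past yotta the function simply stops rescaling, which is the point: the named prefixes end there.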

Even these numbers are puny compared with the prodigious creations of pure mathematics. In the 18th century the largest known prime number had 10 digits; the present record-holder runs to almost 13 million digits. The value of pi has been calculated to a trillion digits—a feat at once magnificent and mind-numbing. Elsewhere in mathematics there are numbers so big that even trying to describe their size requires numbers that are too big to describe. Of course none of these numbers are likely to turn up in everyday chores such as balancing a checkbook. On the other hand, logging into a bank’s web site involves doing arithmetic with numbers in the vicinity of 2¹²⁸, or 10³⁸. (The calculations take place behind the scenes, in the cryptographic protocols meant to ensure privacy and security.)
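
To get a rough sense of arithmetic at that scale, here is a hedged Python sketch of modular exponentiation, the kind of operation such cryptographic protocols lean on; the particular modulus is arbitrary, and this is an illustration, not any bank's actual protocol:

    import secrets

    # Arithmetic in the vicinity of 2^128 (about 10^38). An illustrative sketch of
    # modular exponentiation, a staple of cryptographic protocols -- not the actual
    # handshake any particular bank performs.
    modulus  = 2**128 - 159                 # an arbitrary 128-bit modulus
    base     = secrets.randbelow(modulus)
    exponent = secrets.randbelow(modulus)

    # Python's integers have no fixed width, so these operands cannot overflow;
    # three-argument pow() keeps the computation fast by reducing as it goes.
    result = pow(base, exponent, modulus)
    print(result.bit_length())              # roughly 128 bits

The point is not the cryptography but the size of the operands: every intermediate value fits comfortably in an arbitrary-precision integer, which is exactly what a fixed-width hardware format cannot promise.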

Which brings me to the main theme of this column: Those streams of digits that make us so dizzy also present challenges for the design of computer hardware and software. Like the National Debt Clock, computers often set rigid limits on the size of numbers. When routine calculations begin to bump up against those limits, it’s time for a rethinking of numeric formats and algorithms. Such a transition may be upon us soon, with the approval last year of a revised standard for one common type of computer arithmetic, called floating point. Before the new standard becomes too deeply entrenched, perhaps it’s worth pausing to examine a few alternative schemes for computing with astronomical and economical and mathematical numbers.
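
To see those rigid limits firsthand, one can poke at ordinary double-precision floating point from Python; this is a sketch of the general phenomenon, not a summary of the revised standard:

    import sys

    # Double-precision floating point, like the Debt Clock, has only so many digits.
    # A sketch of the general limits, not of the revised floating-point standard.

    # Above 2^53 the gap between adjacent representable values exceeds 1,
    # so distinct integers start to collapse onto the same double.
    n = 2**53
    print(float(n) == float(n + 1))     # True -- two different integers, one double

    # And there is a hard ceiling: the largest finite double is about 1.8 x 10^308.
    print(sys.float_info.max)           # 1.7976931348623157e+308
    print(sys.float_info.max * 10)      # inf -- off the end of the number line

The parallel with the Debt Clock is direct: a fixed number of digits, and a hard stop when they fill up.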
