Pencil, Paper, and Pi
A gargantuan calculation of π in the 1850s ran up against the limits of manual arithmetic; figuring out where it went wrong calls for forensic mathematics
William Shanks was one of the finest computers of the Victorian era—when the term computer denoted not a machine but a person skilled in arithmetic. His specialty was mathematical constants, and his most ambitious project was a record-setting computation of pi. Starting in 1850 and returning to the task at intervals over more than 20 years, he eventually published a value of pi that began with the familiar digits 3.14159 and went on for 707 decimal places.
Seen from a 21st-century perspective, Shanks is a poignant figure. All his patient toil has been reduced to triviality. Anyone with a laptop can compute hundreds of digits of pi in microseconds. Moreover, the laptop will give the correct digits. Shanks made a series of mistakes beginning around decimal place 530 that spoiled the rest of his work.
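The laptop claim is easy to verify. Here is a minimal sketch, assuming nothing beyond Python's built-in arbitrary-precision integers: it evaluates Machin's classical arctangent formula, pi = 16 arctan(1/5) - 4 arctan(1/239), using the same kind of series a Victorian computer would have summed by hand, with all quantities scaled to integers.

```python
def arctan_recip(x, unity):
    """arctan(1/x), scaled by the integer `unity`, via the Gregory series:
    arctan(1/x) = 1/x - 1/(3x^3) + 1/(5x^5) - ..."""
    power = unity // x          # (1/x)^1, scaled
    total = power
    n = 1
    sign = -1
    while power:
        power //= x * x         # next odd power of 1/x
        n += 2
        total += sign * (power // n)
        sign = -sign
    return total

def pi_digits(digits):
    """Return pi * 10**digits as an integer, e.g. 314159... ."""
    unity = 10 ** (digits + 10)     # ten guard digits absorb truncation error
    pi = 4 * (4 * arctan_recip(5, unity) - arctan_recip(239, unity))
    return pi // 10 ** 10           # drop the guard digits

print(pi_digits(50))    # prints pi to 50 decimal places as one integer string
```

On modern hardware this runs in a small fraction of a second for hundreds of digits; the hard part for Shanks was not the formula but carrying out the divisions by 25, 239 squared, and the odd denominators by hand, over and over, without a slip.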
I have long been curious about Shanks and his 707 digits. Who was this prodigious human computer? What led him to undertake his quixotic adventures in arithmetic? How did he deal with the logistical challenges of the pi computation: the teetering columns of figures, the grueling bouts of multiplication and division? And what went wrong in the late stages of the work?
One way to answer these questions would be to buy several reams of paper, sharpen a dozen pencils, and try to retrace Shanks’s steps. I haven’t the stamina for that—or even the life expectancy. But by adapting some pencil-driven algorithms to run on silicon computers, I have gotten a glimpse of what the process might have been like for Shanks. I think I also know where a couple of his errors crept in, but there are more that remain unexplained.