
March-April 2001

Volume 89, Number 2

The Universal Computer: The Road from Leibniz to Turing. Martin Davis. xii + 257 pp. W. W. Norton, 2000. $26.95.


How do you trace the intellectual lineage of something as complex and ubiquitous as the computer? Do you start from von Neumann or Babbage, Archimedes or al-Khwarizmi—or perhaps from the foggy origins of the abacus and the digit zero? In this beautifully written book, Martin Davis (an emeritus professor at New York University and now a visiting scholar at the University of California, Berkeley) argues that computers are in many ways the culmination of the glorious and powerful mathematical tradition we now call logic.


The story begins in 17th-century Europe, where one of the greatest philosophers and mathematicians of the time, Gottfried Wilhelm Leibniz—the man who brought us optimism and the calculus—dreamed of reducing human reasoning to a purely mechanical and symbolic task, thus completing Aristotle's project of codifying syllogisms. Leibniz was not able to go very far with his project, consumed as he was by the trivial tasks his royal employers demanded of him. (Here Davis does not miss the opportunity to draw a parallel with the myopic priorities of today's funding agencies.)

A century and a half later, George Boole, an English mathematician teaching in Ireland, made progress on Leibniz's vision by defining what is today known as Boolean logic (the mathematical laws of the truth and falsity of propositions) and, significantly, Boolean algebra, a system of mechanical rules for carrying out deductions in this realm.

The second half of the 19th century was a time of accelerating progress and accomplishment in mathematics. One great mathematician, however, Gottlob Frege of Jena, was interested in the foundations of mathematics; he wanted to understand the detailed nature of theorems and their proofs. For this, he needed to go beyond Boole, because Boole's system, although adequate for discovering deductions between individual propositions (as in "if A implies B, and A is true, then B must be true"), is impotent in the face of more advanced statements, such as general declarations of the form "for all x, P(x) is true." And such statements are the bread and butter of mathematics. Consider, for example, the following simple theorem about whole numbers (which states, essentially, that every integer is either even or odd):

For all x, there exists a y such that
x = y + y or x = y + y + 1

In such statements, elementary Aristotelian propositions such as "Socrates is mortal" or "x = y + y" mingle with powerful logical operators such as "there exists" and "for all," called quantifiers. This is exactly the kind of statement studied by modern logic.
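For concreteness, the same theorem can be written in the notation of modern first-order logic (my rendering; Davis spells it out in words, as above):

```latex
\forall x \; \exists y \; \bigl(\, x = y + y \;\lor\; x = y + y + 1 \,\bigr)
```

The universal quantifier ("for all") and the existential quantifier ("there exists") are exactly the operators that Boole's propositional system lacked and that Frege's logic supplied.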

Frege set out to build the foundations of modern mathematics, very much the way Euclid had developed his geometry. But such intense introspection was bound to reveal previously unknown wrinkles and difficulties in mathematics—exactly as speech suddenly becomes problematic when you must visualize the precise position of your tongue. One particularly annoying kind of paradox that plagues mathematical systems, once they become powerful enough, is the self-referential diagonal argument used by Georg Cantor in the late 19th century to demonstrate the existence of infinities that are higher than the infinity of the integers (the same trick that Bertrand Russell later popularized as the puzzle "Who shaves the barber?"). In a pivotal passage, Davis describes Russell informing Frege that just such a paradox lurked in his system; the founder of modern logic reacted to the devastating news with noble and scholarly grace.

At about that time, David Hilbert—a man of penetrating mathematical genius, explosive energy and unrelenting faith in our ability to conquer knowledge—had a vision even more advanced and optimistic than Leibniz's: to perfect Frege's system and embed it in a computational device that would then mechanically prove (or disprove) any mathematical statement submitted to it. This dream fueled much of early 20th-century mathematics but was slain after three decades by an ingenious argument on the part of Kurt Gödel, the young Austrian logician. Gödel, inspired by Cantor's diagonal argument, proved that any mathematical system such as Frege's, encompassing the whole numbers and the basic arithmetic operations, must be incomplete: provided it is consistent, it must contain statements that are true and yet cannot be proved within the system. And a few years later, the British mathematician and philosopher Alan M. Turing (widely considered the founder of computer science) dealt the final blow to Hilbert's project by establishing that there can be no computational device that decides, for an arbitrary mathematical statement, whether or not a proof of it exists.
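Turing's argument is easiest to appreciate in modern programming terms. The sketch below is my own illustration, not Davis's, and the function names are hypothetical: it replays Cantor's diagonal trick against a supposed program that could always decide whether a given program halts.

```python
# A sketch of Turing's diagonal argument in modern dress (illustrative only).
# Suppose, for contradiction, that this function existed and always returned
# the correct answer: does the program given by `source` halt on `input_data`?
def decides_halting(source: str, input_data: str) -> bool:
    ...  # no total, always-correct implementation can exist


# Cantor-style diagonalization: feed a program its own text and do the opposite.
def contrarian(source: str) -> None:
    if decides_halting(source, source):
        while True:   # predicted to halt, so loop forever
            pass
    # predicted to loop forever, so halt immediately


# Running contrarian on its own source code yields a contradiction: it halts
# exactly when decides_halting says it does not. Hence no such decider exists.
# Since deciding provability would in particular let one decide halting,
# Hilbert's hoped-for device falls with it.
```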


This is an important turn in Davis's story. In order to make his argument, Turing conceived an idealized computational device (now called the universal Turing machine) capable of carrying out any calculation—in very much the same way that a modern computer can run any program given to it. Turing went on to build real computers, as a leader of the team of British code-breakers who cracked the German navy's secret cipher during World War II and later as a consultant for the nascent British computer industry. And, during several visits to the United States, Turing shared his ideas with John von Neumann, another mathematical genius who was at the time also obsessed with the dream of computing machines. Finally, when the time came for the dream to become a reality, von Neumann, leading a team of designers known as "the logicians" (their rivals were "the engineers"), won in 1945 the argument for the design of the EDVAC (which followed ENIAC), a computer whose basic structure was eerily close to that of today's personal computer. And this, Davis concludes, is why computers are the incarnation and culmination of three centuries of logic.
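To make the idea of universality concrete, here is a minimal sketch of a Turing-machine simulator (my illustration, not the book's; the names and the sample machine are invented for the example). The essential point is that the machine's "program" is just data handed to the simulator, which is the sense in which a single machine can carry out any calculation:

```python
# A minimal Turing-machine simulator (illustrative sketch). The "program" is
# a transition table mapping (state, symbol) to (new symbol, move, new state).
def run(transitions, tape, state="start", halt_state="halt", blank="_"):
    tape = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    while state != halt_state:
        symbol = tape.get(head, blank)
        new_symbol, move, state = transitions[(state, symbol)]
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))


# Example program: scan right, flipping 0s and 1s, halting at the first blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run(flip, "01101"))   # prints 10010_
```

Handing the simulator a description of any other machine, together with that machine's input, is all it takes; this is the stored-program idea that von Neumann's EDVAC design would later make physical.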

This last leg of the book's argument is the only weak one. Turing and von Neumann were not logicians. Turing was originally interested in the capabilities of the human mind, whereas von Neumann had proclaimed himself uninterested in logic—an overreaction to the demise of Hilbert's project. His proposed design of the EDVAC was called "logical," probably because it was clean and principled, already containing the concepts of stored program and the fetch-execute cycle, which must have seemed wasteful to the engineers (this tension still exists today in computer and software design). The ways logic influenced the development of computers seem to me a little more complex and diffuse than Davis suggests. There is an undeniable affinity between a computer's gates (and its binary representation of numbers) and Boole's logic of true and false, 0 and 1 (a rather prosaic connection amply reflected in terminology: ALU is the arithmetic logic unit in every computer's central processor, and logic design is the area of computer science striving to build better circuits). Mathematical logic, the investigation founded by Frege, has influenced computers rather indirectly: Hilbert's project brought computation to the forefront of early 20th-century intellectual discourse, whereas the Turing machine—and the concept of the universal computer—was conceived for the purpose of refuting that project.
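The prosaic connection is easy to exhibit. The fragment below (again my illustration, not Davis's) builds binary addition, the staple of any ALU, out of nothing but Boole's operations on true and false:

```python
# Binary addition built from Boolean operations alone (illustrative sketch):
# the affinity between Boole's true/false and a computer's 0/1 made concrete.

def full_adder(a: bool, b: bool, carry_in: bool):
    """One bit of an adder circuit: XOR, AND and OR are all that is needed."""
    total = a ^ b ^ carry_in                      # sum bit
    carry_out = (a & b) | (carry_in & (a ^ b))    # carry bit
    return total, carry_out


def add_bits(x, y):
    """Add two equal-length bit vectors, least significant bit first."""
    carry, result = False, []
    for a, b in zip(x, y):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    return result + [carry]


# 6 + 3 = 9: bits are listed least significant first.
bits = add_bits([False, True, True], [True, True, False])
print(sum(2**i for i, b in enumerate(bits) if b))   # prints 9
```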

Davis's writing is elegant and fluid, and the mathematical development interspersed in the story is gentle and accessible. Several delightful streams run through this book. One is an undercurrent of European history: Leibniz's Hanoverian masters married into the English royal family, which appointed Boole to his professorship, and the debate over the foundations of mathematics was disturbed by several wars—and by periods of peace that were often worse. Hilbert's optimistic dream was an uncertain light during the Weimar Republic, and its demise coincided with the rise of the Third Reich, which many of the protagonists fled.

And then there is the human story, the unbelievable and total sadness of it: Leibniz never had a chance to advance his dream, and he was consumed by his famous row with Isaac Newton over their simultaneous invention of the calculus (a quarrel that would have been silly had it not absorbed so tragically two great minds). Boole lived in poverty and died relatively young of pneumonia, having walked to his lectures through rain. Cantor died in deep depression, his ingenious work ignored. Although Frege's reputation survived well the defect Russell found in his system, it was marred forever by the vehement racism and anti-Semitism articulated in his late writings. Hilbert lived until 1943, in a strange (and defiant) denial of the evil that was destroying his country, whereas Gödel fled to Princeton, where, possessed by an advancing paranoia, he starved himself to death in 1978. Still, the most tragic end is Turing's: His apparent suicide in 1954 was presumably the result of his trial for homosexuality, his subsequent forced treatment with hormones and his continued persecution by the country that he had served so brilliantly and crucially.

A final fascinating thread through this story is the author's own involvement in it: He lived in Einstein and Gödel's Princeton of the early 1950s, programmed some of the first computers built with von Neumann's design, and (something that, in modesty, he does not mention) contributed much to the solution of the 10th (and perhaps most famous) of the 23 major mathematical challenges that Hilbert posed at the dawn of the 20th century.

The book mentions only in passing the great influence logic has had on half a century of research in computer science: Techniques for the automatic verification of programs and circuits, systems for common-sense reasoning and database query languages (to note only three well-known examples) all are based to a large extent on logic. But things may be changing. Von Neumann would not have approved of the chaotic, unplanned, almost illogical architecture of the Internet, the great artifact (if you can call it that) of our time; and probabilistic reasoning (interestingly, the subject of the second half of Boole's Laws of Thought) seems to be replacing logic as the mathematical tool of choice in computer science. But whatever the future may hold, the influence of logic on the development of computers and computation is undeniable and deep—and it could not have found a more eloquent and worthy chronicler.
