
COMPUTING SCIENCE

The Great Principles of Computing

Computing may be the fourth great domain of science along with the physical, life and social sciences

Peter J. Denning

Computing’s Paradigm

Traditional scientists frequently questioned the name computer science. They could easily see an engineering paradigm (design and implementation of systems) and a mathematics paradigm (proofs of theorems), but they could not see much of a science paradigm (experimental verification of hypotheses). Moreover, they understood science as a way of dealing with the natural world, and computers looked suspiciously artificial.

The founders of the field came from all three paradigms. Some thought computing was a branch of applied mathematics, some a branch of electrical engineering, and some a branch of computation-oriented science. During its first four decades, the field focused primarily on engineering: The challenges of building reliable computers, networks and complex software were daunting and occupied almost everyone’s attention. By the 1980s these challenges largely had been met and computing was spreading rapidly into all fields, with the help of networks, supercomputers and personal computers. During the 1980s computers became powerful enough that science visionaries could see how to use them to tackle the hardest questions—the “grand challenge” problems in science and engineering. The resulting “computational science” movement involved scientists from many countries and culminated in the U.S. Congress’s adoption of the High-Performance Computing and Communications (HPCC) Act of 1991 to support research on a host of large problems.

Today, there is broad agreement that computing exemplifies both science and engineering, and that neither science nor engineering alone characterizes computing. Then what does? What is computing’s paradigm?

The leaders of the field struggled with this paradigm question from the beginning. Along the way, there were three waves of attempts to unify views. Allen Newell, Alan Perlis and Herb Simon led the first one in 1967. They argued that computing was unique among all the sciences in its study of information processes. Simon, a Nobel laureate in economics, went so far as to call computing a science of the artificial. A catchphrase of this wave was “computing is the study of phenomena surrounding computers.”

The second wave focused on programming, the art of designing algorithms that produce information processes. In the early 1970s, computing pioneers Edsger Dijkstra and Donald Knuth took strong stands favoring algorithm analysis as the unifying theme. A catchphrase of this wave was “computer science equals programming.” In recent times, this view has foundered because the field has expanded well beyond programming, whereas the public understanding of a programmer has narrowed to just those who write code.

The third wave came as a result of the Computer Science and Engineering Research Study (COSERS), led by Bruce Arden in the late 1970s. Its catchphrase was “computing is the automation of information processes.” Although its final report successfully exposed the science in computing and explained many esoteric aspects to the layperson, its central view did not catch on.

An important aspect of all three definitions was the positioning of the computer as the object of attention. The computational-science movement of the 1980s began to step away from that notion, adopting the view that computing is not only a tool for science, but also a new method of thought and discovery in science. The process of dissociating from the computer as the focal point came to completion in the late 1990s when leaders in the field of biology—epitomized by Nobel laureate David Baltimore and echoing cognitive scientist Douglas Hofstadter—said that biology had become an information science and that DNA translation is a natural information process. Many computer scientists have joined biologists in research to understand the nature of DNA information processes and to discover what algorithms might govern them.

Take a moment to savor this distinction that biology makes. First, some information processes are natural. Second, we do not know whether all natural information processes are produced by algorithms. The second statement challenges the traditional view that algorithms (and programming) are at the heart of computing. Information processes may be more fundamental than algorithms.

Scientists in other fields have come to similar conclusions. They include physicists working with quantum computation and quantum cryptography, chemists working with materials, economists working with economic systems, and social scientists working with networks. They have all said that they have discovered information processes in their disciplines’ deep structures. Stephen Wolfram, a physicist and creator of the software program Mathematica, went further, arguing that information processes underlie every natural process in the universe.

All this leads us to the modern catchphrase: “Computing is the study of information processes, natural and artificial.” The computer is a tool in these studies but is not the object of study. As Dijkstra once said, “Computing is no more about computers than astronomy is about telescopes.”

The term computational thinking has become popular to refer to the mode of thought that accompanies design and discovery done with computation. This mode of thought was originally called algorithmic thinking by Newell, Perlis and Simon in the 1960s, and it was widely invoked in the 1980s as part of the rationale for computational science. To think computationally is to interpret a problem as an information process and then seek to discover an algorithmic solution. It is a very powerful paradigm that has led to several Nobel Prizes.
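
To make the two-step move concrete, here is a minimal sketch in Python, using DNA—the natural information process cited above—as the subject. The sequence, motif and function name are invented for illustration; the point is only the recasting of a biological question ("where does this motif occur?") as a string-search algorithm.

# Computational thinking in miniature: interpret "where does a motif occur
# in a DNA sequence?" as an information process over strings, then answer
# it with a simple scanning algorithm.
# The sequence and motif below are hypothetical examples, not real data.
def find_motif(sequence, motif):
    """Return every starting index at which motif occurs in sequence."""
    positions = []
    for i in range(len(sequence) - len(motif) + 1):
        if sequence[i:i + len(motif)] == motif:
            positions.append(i)
    return positions

if __name__ == "__main__":
    dna = "ATGCGATATGCTTATG"          # illustrative DNA fragment
    print(find_motif(dna, "ATG"))     # prints [0, 7, 13]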




