COMPUTING SCIENCE

The Manifest Destiny of Artificial Intelligence

Will AI create mindlike machines, or will it show how much a mindless machine can do?

Brian Hayes

Neats Versus Scruffies

At the outset, research in artificial intelligence was the project of a very small community. An inaugural conference in 1956 had just 10 participants. They included Allen Newell and Herbert A. Simon of Carnegie Tech (now Carnegie Mellon University); Marvin Minsky, who had just begun his career at MIT; and John McCarthy, who left MIT to start a new laboratory at Stanford. A major share of the early work in AI was done by these four individuals and their students.

It was a small community, but big enough for schisms and factional strife. One early conflict pitted “the neats” against “the scruffies.” The neats emphasized the role of deductive logic; the scruffies embraced other modes of problem-solving, such as analogy, metaphor and reasoning from example. McCarthy was a neat, Minsky a scruffy.

An even older and deeper rift divides the “symbolic” and the “connectionist” approaches to artificial intelligence. Are the basic atoms of thought ideas, propositions and other such abstractions? Or is thought something that emerges from patterns of activity in neural networks? In other words, is the proper object of study the mind or the brain?
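To make the contrast concrete, here is a minimal sketch in Python, my own illustration rather than anything from the early literature. The symbolic camp would represent knowledge as explicit propositions manipulated by deduction rules; the connectionist camp would let an answer emerge from adjustable weights in a network, here a single perceptron learning the logical AND function.

# Illustrative sketch only: two toy caricatures of the rival approaches.

# Symbolic approach: thought as explicit propositions and if-then rules.
facts = {"Socrates is a man"}
rules = [("Socrates is a man", "Socrates is mortal")]

def deduce(facts, rules):
    """Apply if-then rules until no new propositions can be added."""
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Connectionist approach: "thought" as weighted activity in a network.
# A single artificial neuron learns AND from examples, with no rules given.
def train_neuron(samples, epochs=20, lr=0.1):
    """Perceptron learning rule on (inputs, target) pairs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

if __name__ == "__main__":
    print(deduce(facts, rules))
    and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    print(train_neuron(and_samples))

Run as a script, the first part prints the deduced propositions and the second prints weights that implement AND. The point is only that the two camps start from very different primitives: one from logic applied to symbols, the other from numerical adjustments in a network.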

If an artificial intelligence needs a brain, maybe it also needs a body, with sensors that connect it to the physical world; thus AI becomes a branch of robotics. Still another faction argues that if a machine is to think, it must have something to think about, and so the first priority is encoding knowledge in computer-digestible form.

A backdrop to all of these diverging views within the AI community is a long-running philosophical debate over whether artificial intelligence is possible at all. Some skeptics hold that human thought is inherently nonalgorithmic, and so a deterministic machine cannot reproduce everything that happens in the brain. (It’s the kind of dispute that ends not with the resolution of the issue but with the exhaustion of the participants.)

Through the 1970s, most AI projects were small, proof-of-concept studies. The scale of the enterprise began to change in the 1980s with the popularity of “expert systems,” which applied AI principles to narrow domains such as medical diagnosis or mineral prospecting. This brief flurry of entrepreneurial activity was followed by a longer period of retrenchment known as “the AI winter.”

The present era must be the AI spring, for there has been an extraordinary revival of interest. Last year Peter Norvig and Sebastian Thrun were teaching an introductory AI course at Stanford and opened it to free enrollment over the Internet. They attracted 160,000 online students (though “only” 23,000 successfully completed the course). The revival comes with a new computational toolkit and a new attitude: Intelligent machines are no longer just a dream for the future but a practical technology we can exploit here and now. I’m going to illustrate these changes with three examples of AI then and now: machines that play games (in particular checkers), machines that translate languages and machines that answer questions.



