
COMPUTING SCIENCE

The Manifest Destiny of Artificial Intelligence

Will AI create mindlike machines, or will it show how much a mindless machine can do?

Brian Hayes

Q and A

My final example comes from an area of AI where algorithmic ingenuity and high-performance computing have yet to triumph fully. The task is to answer questions formulated in ordinary language.

In a sense, we already have an extraordinary question-answering technology: Web search engines such as Google and Bing put the world at our fingertips. For the most part, however, search engines don’t actually answer questions; they provide pointers to documents that might or might not supply an answer. To put it another way, search engines are equipped to answer only one type of question: “Which documents on the Web mention X?,” where X is the set of keywords you type into the search box. The questions people really want to ask are much more varied.
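The one question a search engine does answer can be sketched with an inverted index. This is a toy illustration, not any real engine's implementation; the three document texts are invented for the example.

```python
from collections import defaultdict

# A handful of made-up "documents" standing in for the Web.
documents = {
    1: "the red sox lost to the orioles",
    2: "the orioles won the pennant",
    3: "the red sox traded a pitcher",
}

# Build an inverted index: each word maps to the set of documents containing it.
index = defaultdict(set)
for doc_id, text in documents.items():
    for word in text.split():
        index[word].add(doc_id)

def search(*keywords):
    """Answer the only question the index supports:
    which documents mention every keyword in X?"""
    sets = [index.get(w, set()) for w in keywords]
    return set.intersection(*sets) if sets else set()

print(search("red", "sox"))       # documents 1 and 3
print(search("orioles", "lost"))  # document 1 only
```

Note that the result is a set of pointers to documents, not an answer; the reader still has to open document 1 to learn who won.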

One early experiment in question answering was a program called Baseball, written at MIT circa 1960 by Bert F. Green Jr. and three colleagues. The program was able to understand and answer questions such as “Who did the Red Sox lose to on July 5, 1960?” This was an impressive feat at the time, but the domain of discourse was very small (a single season of professional baseball games) and the form of the queries was also highly constrained. You couldn’t ask, for example, “Which team won the most games?”

For a glimpse of current research on question answering we can turn to the splendidly named KnowItAll project of Oren Etzioni and his colleagues at the University of Washington. Several programs written by the Etzioni group address questions of the “Who did what to whom?” variety, extracting answers from a large collection of texts (including a snapshot of the Web supplied by Google). There’s a demo at openie.cs.washington.edu. Instead of matching simple keywords, the KnowItAll programs employ a template of the form X ~ Y, where X and Y are generally noun phrases of some kind and “~” is a relation between them, as in “John loves Mary.” If you leave one element of the template blank, the system attempts to fill in all appropriate values from the database of texts. For example, the query “___ defeated the Red Sox” elicits a list of 59 entries. (But “___ defeated the Red Sox on July 5, 1960” comes up empty.)
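The X ~ Y template can be mimicked in miniature over a table of extracted triples. The triples below are invented for illustration, and the real KnowItAll programs extract and rank such tuples from Web-scale text with far more machinery; this sketch only shows the fill-in-the-blank query style.

```python
# Toy store of (X, relation, Y) triples, as if extracted from text.
triples = [
    ("Orioles", "defeated", "Red Sox"),
    ("Yankees", "defeated", "Red Sox"),
    ("Red Sox", "defeated", "Tigers"),
]

def fill(x, rel, y):
    """Match a template X ~ Y; None marks the blank slot to be filled."""
    return [t for t in triples
            if (x is None or t[0] == x)
            and t[1] == rel
            and (y is None or t[2] == y)]

# "___ defeated the Red Sox" -- leave the X slot blank:
for subject, _, _ in fill(None, "defeated", "Red Sox"):
    print(subject)  # Orioles, Yankees
```

As in the real demo, a query the triples can't support (say, one qualified by a date the extractor never captured) simply comes up empty.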

KnowItAll is still a research project, but a few other question-answering systems have been released into the wild. True Knowledge parses natural-language queries and tries to find answers in a hand-crafted semantic database. Ask.com combines question answering with conventional keyword Web searching. Apple offers the Siri service on the latest iPhone. Wolfram Alpha specializes in quantitative and mathematical subjects. I have tried all of these services except Siri; on the whole, unfortunately, the experience has been more frustrating than satisfying.

A bright spot on the question-answering horizon is Watson, the system created by David Ferrucci and a team from IBM and Carnegie Mellon to compete on Jeopardy. The winning performance was dazzling. On the other hand, even after reading Ferrucci’s explanation of Watson’s inner architecture, I don’t really understand how it works. In particular I don’t know how much of its success came from semantic analysis and how much from shallower keyword matching or statistical techniques. When Watson responded correctly to the clue “Even a broken one of these on your wall is right twice a day,” was it reasoning about the properties of 12-hour clocks in a 24-hour world? Or did it stumble upon the phrase “right twice a day” in a list of riddles that amuse eight-year-olds?

Writing in Nature last year, Etzioni remarked, “The main obstacle to the paradigm shift from information retrieval to question answering seems to be a curious lack of ambition and imagination.” I disagree. I think the main obstacle is that keyword search, though roundabout and imprecise, has proved to be a remarkably effective way to discover stuff. In the case of my baseball question, Google led me straight to the answer: On July 5, 1960, the Red Sox lost to the Orioles, 9 to 4. Once again, shallow methods that look only at the superficial structure of a problem seem to be outperforming deeper analysis.




