
COMPUTING SCIENCE

The Manifest Destiny of Artificial Intelligence

Will AI create mindlike machines, or will it show how much a mindless machine can do?

Brian Hayes

Gaming the System

The game of checkers was the subject of one of the earliest success stories in AI. Arthur L. Samuel of IBM started work on a checkers-playing program in the early 1950s and returned to the project several times over the next 20 years. The program was noteworthy not only for playing reasonably well—quite early on, it began beating its creator—but also for learning the game in much the same way that people do. It played against various opponents (including itself!) and drew lessons from its own wins and losses.

Samuel explained the program’s operation in terms of goals and subgoals. The overall goal was to reach a winning position, where the opponent has no legal move. The program identified subgoals that would mark progress toward the goal. Experienced players pointed out that the program’s main weakness was the lack of any sustained strategy or “deep objective.”
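Samuel's program scored positions with a weighted combination of board features and revised those weights in light of experience; his own learning procedures ("rote learning" and "learning by generalization") were considerably more elaborate than anything shown here. The following Python fragment is only a schematic sketch of that general idea: the position format, the feature names and the update rule are invented for illustration and are not Samuel's.

def features(position):
    """Extract a few numeric features from a position (hypothetical format)."""
    return {
        "piece_advantage": position["my_men"] - position["their_men"],
        "king_advantage": position["my_kings"] - position["their_kings"],
        "mobility": position["my_moves"] - position["their_moves"],
    }

def evaluate(position, weights):
    """Score a position as a weighted sum of its features."""
    f = features(position)
    return sum(weights[name] * f[name] for name in weights)

def learn_from_game(weights, my_positions, i_won, rate=0.01):
    """Crude learning rule: after a win, strengthen the weights of features
    that described the positions passed through; after a loss, weaken them."""
    sign = 1 if i_won else -1
    for position in my_positions:
        f = features(position)
        for name in weights:
            weights[name] += sign * rate * f[name]
    return weights

Repeated over many games, including games against itself, an update rule of roughly this shape shifts weight toward whichever features actually correlate with winning, which is the sense in which the program drew lessons from its own wins and losses.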

The subsequent history of computer checkers is dominated by the work of Jonathan Schaeffer and his colleagues at the University of Alberta. In 1989 they began work on a program called Chinook, which quickly became a player of world-champion caliber. It played twice against Marion Tinsley, who was the preeminent human checkers player of the era. (Tinsley won every tournament he entered from 1950 to 1994.) In a 1992 match, Tinsley defeated Chinook 4–2 with 33 draws. A rematch two years later ended prematurely when Tinsley withdrew because of illness. The six games played up to that point had all been draws, and Chinook became champion by forfeit. Tinsley died a few months later, so there was never a final showdown over the board.

Chinook’s approach to the game was quite unlike that of Samuel’s earlier program. There was no hierarchy of goals and subgoals, and no attempt to imitate the strategic thinking of human players. Chinook’s strength lay entirely in capacious memory and rapid computation. At the time of the second Tinsley match, the program was searching sequences of moves to a minimum depth of 19 plies. (A ply is a move by one side or the other.) Chinook had a library of 60,000 opening positions and an endgame database with precomputed outcomes for every position with eight or fewer pieces on the board. There are 443,748,401,247 such positions.
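Chinook's actual machinery is documented in Schaeffer's publications; the sketch below is only a generic illustration of the brute-force recipe the paragraph describes: a fixed-depth alpha-beta search that returns an exact database value whenever few enough pieces remain. The helper functions passed in (legal_moves, apply, evaluate) and the endgame_db mapping are assumptions for the sake of the example, not Chinook's interfaces.

def search(position, depth, alpha, beta,
           legal_moves, apply, evaluate, endgame_db):
    """Return the value of `position` for the side to move (negamax form)."""
    # If few enough pieces remain, the exact outcome is already in the
    # precomputed endgame database and no search is needed.
    if position.piece_count <= 8 and position in endgame_db:
        return endgame_db[position]

    moves = legal_moves(position)
    if not moves:                  # side to move has no legal move: it has lost
        return -float("inf")
    if depth == 0:                 # search horizon reached: heuristic estimate
        return evaluate(position)

    best = -float("inf")
    for move in moves:
        child = apply(position, move)
        # Negamax: the child's value from the opponent's viewpoint is negated.
        value = -search(child, depth - 1, -beta, -alpha,
                        legal_moves, apply, evaluate, endgame_db)
        best = max(best, value)
        alpha = max(alpha, value)
        if alpha >= beta:          # prune: the opponent will avoid this line
            break
    return best

Even with pruning, a 19-ply search visits an enormous number of positions, which is why the opening library and the endgame database matter: at both ends of the game they replace search with table lookup.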

More recently, Schaeffer and his colleagues have gone on from creating strong checkers players to solving the game altogether. After a series of computations that ended in 2007, they declared that checkers is “weakly solved.” The weak solution identifies a provably optimal line of play from the starting position to the end—which turns out to be a draw. Neither player can improve his or her (or its) outcome by departing from this canonical sequence of moves. (A “strong” solution would give the correct line of play from any legally reachable board position.) By the time this proof was completed, the endgame database encompassed all positions with 10 or fewer pieces (almost 40 trillion of them).
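To make "weakly solved" concrete, here is a toy illustration, unrelated to the Alberta proof: exhaustively searching a trivially small game (single-heap Nim, where a player removes one or two counters and whoever takes the last counter wins) yields the exact value of the starting position together with one optimal first move, which is all a weak solution promises.

from functools import lru_cache

@lru_cache(maxsize=None)
def value(counters):
    """Return +1 if the player to move can force a win, -1 otherwise."""
    if counters == 0:
        return -1          # the previous player took the last counter and won
    # The player to move wins if some move leaves the opponent in a lost position.
    return max(-value(counters - take) for take in (1, 2) if take <= counters)

def weak_solution(counters):
    """Report the game's value from the start and one optimal first move."""
    best_move = max((take for take in (1, 2) if take <= counters),
                    key=lambda take: -value(counters - take))
    return value(counters), best_move

print(weak_solution(10))   # heap sizes divisible by 3 are losses for the mover

The Alberta result has the same logical shape, a certified value for the opening position plus a line of play that achieves it, but over a game with roughly 5 × 10^20 reachable positions it could be completed only by combining forward search with the precomputed endgame databases.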

Schaeffer notes that his checkers-playing program doesn’t need to know much about checkers:

Perhaps the biggest contribution of applying AI technology to developing game-playing programs was the realization that a search-intensive (“brute-force”) approach could produce high-quality performance using minimal application-dependent knowledge.

There is room here for a devil’s advocate to offer a counterargument. Winning isn’t everything, and playing the game without understanding it is not the most obvious route to wisdom. When Samuel began his work on checkers, his aim was not just to create an invincible opponent but to learn something about how people play games—indeed, to learn something about learning itself. Progress toward these broader goals could have influence beyond the world of board games.

But this position is difficult to defend. It turns out that brute-force methods like those of Chinook have been highly productive in a variety of other areas. They are not just tricks for winning games; Schaeffer cites bioinformatics and optimization among other application areas. The anthropocentric scheme, taking human thought patterns as the model for computer programs, has been less fruitful so far.







