

Delving into Deep Learning

The latest neural networks learn to see and hear, and maybe even dream.

Brian Hayes

The Learning Algorithm

The automaton we have just assembled is not a learning machine. The patterns it detects are predetermined, like the hard-wired behaviors of a primitive organism. Learning requires some form of corrective feedback.

Suppose you want to build a stripe detector not by hand-picking the motifs to accept but by letting the system learn them. You project a series of images onto the sensor, some of which should be recognized as vertical-stripe patterns and some not. If the network gives the correct answer for an image, do nothing. If the system makes the wrong choice, there must be at least one neuron in the input layer that responded incorrectly, either accepting a motif it should have rejected, or vice versa. Find all such errant neurons, and instruct them to reverse their classification of the current motif.
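The error-correction rule described above can be sketched in a few lines of code. Everything here is an assumed concretization, not the author's implementation: each input neuron watches one small "motif" of a binary image and keeps a set of motifs it accepts, and the single output neuron fires only when every input neuron accepts what it sees.

```python
def classify(image, accepted_sets):
    """The output neuron fires only if every input neuron
    accepts the motif it is watching (an assumed AND rule)."""
    return all(tuple(motif) in acc
               for motif, acc in zip(image, accepted_sets))

def train_step(image, label, accepted_sets):
    """One round of the corrective feedback described in the text:
    if the network answers correctly, do nothing; otherwise every
    errant neuron reverses its verdict on the current motif."""
    if classify(image, accepted_sets) == label:
        return  # correct answer: leave the network alone
    for motif, acc in zip(image, accepted_sets):
        m = tuple(motif)
        accepts = m in acc
        # A neuron is errant when its verdict disagrees with the label.
        if accepts != label:
            if accepts:
                acc.discard(m)  # it accepted a motif it should reject
            else:
                acc.add(m)      # it rejected a motif it should accept
```

Training on a striped image (label `True`) and a solid one (label `False`) for a few rounds is enough for this toy network to separate the two, illustrating how the flip-on-error rule converges on the examples it has seen.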

With this feedback mechanism, we have a machine that improves with practice—but we are still a long way from a device that can learn to recognize human faces. Given this network architecture—an input layer and a single output neuron—the repertory of recognized patterns can never extend beyond rather simple geometric figures. Moreover, the system’s all-or-nothing logic makes it brittle and inflexible. A single stray pixel can alter the machine’s verdict, and it can be taught to recognize just one set of patterns at a time. More useful would be a classifier that could look at a variety of patterns and assign them to groups.
