

Delving into Deep Learning

The latest neural networks learn to see and hear, and maybe even dream.

Brian Hayes

Going Deeper

Our little neural network can be augmented in several ways. To begin with, the simple yes-or-no choices can be extended to a continuous range of values. Signals passing from one neuron to another are multiplied by a coefficient, called a weight, with a value between –1 and +1. The receiving neuron adds up all the weighted inputs, and calculates an output based on the sum. Signals with a positive weight are excitatory; those with a negative weight are inhibitory. Heavily weighted signals (whether positive or negative) count for more than those with weights near zero. Learning becomes a matter of adjusting the weights in response to corrective feedback.
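The weighted-sum rule can be sketched in a few lines of Python. The squashing function that turns the sum into an output is an assumption here (the article doesn't name one); `tanh` is a common choice that keeps the output between −1 and +1:

```python
import math

def neuron_output(inputs, weights):
    """One artificial neuron: sum the weighted inputs, squash the result.

    Weights lie between -1 and +1: positive weights are excitatory,
    negative ones inhibitory. tanh is one common output function
    (an assumption -- the article doesn't specify one).
    """
    total = sum(x * w for x, w in zip(inputs, weights))
    return math.tanh(total)

# A heavily weighted signal counts for more than one near zero:
strong = neuron_output([1.0, 1.0], [0.9, 0.05])
weak = neuron_output([1.0, 1.0], [0.05, 0.05])
```

Learning, in this picture, is simply a matter of nudging the entries of `weights` up or down in response to feedback.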

The geometric scope of the input neurons can also be enlarged. Each neuron might collect inputs from a larger patch, or the inputs might come from regions scattered across the sensor. In the limiting case, every input neuron receives signals from every sensor element.
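A hypothetical helper makes the idea of a widening "patch" concrete. Here a neuron gathers inputs from a square region of a 2-D sensor grid; in the limiting case the patch would simply grow to cover the whole grid (the function name and grid layout are illustrative, not from the article):

```python
def patch_inputs(sensor, row, col, size):
    """Collect a size x size patch of sensor values centred on (row, col).

    A small size gives a neuron a narrow view of the sensor; in the
    fully connected limit, the patch covers every sensor element.
    """
    half = size // 2
    return [sensor[r][c]
            for r in range(row - half, row + half + 1)
            for c in range(col - half, col + half + 1)]

# A 4x4 sensor grid with distinct values in each cell:
grid = [[r * 4 + c for c in range(4)] for r in range(4)]
# A 3x3 patch centred on (1, 1) covers rows 0-2 and columns 0-2.
patch = patch_inputs(grid, 1, 1, 3)
```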

Finally, the two-layer architecture of the network can be expanded. Inserting intermediate layers of neurons—known as hidden layers because they don’t directly communicate with the outside world—lifts many restrictions on the computational capabilities of neural networks. Indeed, this is what makes the networks “deep.”

Deep networks are more versatile and potentially more powerful. They are also more complex and computationally demanding. It’s not just that there are more neurons and more connections between them. The big challenge is organizing the learning process. Suppose a certain hidden-layer neuron has sent the wrong signal to the output layer, causing the system to mix up Elvis Presley and Elmer Fudd. You might “punish” that behavior by decreasing the weight of the Elvis connection. But maybe the error should really be attributed to the input neurons that feed information to the hidden neuron. It’s not obvious how to apportion blame in this situation.
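The ambiguity can be shown numerically. In this minimal sketch (one input, one hidden neuron, one output, with illustrative weights), nudging either the hidden-to-output weight or the input-to-hidden weight shrinks the very same output error, so inspection alone cannot say which connection deserves the blame:

```python
import math

def forward(x, w_in, w_out):
    """Tiny two-layer network: input -> one hidden neuron -> output."""
    hidden = math.tanh(x * w_in)
    return math.tanh(hidden * w_out)

x, target = 1.0, 1.0
error = (target - forward(x, 0.5, 0.5)) ** 2

# Adjusting EITHER weight reduces the same squared error:
err_after_out_fix = (target - forward(x, 0.5, 0.6)) ** 2
err_after_in_fix = (target - forward(x, 0.6, 0.5)) ** 2
```

Both adjusted errors come out smaller than the original, which is exactly why apportioning blame among the layers needs a systematic procedure rather than guesswork.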
