
COMPUTING SCIENCE

What's in Brian's Brain?

Despite the progress of neuroscience, I still don’t know my own mind

Brian Hayes

The Brainiac

As modern neuroscience has grown up alongside molecular biology, it has also been strongly influenced by another blossoming field: computer science. Brains and computers are called on to perform many of the same tasks: doing arithmetic, solving puzzles, planning strategies, analyzing patterns, filing away facts for future reference. At a more theoretical level, biological and electronic machines surely have the same computational power. And yet attempts to explain brains in terms of computers, or vice versa, have not been wildly successful.

The neural circuits sketched by McCulloch and Pitts look much like the diagrams of logic gates (AND, OR, NOT, etc.) that appear in designs of digital computers. The resemblance is misleading. McCulloch and Pitts showed that their networks of neurons can compute the same set of logical propositions as certain abstract automata. However, neurons in the brain are not wired together to form such simple logic circuits.
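
To see what such a threshold unit looks like in code, here is a minimal sketch in Python; the weights and thresholds are my own illustrative choices, not values from McCulloch and Pitts. With two unit-weight inputs, a threshold of 2 acts as an AND gate and a threshold of 1 as an OR gate.

    def mp_neuron(inputs, weights, threshold):
        """Fire (return 1) iff the weighted sum of binary inputs
        reaches the threshold; otherwise stay silent (return 0)."""
        total = sum(w * x for w, x in zip(weights, inputs))
        return 1 if total >= threshold else 0

    # Threshold 2 on two unit-weight inputs fires only when both
    # are active (AND); threshold 1 fires when either is active (OR).
    for a in (0, 1):
        for b in (0, 1):
            print(a, b,
                  mp_neuron([a, b], [1, 1], threshold=2),   # AND
                  mp_neuron([a, b], [1, 1], threshold=1))   # OR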

A typical electronic logic gate computes a function of two or three inputs. For example, a three-input AND gate has an output of true if and only if all three inputs are true. There are eight combinations of three true-or-false signals; since a function assigns an output to each of those eight combinations, there are 2^8, or 256, possible functions of the inputs. A typical neuron has thousands of synaptic inputs. For 1,000 signals there are 2^1,000 combinations—an immense quantity, far exceeding the number of cells in the human brain. The number of functions that a 1,000-input neuron might be computing is larger still: 2 raised to the power 2^1,000. For such a neuron there’s no hope of constructing a complete truth table, showing the cell’s response to all possible combinations of inputs; the appropriate tool for describing the action of the neuron is not logic but statistics.
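
The counting is easy to verify by brute force in the three-input case, and only there. The short Python sketch below enumerates the 2^3 = 8 rows of a three-input truth table and then shows why the 1,000-input case is hopeless.

    from itertools import product

    # The 2^3 = 8 input combinations of a three-input AND gate.
    for a, b, c in product([False, True], repeat=3):
        print(a, b, c, '->', a and b and c)

    # Each of the 8 rows may independently output True or False,
    # so there are 2^8 = 256 possible three-input Boolean functions.
    print(2 ** 8)               # 256

    # A 1,000-input truth table would have 2^1,000 rows. Python's big
    # integers can state that number (it runs to 302 decimal digits),
    # but no such table could ever be built; the count of possible
    # functions, 2 to the power 2^1,000, is too large even to write down.
    print(len(str(2 ** 1000)))  # 302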

The spiky nature of neural signaling is a further complication. Standard electronic logic gates work on persistent signals: Apply a voltage to each of the inputs, wait for the system to settle down, then read the state of the output. Neural signals are brief impulses rather than steady voltages. Thus the output of a neuron depends not just on which input signals are present but also on their precise timing.
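
As a toy illustration of this timing sensitivity (not a biophysical model), imagine a cell that fires only when spikes on its two inputs arrive within a couple of milliseconds of each other; the coincidence window here is a hypothetical parameter of my own choosing.

    def coincident_spikes(times_a, times_b, window_ms=2.0):
        """Return the pairs of input spikes (one from each train,
        times in milliseconds) that arrive within window_ms of each
        other -- the events that would drive this hypothetical
        coincidence-detecting cell to fire."""
        return [(ta, tb)
                for ta in times_a
                for tb in times_b
                if abs(ta - tb) <= window_ms]

    # The same two input spikes, shifted in time: only the
    # synchronized arrival produces an output event.
    print(coincident_spikes([10.0], [11.0]))   # [(10.0, 11.0)] -> fires
    print(coincident_spikes([10.0], [25.0]))   # []             -> silent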

Abstract models of the brain, such as artificial neural networks, smooth away many of these complexities. If the function that a neuron computes is just a weighted sum of the inputs, only a tiny fraction of the possible combinations need to be considered. (With equal weights on all 1,000 inputs, only the number of active inputs matters, so the number of distinguishable combinations falls from 2^1,000 to 1,001.) Likewise the problem of synchronizing spikes is eased by assuming that the neuron merely measures the average rate of spiking. The extent to which biological neurons adopt these simplifying strategies remains a matter of controversy and conjecture.
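
Here is a sketch of that rate-coded, weighted-sum simplification, assuming for illustration that every input carries the same weight; the threshold value is arbitrary. Once the unit responds only to summed activity, any two patterns with the same number of active inputs are indistinguishable.

    import random

    def rate_unit(inputs, threshold=500):
        """Respond to the summed input alone, ignoring both the
        identity of the active lines and the timing of individual
        spikes. Equal weights of 1 are an illustrative assumption."""
        return 1 if sum(inputs) >= threshold else 0

    pattern = [random.randint(0, 1) for _ in range(1000)]
    # The same spikes on different input lines: same sum, same output.
    shuffled = random.sample(pattern, k=len(pattern))

    # With equal weights, only the count of active inputs matters, so
    # the 2^1,000 possible patterns collapse to 1,001 distinguishable
    # sums (0 through 1,000).
    assert rate_unit(pattern) == rate_unit(shuffled)
    print(sum(pattern), rate_unit(pattern))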



