Machines, Minds and Madness
THE CYBERNETIC BRAIN: Sketches of Another Future. Andrew Pickering. x + 526 pp. University of Chicago Press, 2010. $55.
In 1948 the British neurophysiologist W. Grey Walter began work on a pair of robotic “tortoises”—three-wheeled creatures named Elmer and Elsie that crawled around on the floor seeking light, avoiding obstacles and even engaging in a kind of mating dance. Elmer and Elsie were a big hit at public demonstrations and in a BBC newsreel; Walter also wrote a book about them, as well as two Scientific American articles. And the tortoises had a lasting influence; for example, they inspired the “turtle geometry” that animated computer graphics 30 years later.
Walter’s tortoises were an experiment in cybernetics—a word that was still shiny and new then, having been coined by Norbert Wiener just a year earlier. Cybernetics drew heavily on ideas from control theory and the design of servomechanisms, but the aim was to achieve something more than just the stable, self-regulating feedback loops of thermostats or float valves. The cyberneticians wanted to build systems with a capacity for boredom and curiosity, fatigue and excitement, learning and forgetting, and maybe even desire and fear. Most of all they wanted to model the brain.
The British cybernetics community of the late 1940s and early 1950s was small but enthusiastic. They even had a club, called the Ratio Club, which met in a basement room of the National Hospital for Nervous Diseases in London. The club attracted a diverse membership: Alan Turing, mathematician and pioneer of computing; I. J. Good, mathematician and statistician; Tommy Gold, astrophysicist. But most of the members came from the neurosciences, psychology or psychiatry (which explains the choice of meeting place). Walter was one of the neuro specialists, an expert in electroencephalography. (He discovered theta and delta brain waves.) Another active member, W. Ross Ashby, was the director of a psychiatric hospital in Gloucester. Like Walter, Ashby took up a soldering iron to build cybernetic machines in his spare time.
Andrew Pickering’s new book, The Cybernetic Brain, examines the lives and works of six figures in the British cybernetics community, starting with Walter and Ashby. The other four subjects are Gregory Bateson, an anthropologist who turned to psychiatry late in his career; R. D. Laing, a radical psychiatrist (or “antipsychiatrist”); Stafford Beer, who applied cybernetic principles to business management (and also to the national economy of Chile); and finally Gordon Pask, the only one of the bunch who could claim cybernetics as a profession rather than a sideline. In this review I cannot do justice to all six of these stories, so I am going to focus mainly on Walter and Ashby, the pioneers.
The British variant of cybernetics emphasized machines that act rather than machines that think; in Pickering’s terms, the distinction is between the performative and the cognitive aspects of the brain. The aim was to model things we do without necessarily being able to articulate how we do them—walking, breathing, maintaining a steady body temperature. Walter’s tortoises offer a good illustration, with their goal-directed behavior. They were attracted to light, unless it was too bright, in which case they were repelled by it. They also modified their behavior according to their own internal state, adjusting their brightness threshold when their batteries ran low. (The charging station had a bright lamp.) What the tortoises did not have was any mental representation of the world they moved through, or any semblance of consciousness. This is hardly a surprise: The tortoise brain consisted of two vacuum tubes and a few relays.
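The tortoises' light-seeking logic can be caricatured in a few lines of code. This is a sketch, not Walter's circuit: the numerical thresholds and the three named behaviors are invented for illustration, standing in for what two vacuum tubes and a few relays accomplished in analog hardware.

```python
def tortoise_step(light_level, battery_level, threshold=0.5):
    """One decision step of a Walter-style tortoise (illustrative sketch).

    Moderate light attracts, glare repels, and a low battery raises
    the brightness threshold so that the bright lamp over the charging
    station becomes attractive. All numbers here are invented.
    """
    if battery_level < 0.2:
        threshold = 0.9              # "hungry": tolerate brighter light
    if light_level < 0.05:
        return "scan"                # no beacon found: keep searching
    elif light_level < threshold:
        return "approach"            # attracted to moderate light
    else:
        return "retreat"             # repelled when it is too bright
```

With a fresh battery, a lamp at brightness 0.8 repels (`"retreat"`); with a depleted battery the same lamp attracts (`"approach"`), which is how the tortoises found their way home.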
In spite of this simplicity, Elmer and Elsie were capable of surprising their creator. Each tortoise had a pilot light that turned on when the tortoise was scanning the environment in search of a new light source; then the light winked off as soon as the tortoise locked onto a beacon. Given a mirror, the tortoise would be attracted to its own light, but locking onto the light immediately extinguished it. In front of the mirror, the tortoise began “flickering, twittering and jigging like a clumsy Narcissus.”
Ashby’s first foray into cybernetic machinery was a device he called the homeostat. This one didn’t crawl around on the floor. It was a box with lots of knobs and switches, and a pivoting vane on top. Three electrical inputs determined (in a complicated way) a single output current, whose level was indicated by the pivoting vane. One homeostat in isolation didn’t do much; it was designed to hold the output current steady, so that the vane would remain in the middle of its range. But when Ashby wired four homeostats together—with each output becoming an input to another unit—things got interesting, if not chaotic. Each unit tried to maintain its output current near the midpoint, but the balance was upset by fluctuations in the inputs from other units; the resulting excursions in output were then fed back to the other units’ inputs, and so the disturbance continued to circulate through the system. An open question was whether the oscillations would settle down to some stable state, and how long that would take.
It’s fair to ask at this point what all these electronic black boxes and cute kitchen-floor robots could possibly have to do with the brain and psychiatry. Ashby thought that the brain might well be an adaptive, self-organizing device something like his network of homeostats. An incoming stimulus would create a disturbance that could rattle around among the units for a while, until the system reached some stable equilibrium, which would constitute the brain’s response to that stimulus. The final configuration would depend on both the stimulus and the system’s internal state—indeed, its entire history. Again, however, there was the question of whether convergence to a stable state was guaranteed, and how long it might take to achieve. Ashby’s estimate was not encouraging. In a fully connected network, with every unit’s output going to every other unit’s input, he found that the settling time would exceed the age of the universe, even for fairly small networks. Thus if the homeostat mechanism is to be salvaged as the basis of a theory of the brain, connectivity must be severely limited.
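Ashby's settling-time worry is easy to reproduce in simulation. The sketch below is a toy of my own, not a model of Ashby's hardware: each unit damps a weighted sum of the other units' outputs toward zero, and whenever a unit strays outside its safe range its input weights are redrawn at random, a crude stand-in for the homeostat's step-switch reconfiguration.

```python
import random

def settle(n_units, max_steps=10000, seed=0):
    """Toy Ashby-style network: n fully connected units, each trying
    to hold its output near zero. A unit whose output leaves the safe
    range [-1, 1] has its input weights redrawn at random. Returns the
    number of steps until every unit is simultaneously in range, or
    None if the network never settles within max_steps.
    """
    rng = random.Random(seed)
    w = [[rng.uniform(-1, 1) for _ in range(n_units)] for _ in range(n_units)]
    x = [rng.uniform(-1, 1) for _ in range(n_units)]
    for step in range(max_steps):
        # Damped update: each output is half the weighted sum of inputs.
        x = [0.5 * sum(w[i][j] * x[j] for j in range(n_units))
             for i in range(n_units)]
        unstable = [i for i in range(n_units) if abs(x[i]) > 1.0]
        if not unstable:
            return step
        for i in unstable:           # redraw the offending unit's weights
            w[i] = [rng.uniform(-1, 1) for _ in range(n_units)]
    return None
```

A single unit settles immediately, and a four-unit network like Ashby's typically settles within a handful of steps; as the count of fully connected units grows, the random search for a mutually stable configuration takes rapidly longer, which is the scaling problem Ashby identified.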
Ashby wrote in his diary that his cybernetic experiments were merely an amusement and a hobby, unrelated to his professional responsibilities of caring for psychiatric patients. Pickering refuses to accept this disclaimer; he sees links between the cybernetic hobby and treatments for mental illness, notably electroconvulsive shock and prefrontal lobotomy. Pickering writes:
[I]t is hard not to relate Ashby’s later thoughts on the density of connections between homeostat units, and their time to reach equilibrium, with lobotomy. Perhaps the density of neural interconnections can somehow grow so large that individuals can never come into equilibrium with their surroundings, so severing a few connections surgically might enable them to function better.
Pickering goes on to remark, “Ashby often failed to drive home these points in print, but that proves very little.” This strikes me as a rather high-handed rhetorical maneuver, insisting on what Ashby must have believed even while acknowledging that he didn’t quite say it. Yet the supposition is not utterly far-fetched; there’s no doubt that Ashby was indeed a proponent of therapeutic interventions such as electroconvulsive shock.
The evidence is clearer in the case of Walter. After Elmer and Elsie, Walter built a second generation of tortoises to explore capacities for memory and learning. He found that conflicting stimuli could induce an “experimental neurosis.” Walter wrote:
The “instinctive” attraction to a light is abolished and the model can no longer approach its source of nourishment. This state seems remarkably similar to the neurotic behavior produced in human beings by exposure to conflicting influences or inconsistent education.
Pickering comments: “[A]fter driving his tortoises mad, Walter cured them.” He proposed three therapies: leaving the machine without stimuli for a while, switching it off and on, and disconnecting some of the circuits. And Walter made the psychiatric analogy overt:
Psychiatrists also resort to these stratagems—sleep, shock and surgery. To some people the first seems natural, the second repulsive, and the third abhorrent. . . . [B]ut our simple models would indicate that, insofar as the power to learn implies the danger of breakdown, simplification by direct attack may well and truly arrest the accumulation of self-sustaining antagonism and “raze out the written troubles of the brain.”
For the one other professional psychiatrist among Pickering’s six subjects—R. D. Laing—links to cybernetic ideas are tenuous. Pickering admits that “Laing did not describe himself as a cybernetician,” but Pickering nonetheless sees something cybernetic about Laing’s “rumpus room” approach to psychotherapy, in which patients and caregivers “were left to adjust and adapt to one another, without any prescription how that should be accomplished.” I suppose there’s an echo here of the homeostat network settling toward equilibrium, but it’s an awfully faint echo.
Gregory Bateson also gains entry to this group because of his work in psychiatry—one aspect of a highly varied career—and he at least had a clear connection with the cybernetic community, as a founder of a conference series on cybernetics in the 1940s. His psychiatric work hinged on the idea of the double bind, a situation in which none of the available responses is satisfactory, as with Walter’s neurotic robots.
With Stafford Beer and Gordon Pask, cybernetics moves well beyond the mechanistic models of Walter and Ashby and also leaves behind the preoccupation with understanding the brain and the mind. Beer’s early interests were operations research and business management, and he drew up plans for running a steel mill as a “cybernetic factory.” He also explored biological computing and at one point thought of recruiting a pond ecosystem as the manager of a factory. Then he went to Chile during the regime of Salvador Allende to manage the nation’s economy on cybernetic principles.
Pask had an early encounter with Norbert Wiener and was so impressed that he wound up earning a doctoral degree in cybernetics. (There can’t be many of those.) He worked in the theater (creating lighting systems that adapt to the performer) and architecture (designing buildings that adapt to their occupants). He also developed teaching machines and chemical computers.
In The Cybernetic Brain Pickering has gathered a trove of stories about a community whose work deserves to be better known. He has done careful and thorough historical research, reading diaries and other unpublished material; in one case he fills in details of a project that had almost sunk from memory—Ashby’s failed attempt to follow up on the homeostat work with a more elaborate system called DAMS. Pickering has also traced the ramifications of cybernetic thought into dozens of surprising (and sometimes dark) corners: the novels of William Burroughs, the music of Brian Eno and that of John Cage, biofeedback, the hallucinogenic drug culture of the 1960s, the cellular automata of Stephen Wolfram, the architectural patterns of Christopher Alexander, the robots of Rodney Brooks. And he makes acute observations about the social context of cybernetics: For many practitioners it remained a sideline or a kitchen-table hobby simply because it never became an established academic discipline or had much institutional support.
The stories told here are deeply engaging. I am grateful to have them. However, I must add that they are told and interpreted from an ideological viewpoint that I find silly and exasperating. Pickering’s aim in this book is not just to revive interest in cybernetics; he wants “to challenge the hegemony of modernity.” And modern, in Pickering’s vocabulary, is a word that refers not to chronology but to attitude or sensibility. Modern science is not just science as it happens to be practiced now; it is the science of cause and effect, of command and control. The ambition of modern science, he writes, is “the achievement of general knowledge that will enable us to calculate (or, retrospectively, explain) why things in the world go this way or that.” These are not words of praise.
For Pickering, cybernetics offers a nonmodern alternative, a science that can focus on performing rather than knowing, engaging with the world without trying to control it or even, perhaps, to understand it—rather like Elmer and Elsie bumbling toward their charging station. He wants to celebrate the “ontology of unknowability,” resisting the scientist’s urge to open up the black box, catalogue its parts and draw a diagram of how it works. The “sketches of another future” in Pickering’s subtitle refer to his wistful hope that the world might make room for such a nonanalytic science.
I am moved to respond to this notion on two levels. First, although the various gadgets built by Walter and Ashby were more performative than cognitive, that’s surely not true of Walter and Ashby themselves. They wanted not just to model brainlike behavior but to understand how the model worked, and ultimately to gain some insight into how the brain itself works. Stafford Beer later argued that some systems are so complex that true understanding will always elude us. He may have been right, but that does not blunt the desire. As David Hilbert put it: “We must know. We will know.”
Second, it is helpful to keep in mind that we usually speak of cybernetics in the past tense, and for good reasons. The framework of ideas erected by Wiener, Walter, Ashby and their followers was not a failure or a dead end, but it is not the armature on which recent students of the brain or of other complex systems choose to build their theories. In computer science these days, the most devastating criticism that can be leveled against an idea is that “it doesn’t scale”—it might work for small problem instances but not for big ones. The methods of Walter and Ashby didn’t scale, as they discovered for themselves when they tried to build systems larger than a two-tube tortoise or a four-unit network of homeostats. Their work is no less fascinating for that, but it remains another past, not another future.
Brian Hayes is Senior Writer for American Scientist. He is the author most recently of Group Theory in the Bedroom, and Other Mathematical Diversions (Hill and Wang, 2008).