FEATURE ARTICLE

Science in 2006

A former IBM chief scientist looks ahead from 1986 into the twenty-first century

Lewis Branscomb

Originally published in the November-December 1986 issue of American Scientist.

When the National Science Board met in the spring of 2006 to discuss the budget for 2007, there was a debate about the competition between "big" and "little" science. There had not been such a debate for a long time, because all areas of science were now supported by instrumentation of a power undreamed of back in 1986. Intelligent instruments were not only capable of extraordinary resolution and sensitivity; they had attained a degree of control and a complexity of synthesis beyond the reach of science twenty years before. All science had become capital intensive.

What triggered the debate was the Board's responsibility for overseeing all U.S. scientific operations in Antarctica, where amazing discoveries in the biological adaptation of microorganisms had been made, and where the record of terrestrial climate variations had been found to be completely preserved in the ice cores. As a result, both environmental and climatological research had been greatly expanded.

The debate concerned a discipline new to the frozen continent: high-energy physics. It seems that the Superconducting Super Collider (SSC) had been finished in late 1999, just in time to celebrate the arrival of the twenty-first century. The presidential campaign of 2000 was about to begin, and recapturing the U.S. position in basic science was a major issue. Since the Japanese had been outspending the Americans in science for a decade, as had the Europeans collectively, the U.S. presidential candidates were pushing for the internationalization of "big science" projects.

High-energy experimentalists (and accelerator designers) were dissatisfied with the SSC almost before it was built, and had proposed an even bigger machine: a ring 5,000 km in circumference, which offered uniquely low tunneling costs and lent itself to international sponsorship and operation. The scheme was to build the toroidal tunnel 1,000 m under the surface of central Antarctica, where the ice is over 2 km thick for millions of square kilometers. An unmanned robot tunneling tool navigated by inertial guidance would melt the ice as it went, using steam supplied by a small fission reactor. (All reactor wastes were to be removed, in accordance with the Antarctic treaty, and burned in a fusion waste converter being built at Oak Ridge.) The treaty provided the framework for the project, and the National Academy of Sciences had suggested an International Cosmological Year (the "ICY") to mobilize the international community behind the accelerator project.

Merging of Science and Engineering

The use of advanced engineering technology to further scientific objectives was, of course, not new in 1999. Back in 1986, the ability to reconstruct genetic material and to prepare new inorganic materials, unknown to nature, with molecular beam epitaxy, gave a hint of things to come. More and more, scientists were constructing a nature of their own design. With the increased complexity of the instrumentation, driven by internal computers that were directed by tens of thousands of lines of microcode programming, it had become harder to distinguish between the disciplines of science and engineering.

The main difference now was in point of view. Scientists still pursued questions, and engineers answers, but there was little distinction in the tools they used. Both scientists and engineers were heavily dependent on high-speed computer equipment to model their ideas and to simulate experiments and prototypes. Indeed, the theoretical experiments of the scientist were hard to distinguish from the prototype simulations of the engineer. Both ran on teraflop (10¹² floating-point operations per second) parallel processors conveniently accessible through desktop workstations.

Complex Instrumentation

Despite the enormous power of the new instrumentation, without which none of the fantastic new experiments could be done, many scientists were a little nostalgic about the old days. (At least the chemists, biologists, and atomic and molecular physicists were. High-energy physicists, astronomers, and oceanographers had long since been committed to complex facilities requiring extremely sophisticated engineering; that's why those fields were called "big science.")

They could remember when, if you were clever, you could take apart the sensor system in your commercial instrumentation, make some innovative changes, and squeeze out another factor of two in resolution or sensitivity. Or write a new data-analysis routine that discriminated better against unwanted signals. The engineers from the instrument company always showed their gratitude, and incorporated many of these good ideas in the next model. Now, in 2006, the signal-processing and analysis capabilities of the instrument were so complex and sophisticated that you had to call in applied mathematicians and software engineers to make a modification. And you needed the approval of the instrument company even for this, because any change to the system would void its warranty.

Sharing Facilities

The other thing they were wistful about was, of course, the fact that everyone had to share this instrumentation with many others. In the old days you could call your apparatus your own. (Does anyone remember science in the days of "love and string and sealing wax," as Arthur Roberts's ballad of 1945 called it, when physicists built all their own apparatus from scratch?)

There were two reasons everyone had to share now. First, what chemist could get a grant big enough to pay for a dozen of these new tools, when each one cost a quarter of a million dollars? That was already happening back in 1986, and only a few lucky scientists in corporate laboratories, national laboratories, or special institutes commanded such facilities. In 2006 you share, or you can't compete.

There was another reason for sharing. The threads of different branches of science had become interwoven, rather like the strands of the DNA whose manipulation had motivated so much of the new instrumentation. Computer scientists were beginning to experiment with the biological replication of crystalline structures for storing data. Chemists were building atomic surface structures designed to catalyze specific reactions at their theoretical optimum rates. From a purely technical point of view, very few people could command the immense range of scientific and technical knowledge necessary to master the intricacies of such a broad spectrum of research tools. Scientific progress now depended on the use of the full range of tools available, from mathematics to biology.

Reintegration of the Sciences I: Neurophysiology, Brain Studies, and Cognitive and Behavioral Science

But a more profound change had transformed the character of the research community in 2006. To make important progress you not only had to master the tools of a broad range of disciplines; you had to draw on the ideas and points of view of many disciplines. To take just one example, scientists who had mastered the neurophysiology of the brain, and had used tools like chemically specific NMR imaging to correlate biochemical activity with cognitive activity and behavior, were now beginning to make extraordinary progress in building a scientific model of the human brain in which physiology and personality were integrated in a reasonably well-understood way. The benefits promised to be immense for addressing the causes and treatment of mental illness, identifying the driving forces of aggressive and self-destructive behavior, and helping computer scientists understand the human functions their inventions were designed to support or mimic. Neurophysiology, cognitive psychology, biochemistry, and behavioral science had all become engaged with the challenge of comprehending the human brain and mind.

Reintegration of the Sciences II: Cosmology, High-Energy Physics, Astrophysics and Mathematics

The same integration of several branches of science had occurred in other fields. In 2006 the ICY machine was still being debated by the politicians; the Malaysian delegate to the United Nations wanted assurance that any constituent particles of the quark that might be discovered would be declared the Common Heritage of Mankind. Meanwhile the SSC, still the workhorse of high-energy physics, had long ago proved to be a kind of time machine for exploring what happened during the first seconds of the most recent creation. The conceptual boundaries separating theoretical astrophysics from particle physics had long since vanished. But that was really already true back in 1986. The new news was the remarkable resurgence of pure mathematics as physicists rediscovered its power, ending a long estrangement stretching from the 1930s to the 1980s.

Reintegration of the Sciences III: Biochemistry, Medical Sciences, and Molecular, Cellular and Developmental Biology

Of course there were other areas of reintegration. Back in the 1960s there were so many biology societies, each with a journal, that the only way you could get your work read by biologists in some other subfield was to publish in Science. (Physicists and chemists frequently protested to Philip Abelson, still remembered twenty years later as that most distinguished editor of Science, that they had read their last article on the effects of genetic mutation on the mating habits of the rhinoceros beetle.) Some biologists, looking for a classic if more obscure place to publish without narrow disciplinary constraints, were even driven to publishing in the Proceedings of the National Academy of Sciences.

But by 1986 it was already clear that the fast track was the marriage of molecular biology to cellular and developmental biology. Biochemistry had already become indistinguishable from molecular biology. And by 2006, biology had not only reached out to embrace much of clinical medicine, but was well on its way to understanding the mechanisms for the genetic encoding of instinctive behavior. It was even able to use the new computers to predict the structures and energetics of complex molecules from calculations based on first principles. The tools of the biological engineer were also on every biologist's desk. The human genome had finally been mapped in a mammoth supercomputer project in the 1990s, and was now available for $9.99 on a CD-ROM disk that could be plugged into your 40-MIPS personal computer.

Reintegration of the Sciences IV: Geophysics, Meteorology, Oceanography and Paleontology

The earth and planetary scientists were, of course, quick to point out that they were really the first to bind together the threads of scientific progress back in the period from 1960 to 1980, when they began to study the planets as integrated systems. They were no strangers to complexity. Geophysics, meteorology, and oceanography were very difficult areas of research. Even in the 1960s it was realized that you had to understand the coupling of the oceans and the atmosphere, and their roles as dynamic systems driven by the planetary heat engine, modified by terrestrial topography. The outstanding achievement of the era was the unraveling of plate tectonics, in which zoology and botany as well as geomagnetics, paleontology, and geology had played an important part. By 2006 this entire field of science was called simply planetary science. The last department of meteorology vanished in 1997. Weather forecasting was now done by automated sensors, with satellite data collection driving planetary simulations on supercomputers. Weather forecasters were trained in radio and television schools.

Reintegration of the Sciences V: Geography

Despite the enormous attention being paid to the reintegration of the sciences back in the late 1980s, everyone had forgotten that a universal subject in our high schools decades before had been geography. By the 1980s the only people you could find to lament the demise of geography were the trustees and staff of the National Geographic Society. They could not understand how the subject had ever slipped from the curriculum, since their society's magazine, with eight million member-subscribers, substantially outsold its rivals in other fields. The lay public still cared about geography, long after the pedants abandoned it.

By 2000, it was recognized that modern geography is the integrated view of man and his planet, the bringing together of ecology, the study of human habitats, geomorphology, social anthropology, and economics; in short, all the tools necessary to understand how human beings should view their fragile planetary home. Once again geography became a popular course of study in school, particularly since students were no longer required to memorize state capitals and map features. People carried such information with them in their pocket data banks.

Reorganization for Reintegration: Disciplines as the Protectors of Standards

The universities had realized in the late 1980s that they had to reorganize to take advantage of the reintegration of science, and it was a painful period. Many were quick to deplore the chauvinism of academic disciplines as a barrier to progress; department heads had lost most of their clout to the new research institutes on campus. Murray Gell-Mann's Santa Fe Institute (now named Pickens Institute for Science, after T. Boone Pickens's generous gift of $150 million) had been founded as a postgraduate institution with no departments at all. The Carnegie Institution of Washington, which had celebrated the centenary of its congressional charter in 2003, flourished in the new pan-disciplinary environment, having adopted this approach when it was founded over 100 years ago.

Peer Review for Pan-Disciplinary Science

But trouble began when bitter controversies arose over peer review of the new cross-disciplinary research programs. It was almost impossible to find reviewers with the required breadth of view. Having one reviewer in each of the fields involved was worst of all; no such proposals were rated acceptable. Even when qualified panels could be found, nobody could figure out which discipline's budget should be charged for the work. This situation was largely responsible for the fact that two dozen of the top scientists in the country swore off government grants and moved to Santa Fe with Murray Gell-Mann.

The reintegration of science exacted its price in glib talkers whose dabbling in a variety of fields equipped them for excellent high-table conversation, but whose work was often sadly lacking in rigor. Many scientists became deeply concerned about how standards could be maintained. Finally, a suggestion made back in 1985 by Nobelist Herb Simon was resurrected and put into effect. The rule was that no one was allowed to publish pan-disciplinary pronouncements until they had published at least one solid paper in each of the disciplines drawn upon. As a result the word "discipline" took on new meaning.

At that point, someone recognized that the arts and humanities had suffered from this problem for decades. In 2005, one wag suggested that academic musicians, painters, and poets could create whatever they liked to call art without restraint, provided they first showed that they could compose a tune you could whistle, draw a picture you could recognize, or write a poem that rhymed. Artists and writers bridled at this patently anti-intellectual suggestion, and noted that if science was engaging nature in its full complexity and at high levels of abstraction, so should art.

Specialization Versus Reintegration

Thus the traditional departments in the universities were weakened, but they did not disappear. Universities were still the preferred institutional setting for federally funded research. Their faculties had to qualify to teach organized courses in specific disciplines. The departments offered the only satisfactory arrangement for setting standards for the awarding of degrees in each discipline. They were, and still are, the defenders of rigor in the field. They set the tests for compliance with the new "Simon standard."

So the tension between specialization and integration in science continued and is still with us. As a matter of fact, it was noticed that although Pickens Institute had no departments, there were committees to award Ph.D.s in each discipline, so that the students could qualify for faculty positions in other universities. Pickens Institute by 2006 looked a lot like Rockefeller University in adobe architecture.

Peer Review: Awards for Innovative Science

Peer review of pan-disciplinary research only exacerbated problems that had long existed. Scientists kept their best ideas out of grant proposals, which were instead written around work already largely completed and moving toward publication. When the National Science Foundation and the National Institutes of Health could no longer get competent people to referee these meaningless proposals or to sit on study sections, the scheme was changed.

The new funding system for science had two parts, one for mature scientists and one for young investigators. Neither group was required to describe in detail the work they planned to do in the future. The mature scientist submitted papers and other evidence of completed work of the last three years. This material was evaluated by peer review and given a rating, which was then used to determine the probability of continued funding. The scientist's institution received an additional sum, amounting to 25% of the grant, for the support of young scientists with less than three years of postdoctoral experience. Young investigators seeking startup grants could also demonstrate their qualifications for research through personal testimonials from mature scientists.

In this way, the research support system evolved away from its procurement tradition of the twentieth century, and in the twenty-first century became an investment in people.

The Integrity of Experimental Science: The Meaning of Data

There were many reasons the disciplines now found it more difficult to assure the integrity of their science, other than the new complexity of interdisciplinary research. First of all, the form in which experimental data were presented to the investigator was so different from the sensor output, as a result of signal processing in the instrument, that it was no longer easy to track the experiment intuitively. In an apparatus with a time resolution in the femtosecond range or an experiment with 100,000 concurrent data channels, it was impossible. As a result, much more analysis was necessary to know whether the signal represented an instrumental (or now an algorithmic) artifact or was characteristic of the phenomenon being studied.

By 2006, the problem was often even more complicated than that. The processing of experimental data with signal-to-noise ratios of less than one was not unusual in 1986; progress had been made in digital noise filters and other techniques, and if the phenomena studied were sufficiently stable, the automated instrumentation had great patience. One could integrate over hours or even days to pluck small signals out of noisy data.
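
The arithmetic behind that patience is worth making explicit: averaging N independent sweeps over a stable phenomenon leaves the signal unchanged while the noise shrinks by the square root of N, so the signal-to-noise ratio grows as sqrt(N). A minimal sketch of signal averaging in Python; the language and the numbers are illustrative choices, not the article's:

    import numpy as np

    rng = np.random.default_rng(0)
    true_signal = 0.1   # amplitude well below the noise floor
    noise_sigma = 1.0   # per-sweep noise, so a single sweep has SNR = 0.1

    for n_sweeps in (1, 100, 10_000):
        # each sweep is one noisy measurement of the same stable phenomenon
        sweeps = true_signal + rng.normal(0.0, noise_sigma, size=n_sweeps)
        estimate = sweeps.mean()
        expected_snr = true_signal * np.sqrt(n_sweeps) / noise_sigma
        print(f"{n_sweeps:6d} sweeps: estimate {estimate:+.3f}, "
              f"expected SNR {expected_snr:5.1f}")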

But once intelligent digital signal-processing was refined, it became possible to include algorithms that specified what was already known about the object of the study from theory and other experiments. Computers with powerful color-graphics and image-processing capabilities were used to present pictures of viruses, made possible in part because the symmetry properties of the virus were assumed to be known. But what exactly does such a result mean? It is neither theory nor experiment; it is the two intertwined.

It is nonetheless a powerful and important result, even if verification of its "truth" is at best incomplete. Science had gotten a lot more complicated. Self-deception had become easier. Teachers of science realized they had to spend some time discussing the philosophy of science with their students. The simplistic model of the scientific method, which was never a realistic description, was clearly obsolete. Yet the damage to science from scientific fraud, even inadvertent self-deception, was more threatening than ever, particularly since peer review was an even less precise measuring engine than it had been in the 1980s.

Theoretical Experiments

Theorists, and indeed mathematicians, had a similar problem. The notion of theoretical experiments had become popular in the mid-1980s, partly because of Kenneth Wilson's success in condensed matter theory and partly because the high-speed computers for which Wilson campaigned became generally available to scientists. Could you learn something new about nature-something that led to additional theoretical understanding-from a numerical solution of many-body problems based on current theoretical models? Sometimes the computer predicted a phenomenon unknown to observation, for which one could search experimentally. But frequently one started with equations in which there was great confidence, but used calculation to travel far from the realm of empirical observation. How much reality should be attributed to such results?
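
To make the notion concrete, here is a minimal sketch of one such theoretical experiment, with the two-dimensional Ising model of magnetism standing in for the many-body problems in question; the model, the parameters, and the language are illustrative assumptions rather than anything the article specifies:

    import numpy as np

    # Metropolis Monte Carlo for the 2D Ising model. The equations (the Ising
    # Hamiltonian with coupling J = 1) are trusted; computation explores their
    # consequences, here the spontaneous magnetization below the critical
    # temperature T_c of about 2.269 in units of J/k_B.
    rng = np.random.default_rng(1)
    L = 32          # lattice side
    T = 2.0         # temperature, chosen below T_c
    spins = rng.choice([-1, 1], size=(L, L))

    def sweep(spins, T):
        for _ in range(spins.size):
            i, j = rng.integers(0, L, size=2)
            # energy cost of flipping spin (i, j), periodic boundaries
            nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                  + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            dE = 2 * spins[i, j] * nb
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                spins[i, j] *= -1

    for _ in range(200):
        sweep(spins, T)
    print("magnetization per spin:", abs(spins.mean()))

The printed magnetization is a prediction of the model, not a measurement of any magnet; how much reality to attribute to it is exactly the question posed above.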

Of course, in certain cases these theoretical "experiments" were a godsend. For example, there were the special-purpose, massively parallel computers built to explore the equations of quantum chromodynamics. A year of processing at speeds of 10 gigaflops had told us what the theory predicts for the mass of the proton, surely a worthwhile result, given the price tag on high-energy accelerators. The mathematicians were unable to look on all this with their customary disdain, because they were caught up in their own debate about the legitimacy of proving theorems by exhaustive calculation. While mathematics had by 2006 long since ceased to be the smallest of the sciences in terms of research cost per person (anthropologists were now in sole possession of this honor), there were still a few recalcitrant souls who refused to accept any proof that could not be made while lying on a beach in Rio de Janeiro.

Keeping Research Rooted in Universities: The Problem of Remote Facilities

The disappearance of small science and the unavoidable requirement of sharing expensive facilities had a number of other consequences. Perhaps the most troublesome was the difficulty, for many years the bane of experimental high-energy physicists, of having to leave the campus and one's students to travel to research facilities at a remote location. Some had concluded that the problem was insoluble, and that the country should establish an expanded system of smaller national laboratories, patterned after Germany's Max Planck Institutes, for conducting advanced experimental research.

But the leaders of some of the greatest of the existing laboratories, such as Leon Lederman of Fermilab, had insisted that they must find ways to keep their field of science and their facilities rooted in the academic world.

Computer Networks for Science

Fortunately, technology came to the rescue, not to solve the problem but to make the situation tolerable, certainly better than it was in the early 1980s. For, starting in 1984, the American universities, which had already interconnected their computer-science laboratories through ARPANET and CSNET, hooked those networks to BITNET, which linked computer centers on hundreds of campuses, giving thousands of departments access to one another. European universities connected up through EARN, the European Academic Research Network, and the Canadians with NETNORTH. With the help of transnational digital trunks donated by IBM, universities in Europe, parts of the Middle East and Africa, Canada, and the United States were interconnected. In 1986 nine Japanese universities were linked with the network, and soon thereafter Australia, Hong Kong, and Singapore joined.

The World University Network: WUNET

The leaders of EARN proposed that these several networks, already interconnected, join together to promote worldwide sharing of scientific knowledge. This voluntary association of networks would be known as the World University Network (WUNET, or "One-Net"). The creation of WUNET made several things possible. First, as scientific facilities became automated it was more often possible to log on to an experiment elsewhere from a workstation at the home university, and to drive a distant accelerator, telescope, or satellite by remote control. Second, facilities could more easily be shared internationally, which substantially increased both the joint planning of major science facilities and the interdependence of nations.

Finally, and most important, the computer network helped remove the sense of isolation felt by small research groups away from the main centers. It was quickly realized, of course, that this was the universal position of the developing countries, and they were brought into the network as rapidly as their scientific leadership could overcome the hesitation of local government officials accustomed to protectionism in telecommunications.

Worldwide Scientific Communications Versus Protectionism

Many people were astonished that the network grew so rapidly. In the United States and Europe, the number of university computers hooked up was doubling every six months back in 1986. The reason was that it was a peer network, requiring neither central control nor a central budget. Universities could join or leave as they chose. By the time the information protectionists in the United States, Japan, and Europe began to raise questions about all this transnational scientific traffic, it was already evident that there was no turning back. International projects were dependent on it. Travel funds to replace it were prohibitive. The gains in productivity, duplication avoided, and the healthy atmosphere of a vital world science were too valuable.

Science-SAT: A Backbone for WUNET

The result of all this was not a set of transborder constraints on scientific communications but a decision at the Economic Summit of 1988 to give the network a new global communications backbone: a system of satellite channels and optical fiber trunks dedicated to international science. With the availability of broadband, demand-assignment communications, it was now possible to transfer large databases, computer programs, book manuscripts, and scientific images. Remote, interactive graphics access to supercomputer facilities around the world became possible.

Science had established itself as a global enterprise at last. The only problem that remained incompletely solved was automatic language translation, the Achilles' heel of artificial intelligence research. So everyone communicated in mathematics, Fortran, Lisp, Prolog, and English-except for the French when the Gaullists came back to power.

The Wired University: Campus Networks

Computer networks had also transformed the campuses, for now students, staff, and faculty could communicate with a minimum of inconvenience. Each individual could more nearly set his or her own schedule, independent of the others. Groups with common interests were able to find each other through electronic bulletin boards. Running experiments could be checked from the dormitory or the faculty office, or even from home. And of course you could now access WUNET from almost anywhere, literally almost anywhere, by dialing in through digital cellular radio services, using a battery-operated computer with a built-in transceiver.

Complexity, Irreversibility and Nonlinearity

The reintegration of science had already introduced quite a bit of complexity to research by 2006, but the combination of new mathematical approaches with powerful computers made possible a direct attack on nonlinear, irreversible problems. The philosophical debate over whether the new approaches required fundamentally new physical principles, such as those proposed in Ilya Prigogine's controversial views on time back in the 1980s, had subsided with modern physics intact and more powerful than ever. But it was clear that physics had come a long way since Newton. For the first time, rapid progress was being made in the theory of turbulence, of nonequilibrium states in multiphase condensed matter, and of radiating atmospheres far from conditions of local thermodynamic equilibrium. Again, this tended to bring science and engineering closer together, as science learned how to attack from first principles a range of problems traditionally approached empirically.

Resurgence of Pure Mathematics as the Handmaiden for Science

None of this could have been accomplished without the reemergence of pure mathematics as an essential element of new progress in physics. For three or four decades prior to the 1980s, mathematics and physics went their separate ways. Mathematicians explored some wonderful new areas in topology and new algebras, but physics made little use of them. And while the mathematicians may have enjoyed the freedom of their perceived irrelevance to practical matters, support from science budgets for mathematics dwindled to dangerous levels.

But the reunification of science that began in earnest in the 1980s brought mathematics back into the mainstream. Once again physicists and mathematicians began exploring arcane areas of mathematics, looking for radical new avenues of scientific progress. One early example was the study of cellular automata as an embodiment of scientific principles in algorithms that, operating on random noise, would generate patterns and structures like those found in nature. Mathematicians in large numbers shed their distaste for becoming dependent on technology and took aesthetic delight in the use of color graphics to explore such areas.
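
A small sketch of the kind of study described here: an elementary cellular automaton applied, step by step, to a row of random bits. Rule 30 is used below because it was a staple of 1980s cellular-automaton research and famously produces organic-looking textures; the article itself names no particular rule, so treat the choice as illustrative:

    import numpy as np

    rng = np.random.default_rng(2)
    width, steps, rule = 64, 32, 30
    # decode the rule number into a lookup table over the 8 neighborhoods
    table = [(rule >> k) & 1 for k in range(8)]

    row = rng.integers(0, 2, size=width)   # start from random noise
    for _ in range(steps):
        print("".join(".#"[c] for c in row))
        left, right = np.roll(row, 1), np.roll(row, -1)
        # each cell's next state is looked up from its left, own, and right bits
        row = np.array([table[4 * l + 2 * c + r]
                        for l, c, r in zip(left, row, right)])

Within a few steps, structured triangles and streaks emerge from the noise, which is the phenomenon the passage describes.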

Science and the Arts

Indeed, the power of information science to broaden the scope of artistic creativity brought the natural alliance of science and art into full view. By 2006 colleges of arts and sciences were alive with aesthetic excitement, which took many forms. The word "arts" in the school title took on new meaning. The old electronic music, produced on primitive synthesizers back in the 1960s and 1970s, gave way to music created through the application of artificial intelligence to harmony and structure. Dancers created their own accompaniment by translating holographic images of their bodies into music. Graphic arts received stimulation from the extraordinary power of three-dimensional color graphics, extended to video form.

These new artistic media were not only offshoots of science; they were the external representation of scientific results. There had always been an important aesthetic element in criteria for scientific "truth"; scientists spoke of an "elegant" proof, a "beautiful" result, a "well-behaved" solution. But simplicity was the cardinal ingredient of Newtonian scientific beauty. Now with new, more powerful techniques for visualizing scientific models and concepts, and with the theoretical power to tackle complex phenomena, the opportunity for a flowering of the artistic potential of science was at hand.

There was a new interest in the life of Leonardo da Vinci, who best embodies this tradition: the conviction that the eye is the most powerful tool of discovery, and that complex phenomena can be understood without draining them of the complexity that is fundamental to nature.

Social Science Links with Natural Science

The powerful trend toward the reunification of the disciplines made slower progress in the social sciences. Creative intuition is a valuable, even essential, tool for both scientific and artistic progress. In the social sciences, however, intuition had long proved a dangerous trap. It was easier to be objective when man studied nature. Man's study of man is the ultimate challenge. But the challenge had to be faced.

Behavioral science had built links to many areas of natural science through their common interest in man: his physiology and his brain. Computer science was leaving behind its empirical, engineering-oriented phase and was making steady fundamental progress in supporting the human mind. By 2006 you could not earn a Ph.D. in computer science without in-depth studies in psychology. To design machines to help people, you had to know something about people.

As the first science devoted to the study of a human artifact, computer science had faced an almost insuperable difficulty. How can one build a quantitative conceptual base for the field without a reliable model of the human mind? The primitive twentieth-century notion of "artificial intelligence" used a caricature of a human for its paradigm. By 2006, studies of the brain permitted behavioral science to link up with computer science to explore how computers could not only help man with logical thinking but relate to his personality in a compatible, supportive way. The study of artificial personality had caught up with the study of artificial intelligence.

A Science of Learning

This bridge to natural science had a stimulating effect throughout the social sciences. The new information technology gave psychology a quantitative research tool. Molecular biology and modern genetics shed new light on the behavior of all living species, including man.

The biggest impact was in education, where the new information tools made it far easier to quantify learning experiences. The study of learning flowered as a branch of basic cognitive science. Now every learner, supported by an appropriately intelligent workstation, could measure and maximize his own rate and effectiveness of learning. Once the educational system adapted to the concept of self-paced learning (and the transition was still far from complete in 2006), there was an explosion of educational productivity, which at last began to release resources that could be invested in the essential, human side of education: shared values and experiences, the enhancement of wisdom, and a firmer base of knowledge.

Science in a Societal Context

Another force was even more important in creating bridges between the natural and the social sciences. Natural scientists had long realized that their work had a social context which it was important to understand. For many years they had debated the impact of their discoveries, but they were slow to look to collaboration with the social sciences as a means of influencing the effects of new technologies on society. There were serious debates about such cooperation, for it was not without its controversial elements. Some of the oldest retired scientists in 2006 could still remember a naive but well-intentioned program sponsored by the National Science Foundation back in the early 1970s called Research Applied to National Needs (RANN). In RANN the participation of social scientists had been mandated in each project. The promise of social benefits inherent in the program's name was not consistent with the focus of the projects on fundamental research. It gave collaboration between natural and social science a bad name for a long, long time. But by 2006 it had become clear, desperately clear, that without the direct involvement of large teams of scientists in global problems, the future for mankind might be dismal indeed.

This conviction had been sealed, finally, by a calamity and a near calamity. Back in 1988, as Americans prepared for the presidential campaign, it had become all too obvious that environmental destruction from acid rain was no longer a problem just for the Northeast and Canada. Indeed, it was the victory of the Green Party in Germany, following public outrage at the death of the Black Forest and of countless lakes and streams, that finally persuaded the U.S. administration to take urgent action. By then it was too late to prevent serious economic harm, as well as a significant impact on the quality of life. Scientists became more than ever convinced that they could not just issue the warning and ask for research funds to study such impending problems. They had to be part of the solution.

The clincher was, of course, not acid rain but the threat of mutual nuclear destruction. The political campaigns of 1988 and 1992 were boisterous and emotional, as scientists, young people, and many other concerned citizens challenged the strategic defense policies of the leading candidates. But it was not until the problem of war and peace was approached as a matter worthy of serious research, from the perspective of the origins of conflict and modes of negotiation on the social science side, and from the viewpoint of the design of fail-safe strategic systems on the science and engineering side, that a more rational and less emotional attitude began to emerge.

Many thought that the turning point was the establishment of five International Centers for the Study of Human Conflicts in Washington, Moscow, Beijing, Paris, and Rio de Janeiro. These centers were based on the very successful model of the international laboratories of the Consultative Group for International Agricultural Research (CGIAR), and like them were funded jointly by governments (through their arms control agencies) and by foundations and international corporations.

Quantification in the Social Sciences

There was another very important reason why the social sciences began to achieve equal footing with natural science in terms of government support and academic prestige: they had become substantially more quantitative and rooted in robust collections of data. During the difficult years of the early 1980s, when social science budgets were hard hit, the National Science Foundation and private foundations worked hard to sustain the major data collections-especially the longitudinal collections that, once interrupted, could not be rebuilt. With improving analytical methodology and substantial computer support, behavioral scientists began to provide an interesting degree of predictive power.

Since the turn of the century, even the macroeconomists had been publishing a significant number of papers in which their mathematical models and interpretations were tested against credible collections of empirical data. Too bad Wassily Leontief did not live to see it happen.

So, by 2006, the National Science Foundation had already had a Directorate of Economic and Social Sciences for seventeen years (it was formed in the first year of the administration elected in 1988), and by this same year fully 23% of the members of Sigma Xi were from human and social science disciplines. It had all happened quite naturally.

Post-Handbook Engineering

At the same time that science and even art were becoming more technological, engineering was also changing. It had already, by 1986, largely outgrown its handbook past. It was simply not possible to keep the printed handbooks up to date with the extraordinary progress in the science base for engineering. In materials science, for example, instead of building things with materials in stock, whose known properties were in the handbooks, engineers were increasingly creating new materials especially designed for a specific purpose. Many of these materials were complex composites unknown in nature. The equivalent of the data tables in the handbooks became algorithms and models of design and fabrication processes. For this, the materials engineer and his materials science colleague had to stay abreast of both an extensive scientific literature and a burgeoning body of practical experience with the preparation, performance analysis, and processing of useful new materials.
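
As a toy instance of that shift, a designed material's property can be computed from a model instead of read from a handbook table. The rule-of-mixtures formula below is standard first-year composites theory; the function name and the numbers are illustrative, not the article's:

    def composite_modulus(E_fiber, E_matrix, fiber_fraction):
        """Longitudinal stiffness of a unidirectional composite (Voigt bound),
        in the same units as the constituent moduli."""
        return fiber_fraction * E_fiber + (1.0 - fiber_fraction) * E_matrix

    # e.g., carbon fiber (~230 GPa) in an epoxy matrix (~3.5 GPa) at 60% fiber
    print(round(composite_modulus(230.0, 3.5, 0.60), 1), "GPa")  # 139.4 GPa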

As a result, doctoral degree programs in materials engineering and science expanded dramatically, combining the skills of the engineer (problem solving and sensitivity to production, quality, and cost factors) with the knowledge of the materials scientist (the uses of ceramic and polymeric substances, interfaces, and synthetic composites). Computer simulation models of hypothetical materials made it possible to predict the performance of a variety of new materials tailored to specific applications.

Complex Systems for Design and Production

But engineers were also mastering the systems integration of automated support for development, design, production, and testing. Computer-aided design went far beyond the creation of mechanical parts and the layout of integrated circuits. A design now emerged as the output of a computer system driven by functional and materials specifications. This had already happened for microelectronic logic designers back in the 1970s. But now all branches of engineering showed improvements in productivity that came not so much from lower costs as from shorter development cycles. The key was the ability of the system to pass design data directly to the automated production tooling in electronic form, leading to the automatic generation of testing programs.

The process specifications were coded into the CAD/CAM system; of course, given the necessity for safety margins, these specifications had to be somewhat more conservative than laboratory measurements might justify. Thus the process rules incorporated in the automated tooling became the equivalent of the design data in the old handbooks. But now design margins could be kept tight, because the materials and processes were so much more accurately characterized.

The introduction of automation into industrial processes raised the level of skill necessary for software, test, and production systems engineering. The mathematical and computer/communications requirements were now quite demanding, and major research efforts in systems engineering were being pursued at all the major engineering schools. Students at Cornell's School of Engineering occasionally stopped by the exhibit installed for the school's centenary in 1985 to stare at the slide rule and soldering iron on display there. How could so much progress have been made with such primitive tools, they wondered?

Software and Systems Science

The evolution of the economy toward services and away from heavy industrial processing of raw materials had gone quite far by 1986. The shift in the U.S. economic structure was well under way in the mid-1980s, and was reflected in the nearly disastrous trade wars at the end of the decade. But people soon realized that the largest part of U.S. export income came from knowledge and services, and improving the productivity of the service sector became a high economic priority.

The universities responded in a number of innovative ways. Economics became more pragmatic, and microeconomic research enjoyed a major revitalization. The theory of human organizations was making great strides, and one of the most popular research goals in 2006 was to understand how, using modern information networks, it was possible to sustain both a high degree of spontaneous individual creativity and a high degree of common purpose and efficiency in the use of resources.

Software Engineering

A new "clinical" degree in software engineering-a four-year postgraduate degree equivalent to the M.D. plus internship-was now the most popular professional program in the universities. It offered assured employment in the management of the information resources that are the lifeblood of any large organization. The degree holders had mastered digital telecommunications and computer engineering, application analysis and modeling, complex systems management, microeconomics, and business management, and had a good understanding of the human beings who are the ultimate users of these systems.

Although the Japanese had made deep inroads in the American market for microelectronic hardware in the 1990s, the new thrust in the United States was not matched elsewhere. Our economy began to be based on the combination of American strengths in research, organization, and respect for individual creativity.

Publish or Perish

By 2006, the life of the typical academic scientist had not changed all that much, except in intellectual and technological terms. There was the usual number of academic committees trying to practice decision by consensus, although they mostly met in computer-video conferences (making it easier to work or sleep during the more tedious moments). Students still had to be taught, and research accomplishments evaluated for promotions. But "publish or perish" now seemed more than ever an outdated caricature of how academic merit should be measured.

First of all, it was far less clear what "publish" meant, since the research output lived on the network and was incorporated in shared data collections. Its methodology was recorded on magneto-optic disks, embodied in video imagery, and even enshrined in the solutions of social problems professionally and effectively addressed. All these options made research much more interesting and accessible. But combined with the reunification of many of the disciplines, they made the task of academic administration much more difficult. While it was easier to find scholars outside the university to evaluate the work of a scientist being considered for tenure, it was virtually impossible to find any way to count the items of output to his or her credit.

Tenure and promotion committees, therefore, were forced to consider the substance of the case, not its form. But the tensions of the 1980s on the subject of employment were lessened by the resurgence of population pressure on the campuses. The babies of the 1980s had arrived.

Plus Ça Change

With all the change that had overtaken science in twenty years, the really important things never changed: the excitement of an unexpected discovery, the joy of a student who has made an intellectual leap beyond the work of the professor, the beauty of a three-dimensional color image of a virus, created out of theory and x-ray diffraction patterns, shimmering on a large graphics tube like a strange and wondrous bird on first discovery. In 2006 God still loved the noise as much as the signal. Man was still aware that with every step forward in science two delicious new questions, crying out for study, were born.

 
