COMPUTING SCIENCE

# The Weatherman

## The Orchestra of Slide-Rules

For his reconstruction of the Richardson experiment, Lynch went back to the original weather charts for May 20, 1910, and redid the interpolation process. He found only one likely error in Richardson's initial data—a suspiciously low pressure over Strasbourg. Data values elsewhere were in good agreement.

Although Lynch set out to create a faithful reproduction of Richardson's model, he did not do the arithmetic with pencil and paper; the model was implemented as a computer program. But even though computing power was not a limiting resource, Lynch decided to simplify the model in some respects. A curiosity of Richardson's work is that he included certain physical and even biological phenomena that now seem marginal. He was a shrewd numerical analyst, and he must have been able to estimate the importance of each term in his equations; nevertheless, he invested much effort in modeling factors that could not possibly have much effect on the overall outcome. For example, he calculated the temperature of the soil at various levels and the effects of vegetation on moisture in the atmosphere. These influences might be barely detectable in a high-precision weather model, but it was premature to build them into this first crude experiment. Lynch neglects these factors, and indeed drops all consideration of water in the atmosphere.

The results of Lynch's replica computation are quite close to those of the original. The predicted rise in surface pressure was 145.1 millibars according to Richardson and 145.4 according to Lynch. Predicted winds do not match quite as closely, but the average discrepancy is only about 13 percent.

Having reproduced Richardson's faulty calculation, Lynch then went on to try correcting it. The problem of initial observations that are not in harmony persists in weather prediction today, but digital filtering techniques have been devised to reconcile the wind and pressure fields. Lynch applied one of these filtering methods, which essentially runs the model both forward and backward from the starting point to generate a consistent set of values. With the filtered data, and with a smaller Δ*t* to ensure numerical stability, all results were physically plausible and in reasonably good agreement with observations. Richardson came *that* close to getting it right.
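The forward-and-backward filtering idea can be sketched in miniature. The snippet below is a toy illustration of the principle, not Lynch's actual scheme: the model trajectory, the cutoff frequency, and the function names are all invented for the example. A "model state" is sampled at times on both sides of the analysis time (standing in for backward and forward integrations), and a low-pass digital filter recovers the slowly varying meteorological signal while suppressing the spurious fast oscillation that corresponds to gravity-wave noise in the unbalanced initial data.

```python
import math

def trajectory(t):
    """Stand-in for a model integration through the analysis time t = 0:
    a slow meteorological signal plus a fast spurious oscillation."""
    slow = math.cos(0.05 * t)        # signal to keep (low frequency)
    fast = 0.5 * math.cos(2.0 * t)   # noise to remove (high frequency)
    return slow + fast

def filtered_initial_value(span=50, dt=1.0, cutoff=0.5):
    """Low-pass estimate of the state at t = 0 from samples on [-span, span].

    Weights are a sinc low-pass kernel with the given cutoff frequency,
    tapered by a Lanczos sigma window to damp ringing.
    """
    n = int(span / dt)
    total_w = 0.0
    total = 0.0
    for k in range(-n, n + 1):
        t = k * dt
        if k == 0:
            w = cutoff / math.pi
        else:
            w = math.sin(cutoff * t) / (math.pi * t)
            w *= math.sin(math.pi * k / n) / (math.pi * k / n)  # sigma taper
        total_w += w
        total += w * trajectory(t)
    return total / total_w  # normalize so a constant passes through unchanged

raw = trajectory(0.0)               # 1.5: slow (1.0) plus fast (0.5) component
filtered = filtered_initial_value() # close to 1.0: fast component removed
```

The filtered value at the analysis time is then used as the starting point for the forecast proper, so the integration begins from data in which wind and pressure are mutually consistent.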

Richardson ended his book with a daydream about the future of numerical weather prediction. He estimated that it would take 64,000 computers (and by "computers" he meant people) to keep up with all the world's weather. The work might be done in a great spherical hall. "The walls of this chamber are painted to form a map of the globe. The ceiling represents the north polar regions, England is in the gallery, the tropics in the upper circle, Australia on the dress circle and the antarctic in the pit. A myriad of computers are at work upon the weather of the part of the map where each sits.... From the floor of the pit a tall pillar rises to half the height of the hall. It carries a large pulpit on its top. In this sits the man in charge of the whole theatre.... One of his duties is to maintain a uniform speed of progress in all parts of the globe. In this respect he is like the conductor of an orchestra in which the instruments are slide-rules and calculating machines. But instead of waving a baton he turns a beam of rosy light upon any region that is running ahead of the rest, and a beam of blue light upon those who are behindhand."

Lynch and others have pointed out that the estimate of 64,000 computers was a serious undercount. Even by Richardson's own criteria, the number probably should have been 200,000, and a modern estimate would be much larger still. Indeed, if we were to try to do by hand labor all the computing that is nowadays dedicated to weather prediction, the entire human population could not keep up. Thus Richardson's orchestra of slide-rules was never a realistic possibility. Practical forecasting by numerical methods could not have begun much sooner than it did, with the work of Jule Charney and John von Neumann around 1950.

Nevertheless, Lynch concludes his article with a wistful consideration of what might have been, if Richardson's early forecast had not gone awry. "Let us suppose that Richardson had applied some filter, however crude, to his initial data. His results might well have been realistic, and his method would surely have been given the attention which it certainly deserved." I would not disagree, and yet at the same time I find that what is most interesting about the forecast is its failure, and what is most admirable about Richardson is his determination to publish it anyway. The failure of the experiment even made it worth repeating.

© Brian Hayes
