Ready or Not
PREDICTING THE UNPREDICTABLE: The Tumultuous Science of Earthquake Prediction. Susan Hough. viii + 261 pp. Princeton University Press, 2010. $24.95.
Earthquake prediction is, in an important sense, a solved problem. Earthquakes are vastly more common in certain parts of the world than others, and they occur at a reasonably steady statistical frequency in a given location. We even know why this is so. Earthquakes are most frequent in those parts of the world where the tectonic plates run up against each other and try to move past each other. Where the plates meet, we get fault lines. When the material on one side of the fault sticks to that on the other, strain builds up and gets released in sudden movements: earthquakes. So the baseline prediction is that earthquakes occur near faults, with frequencies about equal to their historical frequencies, because the mechanics of tension and relaxation change very slowly.
This lets us say things like “Once in about every 140 years, the Hayward fault in northern California has a quake of magnitude 7.0 or greater.” But some people, including some seismologists, are not content with this level of understanding and these actuarial “forecasts”; they want to be able to make highly accurate predictions—to be able to say precisely when and where an earthquake will occur and what its impact will be (“magnitude 7.1, directly beneath the stadium at the University of California, Berkeley, the day after the Big Game with Stanford in 2010”). As recently as the 1970s, this goal seemed feasible to professionals and the U.S. government, but now most geologists believe that it is extremely unlikely ever to be accomplished. In Predicting the Unpredictable, Susan Hough tries to explain both the initial enthusiasm for precise predictions and how and why that enthusiasm dissipated.
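It is worth seeing how little machinery such an actuarial forecast requires. Here is a minimal sketch, assuming quakes on a given fault arrive as a homogeneous Poisson process at the historical rate (an idealization; the function name and figures are illustrative, with the 140-year recurrence taken from the forecast quoted above):

```python
import math

def quake_probability(mean_recurrence_years, horizon_years):
    """Chance of at least one qualifying quake within the horizon,
    assuming arrivals follow a homogeneous Poisson process."""
    rate = 1.0 / mean_recurrence_years          # quakes per year
    return 1.0 - math.exp(-rate * horizon_years)

# Hayward fault: one M >= 7.0 quake per ~140 years on average.
# Probability of at least one such quake in the next 30 years:
print(round(quake_probability(140, 30), 3))  # -> 0.193
```

Nothing here says when the quake will come, only how likely one is over a stated horizon; that is the whole content of a forecast of this kind.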
The enthusiasm came at the end of the plate tectonics revolution, which gave us our current understanding of, among many other things, earthquakes. After millennia of speculation and superstition, we finally knew why the earth shakes and why earthquakes happen where they do. It really didn’t seem too much to hope that this triumph of science would soon extend to knowing when they would happen. Moreover, the authorities in the People’s Republic of China had apparently been able to predict the magnitude 7.3 earthquake that occurred in Haicheng in northeastern China in 1975. (The real story of the Haicheng prediction, as Hough explains in chapter 6, is far murkier; as one of her sources puts it, “the prediction . . . was a blend of confusion, empirical analysis, intuitive judgment, and good luck.” But the details were deliberately kept from the rest of the world for many years.) Eminent geologists saw earthquake prediction as a reasonable scientific aim, and by the end of the 1970s, they managed to get it inscribed into U.S. policy, along with hazard reduction. They also established an official body for evaluating earthquake predictions.
Chapters 9 through 13 are mostly about various prediction efforts since that time, ranging from the serious to the crackpot. None of these efforts has been really successful, although Hough is careful to say that some of them are only ambiguously failures. Evaluating the success of the predictors is harder than it first seems, because earthquakes are not just concentrated around plate boundaries at characteristic, though irregular, intervals; they are also clustered in space and especially in time. Earthquakes tend to happen near where other earthquakes have happened recently.
This clustering invalidates what has been a common method of evaluating earthquake predictions, which is to assess how well the predictions match the actual record of quakes and then compare that with how well the same predictions match a simulated record in which earthquakes occur at random on each fault at the historical rate (technically, according to a homogeneous Poisson process). Matching the real data better than the simulated data is supposed to be evidence of predictive ability.
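The evaluation scheme just described is easy to sketch in code. What follows is a toy illustration, not anyone's actual test: invented event times stand in for a real catalog, a single simulated Poisson record stands in for the ensemble of surrogates, and the prediction rule (a window opening shortly after each quake) is made up for the demonstration.

```python
import random

def poisson_times(rate, t_max, rng):
    """Event times from a homogeneous Poisson process on [0, t_max]."""
    t, times = 0.0, []
    while True:
        t += rng.expovariate(rate)
        if t > t_max:
            return times
        times.append(t)

def hit_rate(windows, quake_times):
    """Fraction of predicted (start, end) windows containing a quake."""
    hits = sum(any(s <= q <= e for q in quake_times) for s, e in windows)
    return hits / len(windows)

rng = random.Random(0)
real = [10.0, 10.5, 11.2, 40.0, 40.3, 75.0]   # clustered, aftershock-style
surrogate = poisson_times(len(real) / 100.0, 100.0, rng)

# The rule under test: after each quake, predict another one soon after.
windows = [(q + 0.1, q + 2.0) for q in real]

# The (flawed) test: the rule "works" because it matches the clustered
# real record better than the memoryless Poisson surrogate.
print(hit_rate(windows, real), hit_rate(windows, surrogate))
```

The rule scores well against the real record simply because real quakes cluster, and poorly against the surrogate because the surrogate, by construction, has no clustering at all; the comparison rewards knowing that clustering exists, not any genuine foresight.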
To see the flaw here, think of trying to predict where and when lightning will strike. We know that lightning strikes, like earthquakes, are clustered in both space and time, because they occur during thunderstorms. So a basic prediction rule might state that “within 10 minutes of the last lightning strike, there will be another strike within 5 kilometers of it.” If we used this rule to make predictions and then were evaluated by the method described in the preceding paragraph, we would look like wizards. If we made predictions only after the lightning had already begun, we’d look even better. This is not just an idle analogy; the statisticians Brad Luen and P. B. Stark have recently shown that, according to such tests, the following rule seems to have astonishing predictive power: “When an earthquake of magnitude 5.5 or greater occurs anywhere in the world, predict that an earthquake at least as large will occur within 21 days and within an epicentral distance of 50 km.” Earthquake prediction schemes that do no better than this baseline predictor have little value, they observe. And no prediction method yet devised does do any better than that.
Of course it’s possible that there is some good way of making detailed predictions, which we just haven’t found yet. To continue the lightning-strike analogy, we’ve learned a lot about how thunderstorms form and move; we can track them and extrapolate where they will go. Perhaps earthquakes are preceded by similar signals and patterns that are, as the saying goes, patiently waiting for our wits to grow sharper. But it’s equally possible that any predictive pattern specific enough to be useful would involve so many high-precision measurements of so much of the Earth’s crust that it could never be used in practice.
Suppose, however, that that’s not true; suppose we are someday able to make predictions like the one above about Berkeley’s stadium. We could perhaps evacuate Berkeley and its environs, but every building, power line and sewer pipe there would still go through the quake. It would be a major catastrophe if they all went to pieces, even if no loss of life occurred. If we insist on living in places like Berkeley, where we know there will continue to be earthquakes, why not work on hazard reduction—on building cities that can survive quakes and protect us during them—rather than on quake prediction? As Hough puts it:
If earthquake science could perfect the art of forecasts on a fifty-year scale, we would know what structures and infrastructure would be up against. For the purposes of building a resilient society, earthquake prediction is largely beside the point. Whether the next Big One strikes next Tuesday at 4:00 p.m. or fifty years from now, the houses we live in, the buildings we work in, the freeways we drive on—all of these will be safe when the earth starts to shake, or they won’t be.
One might almost say that the real problem isn’t predicting when the earth will shake, it’s organizing society so that it’s not a catastrophe when that happens.
In the end, whether through hope, caution or diplomacy, Hough declines to dismiss the prospect of prediction altogether. The current state of a lot of the science she reports on is frustratingly inconclusive. Hough’s book, however, is not frustrating at all; it offers an enlightening, fair and insightful look at how one science has dealt with the intersection of an extremely hard problem with legitimate public demands for results. Those of us in other fields who read it may find ourselves profiting from the example someday.
Cosma Shalizi is an assistant professor in the statistics department at Carnegie Mellon University and an external professor at the Santa Fe Institute. He is writing a book on the statistical analysis of complex systems models. His blog, Three-Toed Sloth, can be found at http://bactra.org/weblog/.