Models and Mathematics: Q&A with Erica Thompson

Scientists’ Nightstand


July-August 2023

Volume 111, Number 4
Page 251

DOI: 10.1511/2023.111.4.251

Erica Thompson’s December 2022 book, Escape from Model Land: How Mathematical Models Can Lead Us Astray and What We Can Do About It, explores the limits of mathematical models, and how we can use them in smarter, better ways. She is associate professor of modeling for decision making at University College London’s Department of Science, Technology, Engineering, and Public Policy. She’s also a fellow of the London Mathematical Laboratory and a visiting senior fellow at the London School of Economics’ Data Science Institute. Thompson spoke with book review editor Jaime Herndon.


Photo by C. Vernon

Models: That term can mean a lot of things, especially for someone outside of the field. What kinds of models do you work with?

Well, not the glamorous kind that you’d see on a catwalk! I’m interested in scientific and mathematical models, simulations, or representations. That includes very complex physical models like climate and weather models, but also much simpler models like the basic infection models in epidemiology and even conceptual nonquantitative models like “If the price of a product goes up, then fewer people will want to buy it.” Models are used to help us think about a subject, to make predictions of how a system might change over time, and to try to work through what the consequences would be if we intervened in some way. In science, we rely on models, but they are also increasingly important in policymaking, business decisions, and so on. The more data we have available, along with the computing power to collect and analyze that data, the more we tend to construct and use models.


You look at how we use models to make real-life, real-world decisions. Can you talk about why this is so important?

From the very brief examples I just gave, you can see the range of decision-making that is supported by models. The weather forecast on the phone in your pocket is produced by a very complex model, and you might use it for all sorts of everyday decisions; emergency service organizations probably use it for much more consequential decisions. Climate models are used to support decision-making about infrastructure investment, emissions policies, and international cooperation. Epidemiological models, from the very simple to the hugely complex, have supported public and private decision-making at all stages of the COVID-19 pandemic. And models are increasingly forming the bedrock of business analytics and forecasting, including the rapid use of new machine-learning techniques to create different kinds of models.

You coined the term the hawkmoth effect, which has to do with “the sensitivity to structural model formulation.” How does this relate to models and decision-making, and what does this mean for real-world decisions? What are some examples of the hawkmoth effect in action?

Most people have heard of the butterfly effect, which is the idea that complex systems can be very sensitive to changes in the initial condition (“the flap of a butterfly’s wings in Brazil can trigger a tornado in Texas”). From the point of view of forecasting, this means that if we get the initial conditions slightly wrong, then the forecast could become unreliable on some timescale. The hawkmoth effect is a similar idea, but relating to the accuracy of the model rather than the data. If the model is slightly “wrong,” even by just a tiny amount, then the forecast it makes could be significantly wrong if you are predicting far enough into the future.

Of course, the problem is that just as all data are subject to uncertainties (measurement errors), so too “all models are wrong,” in the sense that we can never really know that we have fully represented absolutely everything that is important about the system. In some cases, like the weather forecast, we have lots of past evidence from successful predictions, which gives us confidence: We know that tomorrow’s weather forecast is pretty good, next week’s is indicative but not all that reliable, and we wouldn’t even look at the forecast six months in advance because we know, based on past evidence, that it wouldn’t be of any use. So for the weather forecast we have a good idea of when it is useful and when it isn’t, but we don’t necessarily have such a clear idea of the limitations of other models and other kinds of predictions about the future.

In what ways are models helpful, and in what ways have models been not so helpful, or even counterproductive, in real life? What can we learn from these examples?

Models are incredibly helpful. They can send us to the Moon, and they power the modern world, from your electricity supply to your social media feed. I’d go further and say that they are very much part of the way that we think, both at the mathematical end of institutional and organizational decision-making (weather models, economic models, pandemic models, business models) and at the conceptual level. I argue in my book that models are primarily useful as metaphors and aids to thinking, rather than prediction engines, and so you might think of the “flatten the curve” concept that was so important in the spring of 2020, or more qualitative concepts like the idea that the national budget either is or is not like a household budget. In that light, I hope it becomes clearer how models also change the way that we think: If you have a particular model (mathematical or conceptual) for something in the real world, then you use that model as a tool for understanding and as a tool for communicating with other people. It can be illuminating by helping us to think in new ways, but the model and the limits of the metaphor also constrain the ways that we are able to think about the system.

To take an example, if you construct a simple epidemiological model for a disease outbreak, you are focusing (probably rightly) on the infection and its consequences, and you might choose to model people as statistically representative populations or as interacting individuals with different characteristics, and so on. You might have schools and hospitals and prisons in your model, or you might not. Now if a politician comes and asks you what can be done about the outbreak, you will frame your advice in terms of the information you have from your model and the kinds of interventions that can be represented in the model. This is fine, and completely reasonable, as long as the decision-maker also has access to other sources of information about the impacts of different policies. Epidemiology is a very politicized example, which perhaps makes the shortcomings of a model like that more obvious. The models from the COVID-19 pandemic weren’t counterproductive; they were able to contribute to decision-making. But what we can learn from the pandemic is that for highly contested and complex decisions, we need a more diverse range of models, and more effective ways to think about the insights gained from models.
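The kind of simple epidemiological model described here can be sketched in a few lines. Below is a minimal SIR (susceptible–infected–recovered) compartmental model of my own construction, not one from the interview, with entirely illustrative parameter values; the intervention is represented as a halved contact rate, the sort of “flatten the curve” question such models are used to explore.

```python
# Minimal SIR (susceptible-infected-recovered) epidemic model, Euler-stepped.
# All parameter values are illustrative assumptions, not fitted to any disease.
def sir_peak_infected(beta, gamma=0.1, s0=0.99, i0=0.01, days=300, dt=0.1):
    """Return the peak infected fraction for contact rate beta."""
    s, i, r = s0, i0, 0.0
    peak = i
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt   # new infections this step
        new_rec = gamma * i * dt      # recoveries this step
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
    return peak

baseline = sir_peak_infected(beta=0.3)     # no intervention
distanced = sir_peak_infected(beta=0.15)   # contact rate halved
print(f"peak infected: baseline {baseline:.2f}, with intervention {distanced:.2f}")
```

Halving the contact rate sharply lowers the peak infected fraction. Note how the point in the interview shows up in the code itself: the only intervention this model can "see" is a change in beta, so any advice framed through it will be phrased in those terms.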

What do you think is the most important thing we should keep in mind for working with models in the future, and where do you see your work going?

The most important thing to keep in mind when working with models is that any model can only give us one perspective. It can’t tell the whole story. A photograph is great to show you what someone looks like, but it doesn’t tell you their political opinions or what they want to have for lunch. When we make and use models, we need to keep in mind what they are good at and what they aren’t good at.

My own work has two strands, a mathematical strand about the statistics of model calibration and evaluation, and a more sociopolitical strand about the value judgements that are embedded in models—both illustrated through case studies of different kinds of models, from public health to finance and climate change. I’m working on bringing these two strands closer together, to learn from each other and hopefully to improve the usefulness of the modeling methods that are so important for today’s decision-making.
