Statisticians can reuse their data to quantify the uncertainty of complex models
Smoothing Things Out
Bootstrapping has ramified tremendously since Efron’s original paper, and I have sketched only the crudest features. Nothing I’ve done here actually proves that it works, although I hope I’ve made that conclusion plausible. And indeed sometimes the bootstrap fails; it gives very poor answers, for instance, to questions about estimating the maximum (or minimum) of a distribution. Understanding the difference between that case and that of a quantile like q0.01 turns out to involve rather subtle math. Parameters are functions of the distribution generating the data, and estimates are functions of the data or of the empirical distribution. For the bootstrap to work, the empirical distribution has to converge rapidly on the true distribution, and the parameter must depend smoothly on the distribution, so that no outlier ends up unduly influencing the estimates. Making “influence” precise here turns out to mean taking derivatives in infinite-dimensional spaces of probability distribution functions, and the theory of the bootstrap is a delicate combination of functional analysis with probability theory. This sort of theory is essential to developing new bootstrap methods for new problems, such as ongoing work on resampling spatial data, or model-based bootstraps where the model grows in complexity with the data.
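The failure for the maximum is easy to see for yourself. The following sketch (my illustration, not part of the article; the sample size and distribution are arbitrary choices) bootstraps the maximum of a sample from a uniform distribution. Because any resample that happens to include the largest observation reproduces the sample maximum exactly, and that happens with probability 1 − (1 − 1/n)^n ≈ 0.632, the bootstrap distribution piles a large lump of mass on a single point, while the true sampling distribution of the maximum is continuous:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 100
data = rng.uniform(0.0, 1.0, size=n)  # sample from Uniform(0, 1)
sample_max = data.max()

# Bootstrap the maximum: resample with replacement, record each replicate's max.
B = 2000
boot_maxes = np.array([rng.choice(data, size=n, replace=True).max()
                       for _ in range(B)])

# A large fraction of bootstrap replicates equal the sample maximum exactly,
# since each resample includes the largest observation with probability
# 1 - (1 - 1/n)^n, which is about 0.632 for large n.
frac_at_max = np.mean(boot_maxes == sample_max)
print(f"fraction of bootstrap maxima equal to the sample max: {frac_at_max:.3f}")
```

Running this prints a fraction near 0.63: no matter how many bootstrap replicates you draw, the bootstrap never "sees" past the largest value in the data, so it badly misrepresents the uncertainty of the estimate. A quantile like q0.01, by contrast, depends smoothly on the interior of the distribution, which is why the bootstrap handles it well.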
The bootstrap has earned its place in the statistician’s toolkit because, of all the ways of handling uncertainty in complex models, it is at once the most straightforward and the most flexible. It will not lose that place so long as the era of big data and fast calculation endures.
- Efron, B. 1979. Bootstrap methods: Another look at the jackknife. Annals of Statistics 7:1–26.