Bugs That Count

Brian Hayes

Missing a Beat

The synchronization of cicada emergence is impressive, but not perfect. There are always at least a few clueless unfortunates who turn up a year early or a year late. Four-year accelerations and retardations are also common. Evidently, the year-counting mechanism can go awry. How much error can the system tolerate before synchronization is lost entirely?

Several authors have proposed that Magicicada periodicity evolved during the Pleistocene epoch, as a response to the unfavorable and uncertain climate of glacial intervals. Conditions have changed dramatically since the glaciers retreated, and so it seems unlikely that the same selective pressures are still working to maintain synchronization. What does maintain it? Before considering more complicated hypotheses, it seems worthwhile to ask whether periodicity could have survived as a mere vestigial carryover, without any particular adaptive value in the current environment. If the timekeeping device is never reset, how accurately would it have to work to maintain synchronization over the 10,000 years or so since the end of the Pleistocene?

The answer depends in part on what kinds of errors can disrupt the counting. The simplest model allows individual cicadas to make independent errors. Each year, each cicada has some small likelihood of either failing to note the passage of the year or interpolating a spurious extra year. Under this model, the error rate needs to be kept below 1 in 10,000.
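The independent-error model is easy to simulate. Below is a minimal Monte Carlo sketch; the even split between skipped and spurious years, and the particular rates tried, are assumptions for illustration, not figures from the column:

```python
import random

def drift_after(years, eps, rng):
    """Net counting error of one cicada: each year it may miss a tick (-1)
    or register a spurious extra tick (+1), each with probability eps / 2."""
    drift = 0
    for _ in range(years):
        r = rng.random()
        if r < eps / 2:
            drift -= 1
        elif r < eps:
            drift += 1
    return drift

def fraction_on_schedule(n_cicadas, years, eps, seed=1):
    """Fraction of a brood whose count is still exactly right after `years`."""
    rng = random.Random(seed)
    on_time = sum(drift_after(years, eps, rng) == 0 for _ in range(n_cicadas))
    return on_time / n_cicadas

# At an error rate of 1 in 10,000 per year, each cicada expects about one
# error over 10,000 years; a tenfold more accurate clock keeps most of the
# brood in step.
print(fraction_on_schedule(500, 10_000, 1e-4))
print(fraction_on_schedule(500, 10_000, 1e-5))
```

Because the errors are independent, each cicada drifts on its own, and the brood simply smears out around the modal emergence year.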

The weakness of this model is the assumption that cicadas would make independent errors. If all the cicadas are trying to read the same chemical signal in the tree sap, errors could be strongly correlated. In a bad year with a short growing season, the signal might never reach the threshold of detection for many individuals. A double oscillation is also a possibility, for example if the trees are defoliated by predators and then put out a second growth of leaves.

An error model that allows for such correlations works like this: A cicada's probability of correctly recording the passage of a year depends on the strength of the xylem signal, which varies randomly from year to year but is the same for all the cicadas. If the signal is very strong, almost everyone detects it correctly. If the signal is extremely feeble, nearly all miss it. Although this latter event must be counted as a timekeeping error, it does not break synchronization; instead it retards the entire population by a year. What spoils synchronization is an ambiguous signal, one in the gray area where half the cicadas detect it and the other half don't. This splits the population into two groups, which will mature and emerge a year apart. Four or five such splittings over 10,000 years would be enough to wipe out synchronization.
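The correlated model can also be sketched as a simulation. Everything here is an assumed parameterization — a uniformly distributed signal strength, a logistic response curve with a threshold at 0.5, and an arbitrary 5-to-95-percent definition of a "split":

```python
import math
import random

def detection_prob(signal, steepness):
    """Logistic response curve: probability that a cicada registers the
    year's tick, given the shared signal strength (threshold at 0.5)."""
    return 1.0 / (1.0 + math.exp(-steepness * (signal - 0.5)))

def count_splits(years, n_cicadas, steepness, seed=1):
    """Count the years in which the brood splits: the shared signal lands
    in the gray area, so a substantial minority reads it differently."""
    rng = random.Random(seed)
    splits = 0
    for _ in range(years):
        p = detection_prob(rng.random(), steepness)  # same signal for all
        detected = sum(rng.random() < p for _ in range(n_cicadas))
        if 0.05 * n_cicadas < detected < 0.95 * n_cicadas:
            splits += 1
    return splits

# A soft response curve splits the population many times over 10,000
# years; a much steeper one splits it only rarely.
print(count_splits(10_000, 200, steepness=10.0))
print(count_splits(10_000, 200, steepness=400.0))
```

Note the key difference from the independent-error model: a feeble signal that everyone misses merely delays the whole brood, so only the ambiguous years count against synchronization.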

A drawback of this error model is that it depends on two variables, which are hard to disentangle: the frequency of ambiguous signals in the xylem and the cicada's acuity in reading those signals. If the signal is usually near the extremes of its range, then even with a crude detector, the population will almost always reach a consensus. If ambiguity is common, then the insect's decision mechanism needs to be finely tuned. I have experimented with tree-ring data as a proxy for the distribution of xylem-signal amplitudes, but the results were not much different from those with a random distribution.

The cicadas' response to the signals is defined by an S-shaped curve. If the curve is infinitely steep—a step function—then the probability of registering a tick of the clock is exactly 0 up to some threshold and exactly 1 above the threshold. As the curve softens, the transitional region where probabilities are close to ½ gets broader.
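For one common choice of S-shaped curve, the logistic 1/(1 + exp(-k(s - ½))) (used here purely for illustration), the width of that transitional region can be written down directly:

```python
import math

def ambiguous_width(steepness, lo=0.1, hi=0.9):
    """Width of the signal range over which a logistic response curve
    1/(1 + exp(-k*(s - 0.5))) gives detection probabilities between lo
    and hi. As the steepness k grows, the curve approaches a step
    function and the width shrinks toward zero."""
    logit = lambda p: math.log(p / (1 - p))
    return (logit(hi) - logit(lo)) / steepness

print(ambiguous_width(10))     # broad gray zone
print(ambiguous_width(1000))   # nearly a step function
```

The width falls off as 1/k, so sharpening the detector tenfold narrows the gray zone tenfold.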

Running the simulation, it turns out that synchronization survives only if the response curve is very steep indeed, with a vanishingly narrow region of ambiguity. For ease of analysis, suppose we are merely trying to synchronize the clocks of two cicadas that each live for 10,000 years. To a first approximation, they remain in phase only if they agree on the interpretation of the signal every year throughout the 10,000-year interval. For a 90-percent chance of such uninterrupted agreement, the probability of agreement each year must be at least 0.99999.
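The arithmetic behind that last figure is a one-liner: solve p¹⁰⁰⁰⁰ = 0.9 for the per-year agreement probability p.

```python
# Two clocks agree for 10,000 consecutive years with probability p**10_000.
# For a 90-percent chance of uninterrupted agreement, solve p**10_000 = 0.9:
p = 0.9 ** (1 / 10_000)
print(p)  # just under 0.99999
```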

Is such accuracy plausible in a biological mechanism? Could periodicity really be a historical relic, without adaptive significance today? Probably not, but the models are too simplistic to support quantitative conclusions. Nevertheless, the idea of timekeeping errors introduced by ambiguities in environmental signals may well have a place in the biology of cicadas. Suppose there is a north-south gradient in signal amplitude; then somewhere along the gradient there must be a zone of ambiguity. Forty years ago, Richard D. Alexander and Thomas E. Moore of the University of Michigan, Ann Arbor, pointed out that broods tend to be arranged like shingles from north to south, with each brood emerging one year later than the one above. It's a pattern that might have been generated by successive population-splitting events like those in the model.
