COMPUTING SCIENCE

# How to Count

## Cascading Errors

Peano counting has the wholesome virtues of simplicity and transparency. Even in the presence of errors, it gives reasonably predictable results. There's just one problem: It never seems to accomplish anything. You start out with a heap of ballots, and after you count them you have a sheaf of papers covered with tally marks. Now you have to count the tally marks. If you do so by the Peano process, you wind up with another set of tally marks, which need to be counted in turn.

To make the count comprehensible, you have to introduce some higher-level structure. One familiar approach is to arrange the tally marks in groups, making four parallel strokes and then a fifth cross-stroke. In a second stage of counting, you can tally the groups of five, producing a page that has only one-fifth as many marks (plus a few leftovers, typically). Repeating the process, you count by twenty-fives, then by one-hundred-twenty-fives, etc. A slight variation on this procedure will have you counting in Roman numerals. But any such hierarchical scheme changes the error model. Suppose you make a mistake not in the original tally but in one of the higher-level consolidations; then the count changes not by one unit but by some power of 5.
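The effect of a high-level miscount can be made concrete in a few lines of Python (my own illustration, not from the article): a count is recorded as a number of tally marks at each level of the base-5 hierarchy, and dropping a single mark at level *k* shifts the total by 5^k.

```python
def value(levels, base=5):
    """levels[k] = number of tally marks at level k, each worth base**k."""
    return sum(n * base**k for k, n in enumerate(levels))

correct = [3, 4, 2, 1]    # 3 + 4*5 + 2*25 + 1*125 = 198
miscount = [3, 4, 1, 1]   # one mark lost during the 25s consolidation

assert value(correct) == 198
assert value(correct) - value(miscount) == 25   # error is a power of 5
```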

Rather than analyze Roman enumeration in greater detail, let's proceed directly to counting in modern positional notation. Counting with tally marks is essentially a unary, or base-1, process. Adopting a base larger than 1 makes counting more efficient but also more treacherous.

Mechanical counters based on decimal notation are found in odometers and many other devices, including lever-actuated voting machines. In these mechanisms, input events drive a units wheel; each time this wheel completes a full turn, it jogs the tens wheel by a tenth of a revolution; then the tens wheel moves the hundreds wheel by the same amount, and so on.
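The odometer's carry mechanism can be sketched in software (a minimal model of my own devising, with the units wheel first in the list): each wheel advances one step, and only a wheel that completes a full turn, rolling over from 9 to 0, jogs its neighbor.

```python
def advance(digits):
    """One input event on a decimal odometer; digits[0] is the units wheel."""
    for i in range(len(digits)):
        digits[i] = (digits[i] + 1) % 10
        if digits[i] != 0:   # no full turn completed: the carry stops here
            break
    return digits

assert advance([9, 9, 0]) == [0, 0, 1]   # 099 rolls over to 100
```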

Electronic counters are generally binary rather than decimal. A *k*-bit binary counter can count from 0 up to 2^k – 1; for example, an eight-bit counter runs up to 255 (which is 11111111 in binary). The basic building block for a binary counter is a device called a flip-flop, which has a single input, a single output and two internal states, labeled 0 and 1. In state 0, when the flip-flop receives an input pulse, it flips to state 1 but does nothing else. In state 1, when an input pulse arrives, the machine flops back to state 0 and also emits an output pulse. A *k*-bit binary counter requires *k* flip-flops. They are wired together in a cascade arrangement, with the output of each device becoming the input to the next. Initially, all the flip-flops are in the 0 state. The first input event flips the least-significant bit to 1. The next pulse flops this bit back to 0, and the output of the first flip-flop changes the next bit to 1. As successive pulses arrive, the cascade of flip-flops counts through the sequence of binary numbers: 0, 1, 10, 11, 100, 101, 110, etc.
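The cascade is easy to simulate directly. In this Python sketch (the class and function names are my own, not part of the article), a flip-flop toggles on each pulse and reports a carry only when it flops from 1 back to 0:

```python
class FlipFlop:
    def __init__(self):
        self.state = 0

    def pulse(self):
        """Receive a pulse; return True if an output pulse is emitted."""
        self.state ^= 1
        return self.state == 0   # flopping 1 -> 0 emits a carry pulse

def count(counter):
    """Read the counter's value; counter[0] is the least-significant bit."""
    return sum(ff.state << i for i, ff in enumerate(counter))

def input_event(counter):
    for ff in counter:
        if not ff.pulse():       # no carry emitted: the cascade stops here
            break

counter = [FlipFlop() for _ in range(8)]
for _ in range(5):
    input_event(counter)
assert count(counter) == 5       # binary 101
```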

What kinds of errors could upset this process? For one thing, the hazard that plagues the unary counter is still present: A signal might fail to register at the input port, so that the state of the counter would remain unchanged. But now there are other weak points as well. Each of the signals between stages of the flip-flop cascade is subject to the same kind of failure. And if a pulse between stages is lost in transmission, the consequences can be more drastic than a mere failure of the counter to advance.

Consider a four-bit counter in the state 0111 (equivalent to the decimal number 7). On the next input event, the rightmost flip-flop should change state from 1 to 0, issuing an output pulse that causes the next flip-flop to make a similar transition; when the signals have rippled all the way through the cascade, the next state of the counter is 1000 (decimal 8). But suppose the input pulse is recognized successfully, and so are all the intermediate signals except for the very last one, which is lost in transmission. Then all the 1 bits in the original state flop back to 0, but the 0 bit fails to flip to 1. As a result of this single error, the counter is reset to 0000, and the entire history of the count is wiped out.
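The lost-carry failure can be reproduced in a small simulation (my own sketch; the `drop_at` parameter, marking which inter-stage link loses its pulse, is an assumption for illustration):

```python
def input_event(bits, drop_at=None):
    """One input event on a ripple counter; bits[0] is least significant.
    If drop_at == s, the carry pulse into stage s is lost in transmission."""
    bits = list(bits)
    stage = 0
    while stage < len(bits):
        if stage > 0 and stage == drop_at:
            break                 # carry pulse lost: this stage never flips
        bits[stage] ^= 1
        if bits[stage] == 1:      # a flip (0 -> 1) emits no carry
            break
        stage += 1                # a flop (1 -> 0) carries to the next stage
    return bits

state = [1, 1, 1, 0]              # 0111 = decimal 7, least-significant first
assert input_event(state) == [0, 0, 0, 1]             # normal: 1000 = 8
assert input_event(state, drop_at=3) == [0, 0, 0, 0]  # last carry lost: reset
```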

The statistics of such errors are clearly different from those of the Peano counter. It is still true that *R* is less than or equal to *N*, since there is no way for the recorded count to get ahead of the true count. But the sequence of *R* values is no longer guaranteed to be nondecreasing. Whenever the output of a flip-flop fails to propagate to the next stage, the count retreats; it is said to fall off a "Hamming cliff," named for the mathematician Richard W. Hamming. Moreover, the mean value of *R* is no longer equal to *pN*.

The trajectory of a cascade counter—the graph of *R* as a function of *N*—is typically a straight line ascending to the right with slope 1, interrupted at intervals by sudden falls from Hamming cliffs, after which the ascent resumes. Even with this erratic behavior, it seems possible that the mean value of *R*, averaged over many samples, would still be a simple function of *N*. But in fact the graph of the mean is neither a straight line nor a smooth curve; it has discontinuities near each power of 2. This nonlinearity makes estimating *N* from a knowledge of *R* even more challenging.
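A Monte Carlo experiment along these lines is easy to set up. In this sketch (my own; the article does not give code, and the parameter choices here are arbitrary), every pulse, whether at the input port or between stages, registers with probability *p*, so the counter can both stall and fall off Hamming cliffs:

```python
import random

def run(n_events, p, k=16, rng=random):
    """Feed n_events input pulses to a k-bit cascade counter in which each
    pulse (input or inter-stage carry) registers with probability p."""
    bits = [0] * k
    for _ in range(n_events):
        if rng.random() > p:
            continue                       # input pulse fails to register
        for i in range(k):
            bits[i] ^= 1
            if bits[i] == 1:
                break                      # a flip emits no carry
            if i + 1 < k and rng.random() > p:
                break                      # carry lost: a Hamming cliff
    return sum(b << i for i, b in enumerate(bits))

random.seed(1)
trials = [run(100, 0.99) for _ in range(200)]
mean_R = sum(trials) / len(trials)
# R never exceeds N, and the mean falls below p*N because of cliff losses
```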
