
COMPUTING SCIENCE

Fat Tails

Sometimes the average is anything but average

Brian Hayes

By No Means

How different are the statistics of the factoidal process! When I first began playing with the n? function, I was curious about its average value, and so I did a quick computation with a small sample: 100 repetitions of 10?. The result that came back was much larger than I had expected, in the neighborhood of 10^25. When I repeated the computation several times, I continued to get enormous numbers, and furthermore they were scattered over a vast range, from less than 10^20 to well over 10^30. The obvious strategy was to try a larger sample in order to smooth out the fluctuations. But when I averaged factoidals 10,000 at a time, and then a million at a time, the numbers got even bigger, and the variations wider.
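A short sketch in Python (a reconstruction for illustration, not the code actually used for the column) makes the experiment concrete. It assumes the definition of n? given earlier in the article: multiply uniform random integers drawn between 1 and n, stopping as soon as a 1 comes up.

    import math
    import random

    def factoidal(n, rng=random):
        # One sample of n?: multiply uniform random integers from
        # 1 to n, stopping when a 1 is drawn.
        product = 1
        while True:
            r = rng.randint(1, n)
            if r == 1:
                return product
            product *= r

    # The quick computation described above: 100 repetitions of 10?.
    sample = [factoidal(10) for _ in range(100)]
    mean = sum(sample) // len(sample)   # exact integer arithmetic, no overflow
    print(f"mean of 100 draws of 10? is roughly 10^{math.log10(mean):.0f}")

Because the distribution is so skewed, the printed exponent bounces around from run to run, just as described above.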

[Figure: The arithmetic mean is undefined. Sample size (horizontal axis, log scale) versus computed arithmetic mean (vertical axis, log scale) for samples of 10?.]

The illustration shows what I was up against. Each dot represents a sample of runs of the n? program; a dot's horizontal position indicates the sample size, and its vertical position gives the arithmetic mean calculated from that sample. In all cases the value of n is 10. It's important to emphasize that this is not a graph of n? as a function of n; the value of n is fixed. All that changes in moving from left to right across the graph is the size of the sample over which the average is computed. There is no sign of convergence here. The trend is continuously upward: The more trials in the sample, the larger the calculated mean. And because both scales in the graph are logarithmic, the apparent straight-line trend in the mean actually represents exponential growth. The "average" value of 10? is somewhere near 10^40 or 10^50 if you average over 1,000 trials, but it rises to roughly 10^90 if you go on to collect a million samples. (For comparison, 10! is roughly 10^6; more precisely, 3,628,800.)

The dispersion of the dots around the trend line also shows no sign of diminishing as the sample size increases. Thus the variance or standard deviation of the data is also impossible to pin down.
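The runaway behavior in the figure is easy to reproduce. Here is a sketch that reuses the hypothetical factoidal sampler above, averaging ever-larger samples:

    import math
    import random

    rng = random.Random(1)                  # fixed seed for repeatability
    for size in (100, 1_000, 10_000, 100_000):
        total = sum(factoidal(10, rng) for _ in range(size))
        mean = total // size                # exact big-integer division
        print(f"{size:>7} trials: mean is roughly 10^{math.log10(mean):.0f}")

On a typical run the printed exponent climbs steadily with the sample size, echoing the upward trend of the dots in the figure.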

Odd, isn't it? Generally, if you are conducting an experiment, or making a measurement, or taking an opinion survey, you expect that collecting more data will yield greater accuracy and consistency. Here, more data just seems to make a bad situation worse.

With a closer look at the factoidal data, it's not hard to understand what's going wrong with the computation of the mean. Although the majority of 10? values are comparatively small (less than 3,628,800), every now and then the factoidal process generates an enormous product: a rogue, a monster. The larger the sample, the greater the chance that one of these outliers will be included. And they totally dominate the averaging process. If a sample of 1,000 values happens to include one with a magnitude of 10^100, then even if all the rest of the data points were zero, the average would still be 10^97.

The arithmetic mean is not the only tool available for characterizing what statisticians call the central tendency of a data set. There is also the geometric mean. For two numbers a and b, the geometric mean is defined as the square root of a × b; more generally, the geometric mean of k numbers is the kth root of their product. The geometric mean of samples taken from the factoidal process suffers from none of the problems encountered with the arithmetic mean. It converges, though somewhat slowly, to a stable value. Moreover, it turns out that the geometric mean of n? is simply n!, so this is a highly informative measure. Perhaps it should not be a surprise that factoidals are better described by a statistic based on multiplication than by one based on addition.
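Because the product of many factoidal values is itself astronomically large, any practical computation of the geometric mean goes through logarithms. A minimal sketch, again with the hypothetical sampler from above:

    import math
    import random

    def geometric_mean(values):
        # exp of the mean log: the kth root of the product, computed
        # without ever forming the enormous product itself
        return math.exp(sum(math.log(v) for v in values) / len(values))

    rng = random.Random(2)
    sample = [factoidal(10, rng) for _ in range(100_000)]
    print(geometric_mean(sample))           # settles near 10! = 3,628,800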

[Figure: The median of the factoidal process converges to a stable value as the sample size increases.]

The median of n? is also well-defined. The median is the midpoint value of a data set: the item that is greater than half the others and less than half. Because it merely counts the number of greater and lesser values, without considering their actual magnitudes, it is insensitive to the outliers that cause havoc with the arithmetic mean. For samples of 10? the median converges on a value near 27,000 (notably smaller than 10!).
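Estimating the median empirically is equally straightforward; a sketch using the same hypothetical sampler and Python's standard statistics module:

    import random
    import statistics

    rng = random.Random(3)
    sample = [factoidal(10, rng) for _ in range(100_000)]
    print(statistics.median(sample))        # stabilizes near 27,000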

Still another way to tame the factoidal is to take logarithms. If you determine the logarithm of each n? value, then calculate the arithmetic mean of the logarithms, the result converges very nicely. (Note that taking the mean of the logarithms is not the same as taking the logarithm of the mean.) The success of this strategy does not come as a surprise. Logarithms reduce multiplication to addition. Essentially, then, taking the logarithm of the n? values converts the factoidal process into the corresponding triangular-number calculation. Logarithms are also at work behind the scenes in computing the geometric mean.
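The distinction between the two orders of operation is easy to check numerically; a sketch, with the same hypothetical sampler:

    import math
    import random

    rng = random.Random(4)
    sample = [factoidal(10, rng) for _ in range(100_000)]

    mean_of_logs = sum(math.log10(v) for v in sample) / len(sample)
    log_of_mean = math.log10(sum(sample) // len(sample))

    print(mean_of_logs)    # converges near log10(10!), about 6.56
    print(log_of_mean)     # much larger, and unstable from sample to sample

The first quantity is just the logarithm of the geometric mean, which is why it converges; the second is the runaway arithmetic mean in disguise.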

Even with other statistical methods available, it's disconcerting to face the failure of something so familiar and elementary and ingrained as the arithmetic mean. It's like stumbling into an area of mathematics where Euclid's parallel postulate no longer applies, or the commutative law has been repealed. To be sure, such areas exist, and exploring them has enriched mathematics. Distributions without a mean or variance have likewise broadened the horizons of statistics. All the same, they take some getting used to.








