

How Many Ways Can You Spell V1@gra?

Spam mutates, and the Internet community mounts an immune response

Brian Hayes

Spams and Hams

The techniques used to combat spam are much better known than the methods of spam senders, for the simple reason that authors of spam-fighting tools publish in the open literature. The antispam measures are quite diverse (which is surely a virtue). There are blacklists and whitelists and even graylists, and various schemes to authenticate senders. There have also been many proposals for legal or economic remedies, or changes to network protocols. But the most widespread tactic for thwarting spam is filtering—analyzing the content of individual messages in an effort to distinguish unwanted from wanted messages. Insiders call them spams and hams.

When procmail rule sets became too unwieldy, attention turned to filters based on ideas from computational learning theory. Underlying this approach is the insight that people can tell at a glance whether a message is spam or ham, but they cannot always articulate the reasons behind their judgment or give clear criteria that can be applied to future messages. Thus an efficient division of labor is to let the human reader serve as final arbiter of what is spam and what is ham, while letting the program discern which features of messages are most useful in defining the two categories. Interestingly, the features identified by the software are often ones that human readers don't notice at all.

Initially, a filter program can be trained on a corpus of pre-classified messages. Thereafter, training continues as the human user corrects any errors of classification.

The first experiments with trainable spam filters were reported in the late 1990s. Patrick Pantel and Dekang Lin of the University of Manitoba built a program called SpamCop, and another project based on similar principles was undertaken by Mehran Sahami of Stanford University and Susan Dumais, David Heckerman and Eric Horvitz of Microsoft Research. The idea attracted a lot more attention four years later when it was rediscovered by Paul Graham, an independent writer and programmer, who issued a manifesto titled "A Plan for Spam." Since then, dozens of commercial products and open-source programs have come into use.

The buzzword associated with these projects is Bayesian, after the 18th-century cleric and mathematician Thomas Bayes; in some cases it is further qualified as naive Bayesian. The nature of the e-mail is inferred from a statistical weighing of various features. In this context a feature might be anything from the presence of the word "Viagra" to the number of exclamation points in a message. Given a set of preclassified messages, it is straightforward to tabulate the frequency of each such feature in each class and thus to calculate the probability that a feature will appear in a spam or ham message. Bayes's theorem offers a mathematical armature for making the inverse calculation: Given the presence or absence of certain features in a message, determine the probability that the message is either spam or ham. The naive variant makes the simplifying assumption that all the features are independent.
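The Bayesian inversion described above can be sketched in a few lines of Python. The counts below are invented for illustration, and the function assumes equal prior probabilities for spam and ham (the simplification Graham also makes):

```python
# A minimal sketch of the naive Bayesian calculation, assuming equal priors.
# All counts here are hypothetical.

def feature_spam_probability(spam_count, ham_count, n_spam, n_ham):
    """P(spam | feature) via Bayes's theorem, assuming P(spam) = P(ham)."""
    p_feat_given_spam = spam_count / n_spam   # how often spams show the feature
    p_feat_given_ham = ham_count / n_ham      # how often hams show the feature
    if p_feat_given_spam + p_feat_given_ham == 0:
        return 0.5  # feature never seen in training; stay neutral
    return p_feat_given_spam / (p_feat_given_spam + p_feat_given_ham)

# Suppose "viagra" appears in 80 of 100 spams but only 1 of 100 hams:
print(feature_spam_probability(80, 1, 100, 100))  # ~0.988
```

Under the naive independence assumption, a whole message's score is built by repeating this calculation for each feature and combining the results.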

A filtering program starts by breaking a message down into a sequence of tokens, which could be words or numbers or perhaps special entities such as components of e-mail addresses. During the training phase, each token is entered into a database and assigned a "spamminess" score. In the simplest case the spamminess could just be the token's frequency in spam messages divided by its frequency in all messages, but in practice various minor adjustments are made.
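The tokenizing and training steps might look like the following sketch. The tokenizer rule and the two-message corpora are invented for illustration, and the spamminess score is the simplest version mentioned above (spam frequency divided by overall frequency), without the practical adjustments:

```python
import re
from collections import Counter

def tokenize(text):
    """Split a message into word-like tokens (a deliberately simple rule)."""
    return re.findall(r"[\w$@!'-]+", text.lower())

def train(spams, hams):
    """Build a token -> spamminess table from pre-classified messages."""
    spam_counts = Counter(t for msg in spams for t in tokenize(msg))
    ham_counts = Counter(t for msg in hams for t in tokenize(msg))
    scores = {}
    for token in spam_counts | ham_counts:
        s, h = spam_counts[token], ham_counts[token]
        # spamminess = occurrences in spam / occurrences in all messages
        scores[token] = s / (s + h)
    return scores

scores = train(["buy v1agra now", "cheap v1agra"],
               ["lunch at noon", "buy milk"])
print(scores["v1agra"])  # 1.0 -- seen only in spam
print(scores["buy"])     # 0.5 -- equally common in both classes
```

In a real filter the table would persist across sessions and be updated as the user corrects misclassifications.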

When the filter is applied to a newly arrived e-mail, each token from the message is looked up in the database. The spamminess scores of these tokens make a prediction about the status of the new message. But the predictions may be contradictory. How do you choose which ones to believe, and how do you combine them to reach a decision? Some of the early filtering programs included information from all the tokens, but later authors found that accuracy improved when the program considered only those tokens with the most extreme spamminess scores (close to 0 or close to 1). As for how to combine the values, Graham adopted the formula

p = ab / (ab + (1 − a)(1 − b))

where a and b are the spamminess predictions of two tokens. (The formula can readily be extended to an arbitrary number of tokens.) This is not quite the formula suggested by the Reverend Bayes; Graham neglects to adjust for differences in the overall probability of spam and ham. Nevertheless, the filter performs remarkably well.
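Graham's combining rule extends naturally to any number of tokens: multiply the spamminess scores together, multiply their complements together, and divide the first product by the sum of the two. A short sketch:

```python
def combine(probs):
    """Combine per-token spamminess scores with Graham's rule,
    extended from two tokens to arbitrarily many."""
    prod = 1.0      # product of the scores: a * b * ...
    inv_prod = 1.0  # product of the complements: (1 - a) * (1 - b) * ...
    for p in probs:
        prod *= p
        inv_prod *= (1.0 - p)
    return prod / (prod + inv_prod)

# Two moderately spammy tokens reinforce each other:
print(combine([0.9, 0.9]))  # 0.81 / (0.81 + 0.01) ~= 0.988
# A spammy and a hammy token cancel out:
print(combine([0.9, 0.1]))  # 0.5
```

Note how two scores of 0.9 combine to a value more extreme than either alone; this is why restricting attention to tokens with extreme scores sharpens the verdict.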

In a way, what's most impressive about this technique is how crude and superficial it is—and yet it works. There is no attempt to divine the meaning of the message. The filter's judgment is based on weighing a multitude of tiny features, like the minutiae of fingerprints. The basic technology of Bayesian text classification was first developed for more general tasks, such as the automated filing, sorting and indexing of documents. Spam filtering is just a special case of this method, but an unusually difficult one, if only because it's an adversarial process: There's someone out there trying to fool the filter.


