There are far more envelope and content tests used in the spam world than can be covered in a series of articles. This limitation requires us to stick to the more popular methods, which in the content-testing world include feature recognizers, collaborative spam-reporting networks, and Bayesian learning engines. By using a combination of these three techniques, you can eliminate the vast majority of the spam that currently gets through to your users.
Feature Recognizers
There are particular features that are the hallmarks of spam, although some of them are also the hallmarks of items like E-mail newsletters, which is why these requested E-mails are so often caught by spam filters (and why you should use a quarantine to let the users sort their own spam from their ham).
A "feature" can be any characteristic of an E-mail message that might tip you off that the mail is (or is not) spam. It can be something blatant, such as an extra mail header that spam software inserts for tracking purposes, or it can be something more subtle, such as the symptoms of mail header forgery.
Feature recognizers use pattern-based rules to look for a large number of features in the headers and in the body of an E-mail message. Many of these rules are contributed by mail administrators around the world, and can be downloaded in regularly-updated sets from public websites, such as the SpamAssassin Custom Rule Emporium. When a new form of spam starts making the rounds, it usually takes less than a day for someone to submit a new rule or two to accurately identify these items. Think of these as "virus signatures for spam." By keeping your rule sets up-to-date, your feature recognizer can better protect you.
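To make that concrete, here is a minimal sketch of how a pattern-based feature recognizer might work. The rule names and scores below are invented for illustration (this is not SpamAssassin's actual rule syntax); the point is simply that each rule pairs a pattern with a score, and a message's total is the sum of the scores of every rule that matches.

```python
import re

# Illustrative rules only (invented names and scores): each rule pairs a
# pattern with a score. Positive scores mark spam features, negative ham.
RULES = [
    (re.compile(r"^X-Mailer: BulkBlaster", re.MULTILINE | re.IGNORECASE), 2.5),
    (re.compile(r"multi[-\s]?level marketing", re.IGNORECASE), 1.8),
    (re.compile(r"limited[-\s]time offer", re.IGNORECASE), 1.0),
    (re.compile(r"^In-Reply-To:", re.MULTILINE), -0.5),
]

def score_message(raw_message):
    """Sum the scores of every rule whose pattern matches the message."""
    return sum(score for pattern, score in RULES if pattern.search(raw_message))

sample = "Subject: Join our multi-level marketing team!\n\nLimited time offer!"
print(score_message(sample))   # 2.8 -- above some chosen threshold, call it spam
```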
One popular spam feature is HTML-formatted mail, in part due to what we discussed in Part I: spammers can embed a unique code into an image access link, allowing them to know who looked at the message, and therefore determine which addresses go to real people. Another reason for HTML mail is URL obfuscation. Scam spammers will often include URLs such as:
http://www.paypal.com:80@3232235521/secure-login.php
Take this URL apart, and you will find that it actually points to host 192.168.0.1, not to www.paypal.com. This trick works because most people don't realize that the full possible format of a URL is really this:
scheme://username:password@host:port/filepath
In our example, then, the scam artist is taking advantage of the optional username and password fields to trick you into thinking you're connecting to www.paypal.com, when in fact "www.paypal.com" is just the username, and "80" is not a port number but the optional password. The real host's IP address is encoded as a "really large number" (the decimal value of the four bytes of a traditional dotted-quad address): 3232235521. When you convert this decimal number back to an IP address, you get 192.168.0.1, but that's not obvious to human eyes, so at a glance this value looks like some sort of "session ID" code.
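If you want to see how little magic is involved, here is a short Python sketch that takes the URL apart and converts the decimal form of the address back into a dotted quad. (The dword_to_ip helper is our own invention; it isn't part of any particular filtering package.)

```python
from urllib.parse import urlsplit

def dword_to_ip(value):
    """Convert a 32-bit decimal ("dword") address back to dotted-quad form."""
    return ".".join(str((value >> shift) & 0xFF) for shift in (24, 16, 8, 0))

url = "http://www.paypal.com:80@3232235521/secure-login.php"
parts = urlsplit(url)
print(parts.username)                     # www.paypal.com -- just a username
print(parts.password)                     # 80             -- just a password
print(parts.hostname)                     # 3232235521     -- the real "host"
print(dword_to_ip(int(parts.hostname)))   # 192.168.0.1
```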
This technique is a great way to trick people into thinking they are interacting with their bank, favorite online auction service, ISP, or other venue that has vital information someone might want to steal. Since legitimate senders have no reason to conceal their URLs, the presence of an obfuscated URL in an E-mail message is a strong sign that the mail is spam, and it's an easy feature for properly-designed feature recognizers to pick out.
Spammers also like HTML for its visual impact. It lets them catch your attention with bright colors, pretty pictures, and large fonts. In fact, this is one area where spammers often go overboard, making their spam even easier to pick out from legitimate mail. Most people don't use HTML to compose the mail they send, and when they do, they tend to stick to reasonable font sizes and default colors like black. When you spot 32-point red text in an HTML E-mail, then, the mail is almost always spam.
This same ability to use colors for text and background lets spammers "hide" text as well, by printing white letters on a white background, or using an extremely tiny font. Why would a spammer want to hide his message from you? He's trying to fool your content filters by stuffing the mail with a bunch of words that usually signal that the mail is legitimate. That way, if your filters see ten words that make the mail look like spam ("buy," "guaranteed," "limited time," "exclusive offer"), but 100 words that make it look like legitimate mail ("dictionary," "rhododendron," "oncology"), the mail is more likely to get through. However, this list of junk terms could distract you from the sales pitch if visible to human eyes. Using HTML font tricks, the spammer can hide them from you, without hiding them from your content filter. Since there are few legitimate reasons to print white text on a white background, this feature is usually associated with spam.
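A feature recognizer can watch for these font tricks with nothing fancier than a couple of regular expressions. The patterns below are deliberately crude, home-grown examples; real rule sets cover hex and named colors, single-quoted attributes, CSS styles, and many more variations.

```python
import re

# Two crude illustrations of HTML font tricks: white text sitting on a
# white background, and the largest <font> sizes (6 and 7).
HIDDEN_TEXT = re.compile(
    r'bgcolor="?#?(ffffff|white)\b.*?<font[^>]*color="?#?(ffffff|white)\b',
    re.IGNORECASE | re.DOTALL)
HUGE_FONT = re.compile(r'<font[^>]*size="?[67]\b', re.IGNORECASE)

def html_font_features(body):
    """Return the names of any suspicious font tricks found in the HTML."""
    features = []
    if HIDDEN_TEXT.search(body):
        features.append("WHITE_ON_WHITE_TEXT")
    if HUGE_FONT.search(body):
        features.append("HUGE_FONT")
    return features

spam = ('<body bgcolor="#ffffff"><font size="7" color="red">BUY NOW</font>'
        '<font color="#ffffff">dictionary rhododendron oncology</font></body>')
print(html_font_features(spam))   # ['WHITE_ON_WHITE_TEXT', 'HUGE_FONT']
```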
Another popular spam tactic is to display an attached image with HTML, but without any text. Technically, mail programs that send HTML-based mail are supposed to include a plain-text version of the mail as well, so the contents can be read by mail programs that aren't HTML-aware. It's easy to forget that there's no requirement that a mail client be able to read HTML, and many popular clients in the Unix world, such as elm and pine, don't support HTML without additional plug-ins. If you receive a message that contains HTML but no plain-text, you can be pretty sure that either it was sent by a poorly-written mail client (of which there are plenty, alas) or it was designed to broadcast spam.
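This check is easy to make with any MIME-aware library. Here is a small sketch using Python's standard email module; the function name and what you do with the result are up to you.

```python
from email import message_from_string

def html_without_plaintext(raw_message):
    """True if the message carries an HTML part but no plain-text alternative."""
    msg = message_from_string(raw_message)
    types = {part.get_content_type()
             for part in msg.walk() if not part.is_multipart()}
    return "text/html" in types and "text/plain" not in types
```

A message that trips this check wouldn't be rejected outright; like any other feature, it would simply add to the message's overall score.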
Not all spam features involve HTML, however. Feature recognizers can also watch for the techniques popularly used to evade content filters. For example, returning to the ever-popular "Viagra" pitches, a spammer might try to fool the filters by using "V|@gra," "V^I#A*G%R*A," or even "Via<!-- nonsense -->gra" (hiding part of the word inside an HTML comment) in order to get the point across in a way that a computer won't easily catch on the server end or in an E-mail client's mail filters.
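Catching these tricks is mostly a matter of writing patterns that tolerate look-alike characters and junk wedged between the letters. The small substitution table below is our own illustrative example; production rule sets are far more exhaustive.

```python
import re

# A tiny table of look-alike characters, plus a pattern for the "junk" a
# spammer may wedge between letters (punctuation or an HTML comment).
LOOKALIKES = {"i": "[i1l|!]", "a": "[a@4]", "g": "[g9]", "e": "[e3]", "o": "[o0]"}
JUNK = r"(?:<!--.*?-->|[^a-z0-9])*"

def obfuscation_pattern(word):
    """Build a regex that still matches a word despite simple obfuscation."""
    parts = [LOOKALIKES.get(ch, re.escape(ch)) for ch in word.lower()]
    return re.compile(JUNK.join(parts), re.IGNORECASE | re.DOTALL)

VIAGRA = obfuscation_pattern("viagra")
for sample in ("V|@gra", "V^I#A*G%R*A", "Via<!-- nonsense -->gra", "viagra"):
    print(sample, bool(VIAGRA.search(sample)))   # all four print True
```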
Then there's "When Feature Recognizers Attack." No computerized tool is perfect. While often associated with spam, the phrase "multi-level marketing," for example, could also appear in a legitimate context, such as a newsletter about spam scams. Most rules are designed to identify spam features rather than ham features, but the basic idea behind feature recognizers is to assign a score to each rule (positive values for spam features, negative values for ham features) and then add up the scores at the end of the process. Features that appear almost exclusively in spam tend to be assigned higher scores than features that are often found in ham as well.
If you've got a large number of spam and ham items sorted into separate piles, a particularly clever feature recognizer can even compute these score values for you, based on how often a rule gets triggered by the mail items in each pile. For example, if you've got 1,000 items in the spam pile and 1,000 items in the ham pile, you could find out how many of the mail items in each pile contain a pattern like "MLM" or variations on the phrase "multi-level marketing." You might find that this rule triggers in 250 of the items in the spam pile (25%), but only in 10 of the items in the ham pile (1%). You could then conclude that when this rule does trigger, the mail is far more likely to be spam than ham: of the 260 items that trigger it, 250 (roughly 96%) are spam. The feature recognizer can then use these proportions to come up with a balanced score for the rule, so that it doesn't receive too much (or too little) weight.
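Here is one way such a calculation might look in code. The log-odds formula and the smoothing constant are our own choices for the sketch; every tool has its own tuning, but the idea is the same: the more lopsided the spam-to-ham trigger ratio, the heavier the weight.

```python
import math

def rule_weight(spam_hits, spam_total, ham_hits, ham_total, smoothing=1.0):
    """Weight a rule by how much more often it fires in spam than in ham.

    The smoothing constant keeps a rule that has never fired in ham
    from being handed an infinite score.
    """
    spam_rate = (spam_hits + smoothing) / (spam_total + 2 * smoothing)
    ham_rate = (ham_hits + smoothing) / (ham_total + 2 * smoothing)
    return math.log(spam_rate / ham_rate)

# The example from the text: the rule fires in 250 of 1,000 spam items
# but in only 10 of 1,000 ham items.
print(rule_weight(250, 1000, 10, 1000))   # about +3.1: a solidly spammy rule
```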
Collaborative Spam-Reporting Networks
One aspect of spam that many forget is that it is a one-to-many broadcast. The typical spammer sends to millions of recipients at a time, so by the time you receive a copy in your mailbox, you can be sure that many thousands of others have already received theirs. Wouldn't it be handy if some of these people could come forth and say, "I've already received that mail, and it's definitely spam"? If enough people did that by the time you received your copy, you could be pretty confident that what you just received was spam. This is the beauty of collaborative spam-reporting networks.
There are a number of these collaborative networks, such as the Distributed Checksum Clearinghouse, Vipul's Razor and Pyzor. These sites maintain databases of spam reported by people all over the Internet, so that a content filter can quickly determine whether a given E-mail has already been reported by others as spam. The more people who have reported a given piece of spam, the more the content filter should be inclined to make the same diagnosis, again by using a scoring system.
Spammers actually do try to make each copy of a piece of spam "unique" for each user, but these collaborative networks are typically smart enough to recognize a particular spam in spite of these "personalizations." The fact that one copy reads "Dear paul387" and another reads "Dear chris24" doesn't make it a different mail item, as far as these databases are concerned. If the rest of the mail fits the same template, a few extra words thrown in won't be enough to make the mail items seem unique.
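The fingerprinting algorithms that DCC, Vipul's Razor, and Pyzor actually use are each their own (and considerably more sophisticated than this), but the core idea can be sketched in a few lines: normalize away the parts spammers like to personalize, then hash what's left. This is a toy illustration, not any network's real algorithm.

```python
import hashlib
import re

def fuzzy_fingerprint(body):
    """Hash a message body after stripping easily-personalized details."""
    text = body.lower()
    text = re.sub(r"\bdear\s+\S+", "dear x", text)          # greeting lines
    text = re.sub(r"\b[\w.+-]+@[\w.-]+\b", "EMAIL", text)   # addresses
    text = re.sub(r"\b\d+\b", "N", text)                    # numbers and IDs
    text = re.sub(r"\s+", " ", text).strip()                # whitespace
    return hashlib.sha1(text.encode("utf-8")).hexdigest()

a = fuzzy_fingerprint("Dear paul387,\nClaim your prize at offer 5512 now!")
b = fuzzy_fingerprint("Dear chris24,\nClaim your prize at offer 9480 now!")
print(a == b)   # True: both "personalized" copies collapse to one fingerprint
```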
One handy function of these networks is that they can give you an idea of how many people have reported an item as spam in real time. This factor lets you make threshold-based decisions; you might decide that if 100,000 people think an item is spam, that's good enough for you. Since this is based on human input, rather than rule-based automation, the results can be more reliable than a computer's decisions. Also, the real-time nature lets these networks respond quickly to new forms of spam that computerized filters using yesterday's rule-sets might miss.
A downside of consulting external databases, however, is that when you receive a piece of mail, your server has to wait for a response from a server somewhere else on the Internet. On a good day, this adds only a second or two to mail processing; when the network is overloaded (or down), the time-lag cost of processing every piece of E-mail can become prohibitive. On a high-traffic site, even the best-case performance hit may be more than you can tolerate.
Bayesian Learning Engines
If you hang out anywhere online where people talk about their favorite spam solutions, you have no doubt seen people bandying about the term "Bayesian filters." The Bayesian learning engine's popularity has skyrocketed over the past year, for good reason: it's a truly effective content-scanning tool.
Learning isn't just a buzzword here. Bayesian engines gobble up the spam and ham that people mark off in their quarantine folders, analyzing the frequency with which certain tokens (words, symbols, or phrases) appear in each. Over time, they (the engines, that is, not necessarily your users) actually get "smarter;" the more spam and ham the engine chews on, the more accurately it understands the big picture.
In a sense, Bayesian (often dubbed "Bayes") learning engines are automated feature recognizers. The feature recognizers we discussed earlier are based on patterns that human beings have spotted and hard-coded into rules. Bayesian learning engines try to do the same sort of thing on their own, without any initial rules to go on. You sort your mail into a spam pile and a ham pile, and let the Bayes engine scan them both to identify patterns without any further human guidance. Every message you receive thereafter can be used to add to its knowledge, just by telling the Bayes engine whether that new item is spam or ham.
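Stripped to its essentials, that training step is just counting. The sketch below uses a deliberately naive tokenizer and keeps everything in memory; a real engine persists its counts and tokenizes far more carefully.

```python
import re
from collections import Counter

TOKEN = re.compile(r"[A-Za-z'$]{3,}")

def tokens(text):
    """Deliberately naive tokenizer: runs of three or more letters."""
    return {t.lower() for t in TOKEN.findall(text)}

class BayesDatabase:
    """Counts how many spam and ham messages each token has appeared in."""

    def __init__(self):
        self.spam_count = 0
        self.ham_count = 0
        self.spam_tokens = Counter()
        self.ham_tokens = Counter()

    def train(self, text, is_spam):
        """Record the tokens of a message the user has sorted into a pile."""
        if is_spam:
            self.spam_count += 1
            self.spam_tokens.update(tokens(text))
        else:
            self.ham_count += 1
            self.ham_tokens.update(tokens(text))

db = BayesDatabase()
db.train("Cheap Viagra, limited time offer!", is_spam=True)
db.train("The urology conference schedule is attached.", is_spam=False)
```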
For example, the fact that the word "Viagra" appears as a token in so many pieces of spam and appears so rarely in legitimate E-mail makes the Bayesian learning engine suspicious when that token shows up in new mail. Unlike ordinary feature recognizers, however, this kind of pattern recognition is highly individual. If you're a urologist, for example, and "Viagra" appears often in your legitimate E-mail, the Bayesian learning engine will regard that token with much less suspicion (possibly even as an indicator of legitimate mail!).
Most importantly, though, you never have to tell the Bayes engine to look for a specific "Viagra" pattern, as you would with ordinary feature recognizers. The Bayesian learning engine should pick out that pattern all on its own, if it's found in a lot of your E-mail. Whether that token is taken to be an indicator of spam or ham depends entirely on whether it shows up more often in your spam pile or your ham pile.
The Bayes engine uses a "confidence level" to indicate the likelihood that the mail as a whole is spam. A low confidence level (say, 10%-20%) is a strong sign that the mail is ham, whereas a high confidence level (90%-100%) suggests the mail is almost certainly spam. If the mix of spam tokens and ham tokens is almost even, the confidence level ends up around 50%, which is the Bayes engine's way of saying that it couldn't make up its mind.
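There are several ways to fold per-token statistics into a single confidence level; the sketch below uses a simple naive-Bayes-style log-odds sum (with our own smoothing), taking token counts of the kind a trainer like the one sketched above would gather.

```python
import math

def confidence(message_tokens, spam_tokens, ham_tokens, spam_count, ham_count):
    """Combine per-token evidence into one confidence level from 0 to 100.

    spam_tokens and ham_tokens map each token to the number of spam or ham
    messages it has appeared in; spam_count and ham_count are the pile sizes.
    """
    log_odds = 0.0   # start at even odds, i.e. a 50% confidence level
    for token in message_tokens:
        p_spam = (spam_tokens.get(token, 0) + 1) / (spam_count + 2)
        p_ham = (ham_tokens.get(token, 0) + 1) / (ham_count + 2)
        log_odds += math.log(p_spam / p_ham)
    return 100.0 / (1.0 + math.exp(-log_odds))

# A toy database: "viagra" seen in 400 of 500 spam items, 1 of 500 ham items.
print(confidence({"viagra"}, {"viagra": 400}, {"viagra": 1}, 500, 500))
# roughly 99.5 -- almost certainly spam
```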
Due to the effectiveness of Bayesian engines, spammers aren't fond of them. In fact, these days many spammers go to the trouble of trying to "poison" the Bayes databases that chew on their mail. As we mentioned in the feature recognizers section, the spammer stuffs a long list of dictionary words somewhere in the E-mail message, hoping that some of those words will appear more often in your ham pile, so that the mail will look more legitimate. While a feature recognizer will simply be fooled or not fooled, for a Bayes engine this technique is meant as a "poison pill." If you train your Bayes engine to recognize this E-mail as spam, you force it to add all those dictionary words to its database of spam tokens. The end result is that legitimate mail that includes those dictionary words in the future can get flagged as spam.
All is not lost, however. The size of your Bayes database determines how vulnerable it is to this poisoning tactic. A gallon of poison in a bathtub of water is quite potent, but that same gallon in the Pacific Ocean is diluted to the point of harmlessness. As your database grows, this tactic has less and less impact. In fact, smarter anti-spam solutions can even learn to identify these blocks of dictionary words (which are usually all lower-case and without any punctuation) and avoid adding them to the database at all: the poison is just another pattern, or feature, after all.
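One simple-minded safeguard (our own heuristic, not any particular product's) is to spot those blocks before training: a long run of lower-case words with no punctuation and almost no repetition doesn't look like prose, so we simply leave it out of the token database.

```python
import re

LOWERCASE_WORD = re.compile(r"[a-z]+")

def looks_like_dictionary_salad(paragraph, min_words=30):
    """Heuristic: a long, unpunctuated run of mostly unique lower-case words."""
    words = paragraph.split()
    if len(words) < min_words:
        return False
    plain = [w for w in words if LOWERCASE_WORD.fullmatch(w)]
    return (len(plain) / len(words) > 0.9 and      # no caps or punctuation
            len(set(plain)) / len(words) > 0.8)    # hardly any repetition

def trainable_paragraphs(body):
    """Drop suspected poison blocks before handing a message to the trainer."""
    return [p for p in body.split("\n\n")
            if not looks_like_dictionary_salad(p)]
```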