We recently held the fifth annual Conference on Email and Anti-Spam, and I’ve been meaning to get around to writing up a “highlights” post. Here, at last, it is.
We were particularly pleased to have a keynote talk by Lois Greisman, head of the Division of Marketing Practices in the U.S. Federal Trade Commission’s Bureau of Consumer Protection. Ms. Greisman told us what the FTC is doing in the war on spam (and related abuse), and what we, the anti-spam community, can do to help. She stayed for the day and talked with a number of the attendees, several of whom are interested in sharing their work with the FTC.
Rather than trying to go through all the presentations and summarize them, I’ve picked three papers to highlight here, because I found them particularly timely, insightful, or interesting. That’s not to say these were the only interesting ones; I just had to select a few. In order of their presentation at the conference (links to the papers are PDFs):
“Exploiting Transport-Level Characteristics of Spam”
[authors: Robert Beverly, Karen Sollins]
We’re always looking for alternatives to content analysis in the fight against spam — not to replace content analysis completely, but to reduce our dependency on it and to find other mechanisms to work alongside it. The authors of this paper have analyzed network traffic at the transport layer and looked for characteristics that differentiate spam from non-spam.
Their results are preliminary but interesting, and the idea merits more study. It’s not yet clear how much the results reflect their specific measurement environment, whether using them would punish legitimate senders that simply have poor connectivity, or how easily spammers could adapt to evade them.
Those concerns mean that much more careful work is needed before transport-level analysis could really be used to combat spam. But our need for mechanisms that don’t rely on message content is great enough that the work is well worth pursuing.
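To make the intuition concrete, here’s a toy sketch (in Python, not from the paper) of turning transport-layer features into a spam score. The feature names, thresholds, and weights are all invented for illustration; the authors derive their actual features and classifier from measured traffic.

```python
# Toy sketch, not the authors' method: score a connection as spam-like
# using hypothetical transport-layer features. All thresholds and weights
# here are invented placeholders.

from dataclasses import dataclass

@dataclass
class ConnectionFeatures:
    rtt_ms: float          # estimated round-trip time to the sending host
    retransmissions: int   # TCP segments retransmitted during the session
    window_bytes: int      # receive window advertised by the sender

def spam_score(f: ConnectionFeatures) -> float:
    """Return a score in [0, 1]; higher means more spam-like.

    The intuition: much spam comes from poorly connected, congested hosts
    (e.g., botnet members on residential links), which shows up as high
    RTT, frequent retransmission, and small windows.
    """
    score = 0.0
    if f.rtt_ms > 200:
        score += 0.4
    if f.retransmissions > 2:
        score += 0.3
    if f.window_bytes < 16_384:
        score += 0.3
    return round(score, 2)

# A host on a slow, lossy link looks suspicious:
print(spam_score(ConnectionFeatures(rtt_ms=450, retransmissions=5, window_bytes=8192)))  # 1.0
```

Note that the same features that flag a botnet host on a congested residential link would also flag a legitimate sender with poor connectivity, which is exactly the false-positive concern mentioned above.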
“Social Honeypots: Making Friends With A Spammer Near You”
[authors: Steve Webb, James Caverlee, Calton Pu]
The conference is looking outside of email, addressing spam in other contexts. One that crops up here and there is social networking — in particular, spam identities and spam “friend” requests. The authors created 51 “honeypot” MySpace identities, one for each U.S. state and one for the District of Columbia. They created bots to keep them all logged in all the time, to make them more appealing (currently logged-in identities show up higher in lists). They waited to see who befriended them, and then the bots automatically rejected the requests.
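As a rough sketch of what such a bot’s main loop might look like (the Session object and its methods here are hypothetical stand-ins; the real bots drove MySpace’s web interface, as described in the paper):

```python
# Hypothetical honeypot bot loop, for illustration only; the Session API
# (pending_friend_requests, reject, keep_alive) is a made-up stand-in.

import time

collected_profiles = []  # profiles of would-be "friends", logged for analysis

def run_honeypot(session, poll_seconds=300):
    """Log every incoming friend request, reject it, and stay logged in."""
    while True:
        for request in session.pending_friend_requests():
            collected_profiles.append(request.sender_profile)  # record the requester
            session.reject(request)                            # then automatically decline
        session.keep_alive()  # staying logged in ranks the profile higher in listings
        time.sleep(poll_seconds)
```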
They found that friend requests did not come from geographically proximate senders, but that their Midwest identities received friend requests more often than those in other U.S. regions. Most of the originators of the requests claimed to be in California.
They got almost 1600 requests in all, mostly over a two-month period. After eliminating duplicates and comparing the profiles, they found only 226 that were distinct enough not to count as effective duplicates. Boiling those down to target URLs, and removing duplicates and redirection there as well, left just 6 profile URLs and 5 redirection URLs: 2355 collected URLs eventually reduced to 11.
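That reduction is essentially a URL-deduplication exercise: follow each collected URL through its redirects and keep only the distinct final targets. Here’s an illustrative sketch of the idea; the URLs and the redirect table are made up, and the authors’ actual pipeline worked over their collected profile data.

```python
# Illustrative sketch: collapse many collected URLs down to the set of
# distinct final targets by following known redirects. All data here is
# made up for the example.

def resolve_redirect(url: str, redirects: dict[str, str]) -> str:
    """Follow a chain of known redirects to the final destination."""
    seen = set()
    while url in redirects and url not in seen:  # stop on an unknown URL or a loop
        seen.add(url)
        url = redirects[url]
    return url

collected_urls = [
    "http://redir.example/a",
    "http://redir.example/b",
    "http://spam-profile.example/1",
    "http://spam-profile.example/1",
]
known_redirects = {
    "http://redir.example/a": "http://spam-profile.example/1",
    "http://redir.example/b": "http://spam-profile.example/2",
}

final_targets = {resolve_redirect(u, known_redirects) for u in collected_urls}
print(len(collected_urls), "collected ->", len(final_targets), "distinct targets")
```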
“Breaking out of the Browser to Defend Against Phishing Attacks”
[authors: Diana Smetters, Paul Stewart]
“Phishing”, the practice of trying to fool people into giving away personal information, usually credentials for financial accounts, now accounts for a significant share of the spam being sent. It can also be particularly tricky to separate from legitimate mail, and failure to filter it exposes users and financial institutions to real losses.
Recognizing that phishing is mostly a social-engineering problem, and that there are real limits to what anti-spam technology can do about it, the authors designed and tested some user-interface changes to address the issue. They created a set of secure bookmarks for protected sites, such as banks and credit card providers. The bookmarks reside in a special, secure container, and they launch a specially locked-down browser that refuses to visit other sites and disallows such things as cross-site scripting. The container holds only authorized secure bookmarks, and both it and the bookmarks in it are protected from tampering.
The idea is that if users are taught to access high-value sites only through the secure bookmarks, they can’t be fooled into giving the login information for their bank accounts to a fake web site run by a fraudster.
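The enforcement boils down to an allowlist check on every navigation. Here’s a minimal sketch of that check, not the authors’ implementation; the allowlist would come from the secure bookmark, and the hostnames here are invented.

```python
# Minimal sketch of the locked-down browser's navigation check, not the
# authors' code. ALLOWED_HOSTS would come from the secure bookmark; the
# hostnames here are invented.

from urllib.parse import urlparse

ALLOWED_HOSTS = {"www.examplebank.com", "online.examplebank.com"}

def navigation_allowed(url: str) -> bool:
    """Permit navigation only to allowlisted hosts, and only over HTTPS."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in ALLOWED_HOSTS

print(navigation_allowed("https://www.examplebank.com/login"))        # True
print(navigation_allowed("https://examplebank-login.evil.example/"))  # False: lookalike
```

The design choice is to move the security decision out of the user’s hands: the user picks a trusted bookmark once, and the browser enforces the rest.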