Lessons From Fighting Cybercrime
The history of anti-spam teaches us about half-baked ideas and how people succeeded or failed to implement them. The analogy of evolution, while limited, demonstrates how reactionary solutions can achieve strategic goals before they are made obsolete by countermeasures.
May 17, 2009
How do you herd cats? In a series of blogs starting today, I'll explore the history of fighting cybercrime: how and why certain solutions worked while others failed, how we can recreate success, and what lessons we can distill to build business solutions, effect change in communities -- and even fight terrorism.
One of many failed spam-fighting ideas is charging for email (raising the cost of sending it): spammers would supposedly have made less money, making spam less profitable to send.
Asking for money (or postage stamps) with email to combat spam is doomed to fail -- spammers already use technology that sidesteps such measures. Even if they didn't, deploying such a scheme across the world's email infrastructure doesn't seem likely.
Spam is sent by botnets (armies of compromised computers controlled by criminals). These bots would simply adapt and spend stamps already bought by the users who own the compromised machines. It's an example of evolution teaching criminals how to be better at crime.
Thus, by fighting criminals and forcing them to learn, we make our situation worse. I had the honor of being the first to introduce this argument into modern security discussion; it was later elaborated upon by my respected colleague Paul Vixie. Paul introduced the idea that since cybercrime results from financial incentive, the solution must be economic in nature as well.
A permanent solution to cybercrime, economic or otherwise, doesn't exist yet; the nature of combating it is mostly reactive.
The criminals have a direct economic incentive to retain their ROI and reach their quarterly goals. Therefore, reactive solutions are quickly defeated, again and again, much like we have seen in terrorism and the drug wars. On the other hand, in fighting cybercrime, knee-jerk reactions are often the only viable option we have. At the very least we can strategize for a desired outcome.
When ISPs started blocking port 25/TCP outgoing (SMTP) and limiting it to their servers alone, spammers changed their tactics. Previously, bots sent email directly to any server in the world. Now, they had to go through the ISP's own mail servers. This caused spammers to develop new bots that used real user email accounts -- and thus the ISP's servers -- to send out spam.
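The measure itself is simple to express as firewall policy. A minimal sketch, assuming a Linux router running iptables; the interface name and the relay address (a documentation-range placeholder) are hypothetical, not details from any particular ISP:

```shell
# Allow customers to reach the ISP's own mail relay on port 25
# (192.0.2.25 is a placeholder address for the ISP's SMTP server).
iptables -A FORWARD -p tcp --dport 25 -d 192.0.2.25 -j ACCEPT

# Reject all other outbound SMTP from the customer-facing interface
# (eth1 here), forcing mail through servers the ISP can monitor.
iptables -A FORWARD -i eth1 -p tcp --dport 25 -j REJECT --reject-with icmp-port-unreachable
```

The effect is exactly the evolutionary pressure described above: direct-to-anywhere delivery stops working, so the bots must authenticate through the ISP's relay, where their traffic becomes visible.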
While it wasn't popular at first, I was a proponent of blocking port 25, raising it and advocating for it whenever and wherever I could. (To my knowledge, it was a colleague, Dave Rand, who first introduced the concept.)
The strategy had some tradeoffs. Mainly, ISPs had to invest resources in upgrading their infrastructure to cope with the extra email. The positive results outweighed the negative, however: service providers can now easily pinpoint who the offenders are and automate the abuse-handling process. Not to mention save bandwidth.
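That abuse-handling automation can be sketched roughly. Assuming the ISP's relay logs one line per accepted message with the authenticated account name -- the log format, field name, and threshold below are all hypothetical placeholders, not any real server's output -- a minimal offender-detection pass might look like:

```python
from collections import Counter

# Messages per log window above which an account is flagged for the
# abuse desk; an illustrative placeholder, tuned per deployment.
THRESHOLD = 500

def flag_offenders(log_lines, threshold=THRESHOLD):
    """Count accepted messages per authenticated account and return
    the accounts exceeding the threshold, busiest first."""
    counts = Counter()
    for line in log_lines:
        for field in line.split():
            if field.startswith("account="):
                counts[field.split("=", 1)[1]] += 1
    return [acct for acct, n in counts.most_common() if n > threshold]

# Example: two normal users and one compromised account.
logs = (["2009-05-17T10:00:00 account=alice status=sent"] * 3
        + ["2009-05-17T10:00:01 account=mallory status=sent"] * 600
        + ["2009-05-17T10:00:02 account=bob status=sent"] * 5)
print(flag_offenders(logs))  # -> ['mallory']
```

Once every outbound message passes through the ISP's own servers, a pass like this is all it takes to turn a flood of spam into a short list of compromised accounts.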
Criminals were forced to evolve in a desirable direction, which is a victory on its own. Evolution in capabilities occurs to circumvent security measures. By limiting the spammers' options, we pushed them onto a technological battleground where we have more control.
In my next blog post, I'll discuss why some spam solutions get implemented and others never get past the drawing board, and the role of user psychology here.
Follow Gadi Evron on Twitter: http://twitter.com/gadievron
Gadi Evron is an independent security strategist based in Israel. Special to Dark Reading.