Inconvenient Lack of Truth
We'll never be able to fix our security problems until we start truthfully sharing breach information
When I graduated from the University of Colorado with a history degree, I was fairly certain it would be only marginally more useful to my security career than my unofficial minor in molecular biology. Sure, I'd get to mix in analogies about the Maginot Line and antibodies, but you can't swing a dead PowerPoint without hitting those two.
As with many things in life, I was wrong.
When I began my career in information security, I never imagined we would end up in a world where we have as much need for historians and investigative journalists as we do technical professionals. It's a world where the good guys refuse to share either their successes or failures unless compelled by law. It's a world where we have plenty of information on tools and technologies, but no context in which to make informed risk decisions on how to use them.
Call me idealistic, but there is clearly something wrong with a world where CISOs are regularly prevented by their legal departments from presenting their successful security programs at conferences.
Once I realized that the bad guys were openly sharing information and techniques while the good guys weren't, I decided it was time to put those history skills to work and see whether there is anything we can learn from these endless data breaches. The result is a presentation, a sort of "An Inconvenient Truth" of my own, entitled "Understanding and Preventing Data Breaches." I'll be giving it at a special breakfast session at the RSA Security Conference on April 8.
While we have no shortage of breaches, we face a dearth of good information. I've spent countless hours combing through every piece of public information on breaches, both major and minor, to identify common threads, root causes, and effective defensive techniques.
I've learned how we drew exactly the wrong lesson from the breach at Egghead.com. I've learned how the failures at ChoicePoint were a business decision (one the CEO lied about on the record), not a technology failure. I've learned how all the statistics we use are wrong, manipulated by the vendor community to sell us products we sometimes need and often don't.
My research leads to some conclusions that may be unsurprising but are often ignored:
1. Blame the system, not the victims, for identity fraud.
We suffer identity fraud because the financial system now uses a single, insecure, not-very-secret identifier, rather than relationships, to define and issue credit. By eliminating the use of Social Security numbers as the sole attribute to define our credit records, we will materially reduce identity fraud rates. Since it is consumers, not the financial system, that suffer most from the current system, market forces will not be effective in forcing this change and we will (very unfortunately) need more ugly regulation.
2. Blame the credit card companies, not the retailers, for credit card fraud.
Our system is now designed to push as much risk as possible onto retailers, and then onto credit card processors, rather than onto the issuing banks. The credit card companies themselves bear very little fraud risk in the current system. The PCI Data Security Standard, for example, is really just a mechanism to push risk and costs onto retailers. We have the knowledge and technologies to revamp the credit card system and significantly reduce risk using techniques such as multi-part transaction encryption (a rough sketch of the general idea appears after this list). But market forces again fail us: it's far cheaper for Visa, MasterCard, and friends to issue costly standards for retailers than to secure the back-end system.
3. Consumers suffer from identity fraud, retailers from credit card fraud.
As consumers, we are well protected from credit card fraud, assuming we pay even the most casual attention to our monthly statements. It's the retailers (and banks) that bear those losses. On the other hand, once an organization loses our Social Security number, our financial future is at risk for the rest of our lives.
4. We need fraud disclosure, not breach disclosure.
We are making our security decisions based on breach disclosures mired in self-defensive public relations verbiage. But although we know we face an epidemic of data breaches, we have little to no information on the sources of fraud. In only a few, rare cases do we know that a breach resulted in fraud, and how. If we force the financial system to disclose fraud origins and rates, we can make much more accurate decisions on where to mitigate risk.
5. We need public root cause analysis.
Breach disclosures were designed to inform consumers so they can protect themselves, but we now try to use them to make enterprise security decisions. Sources like Attrition.org and the Privacy Rights Clearinghouse do an amazing job of collecting and classifying public disclosures, but outside of lost laptops, we rarely learn the real details of how a breach occurred. Without this knowledge, we can't make effective risk decisions.
6. Breach disclosures teach us the wrong lessons.
Are lost laptops really the biggest source of losses? Or are they just the most commonly reported? Based on my research, I know for a fact that many companies fail to disclose breaches, or game the numbers and facts when they do. Lost laptops may be the biggest source of breach disclosures, but are they the biggest source of fraud?
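To make point 2 a bit more concrete, here is a minimal, illustrative Python sketch of one back-end approach, tokenization: the card number is swapped for a random surrogate the moment it reaches the processor, so the retailer never stores anything a thief can reuse. This is a stand-in for the general idea, not a description of any card network's actual system or of multi-part transaction encryption specifically; the ProcessorVault and Retailer names are hypothetical and exist only for this example.

    import secrets

    class ProcessorVault:
        """Stands in for the processor/issuer side: it alone can map
        surrogate tokens back to real card numbers (PANs)."""

        def __init__(self):
            self._token_to_pan = {}

        def tokenize(self, pan: str) -> str:
            # Issue a random surrogate with no mathematical relationship to the PAN.
            token = secrets.token_hex(8)
            self._token_to_pan[token] = pan
            return token

        def settle(self, token: str, amount: float) -> bool:
            # Only the processor can resolve the token for settlement.
            return token in self._token_to_pan

    class Retailer:
        """The retailer keeps only tokens; a breach of its records
        exposes nothing a fraudster can reuse as a card number."""

        def __init__(self, vault: ProcessorVault):
            self.vault = vault
            self.transaction_log = []  # (token, amount) pairs, never PANs

        def charge(self, pan: str, amount: float) -> bool:
            token = self.vault.tokenize(pan)  # the PAN passes through but is never stored
            self.transaction_log.append((token, amount))
            return self.vault.settle(token, amount)

    if __name__ == "__main__":
        vault = ProcessorVault()
        store = Retailer(vault)
        store.charge("4111111111111111", 42.50)
        print(store.transaction_log)  # tokens only; no card numbers held at the retailer

The point of the design is simply that a breach of the retailer's transaction log yields only tokens, which are worthless without the processor's vault; the risk stays on the back end, where a fix like this puts it.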
Based on this ongoing research, it's clear that the system is broken in multiple ways. It's not our failure as security professionals; it's a failure of the systems we are dedicated to protecting.
While my presentation focuses on using what little information we have to make specific tactical recommendations, the truth is we'll just be spinning our wheels until we start sharing the right information, our successes and our failures alike, and start fixing the system itself rather than just patching holes at the fringes.
— Rich Mogull is founder of Securosis LLC and a former security industry analyst for Gartner Inc. Special to Dark Reading.