Security Ratings Are a Dangerous Fantasy
They don't predict breaches, and they don't help people make sound business decisions or make users any safer.
Security professionals don't like security ratings, also known as cybersecurity risk scores. Partly this is because people don't like being criticized. But mostly it's because security ratings don't work, and cannot work as presently conceived and sold. The industry is a marketing facade. Security ratings do not predict breaches, nor do they help people make sound business decisions or make anyone safer.
Why are security ratings so bad? For starters, the data is terrible. The quality of security ratings is contingent on the quality of the underlying data and the science with which this data is interpreted. Unfortunately, the cybersecurity ratings industry has nowhere close to the depth and breadth of data of other ratings sectors.
Security ratings companies do not have accurate network maps, and ratings are regularly deflated by misattribution or a flawed understanding of network configurations. These companies also typically rely on incomplete third-party data and do not communicate its caveats or error estimates to their customers.
By the time you read them, security ratings are already out of date, because the data is not quickly refreshed and refresh timestamps aren't clearly communicated.
Another challenge is that ratings aren't scientific or statistically rigorous. Given those problems, vendors committed to a ratings product have no choice but to hack their way to a partial solution. That partial solution usually takes the form of a subjective weighting of multiple factors, which will almost never align perfectly with real security priorities.
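To make that concrete, here is a toy sketch of what such a weighted-factor rating might look like. The factor names, weights, and grade cutoffs below are invented for illustration only and do not represent any vendor's actual methodology.

```python
# Toy weighted-factor rating, for illustration only. The factors, weights,
# and grade cutoffs are invented; no vendor's real methodology is shown.

FACTORS = {                       # observed signals, normalized to 0-100
    "patching_cadence": 95,
    "open_ports": 80,
    "tls_configuration": 90,
    "leaked_credentials": 40,
}

WEIGHTS_A = {"patching_cadence": 0.4, "open_ports": 0.2,
             "tls_configuration": 0.2, "leaked_credentials": 0.2}
WEIGHTS_B = {"patching_cadence": 0.1, "open_ports": 0.2,
             "tls_configuration": 0.2, "leaked_credentials": 0.5}

def score(factors, weights):
    """Weighted average of factor values -- the weights are a product
    manager's judgment call, not a measured link to breach probability."""
    return sum(factors[name] * weights[name] for name in factors)

def grade(value):
    return "A" if value >= 90 else "B" if value >= 80 else "C" if value >= 70 else "D"

for label, weights in (("A", WEIGHTS_A), ("B", WEIGHTS_B)):
    s = score(FACTORS, weights)
    print(f"weighting {label}: score={s:.0f}, grade={grade(s)}")
```

The same underlying observations earn a B under one defensible-sounding weighting and a D under another; neither grade is tied to actual breach likelihood.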
Ratings are whatever product managers want them to be; they are not based on standards or risk science. Ratings also don't make sense for the vast majority of businesses, which run small, third-party-managed, increasingly cloud-hosted networks with a tiny Internet attack surface.
Today's security ratings can't tell us what to care most (or least) about; the worst cyber incidents are large, unpredictable events, much like wildfires. That's why these vendors provide subjective ratings, not probabilities.
Because security ratings are unreliable, companies cannot use them to make important business decisions or drive security outcomes.
What Would Be Better Than Ratings?
First, large companies and government agencies can subsidize downstream cybersecurity, using threat intelligence and information-sharing programs to benefit small-to-midsize businesses that can't afford full security programs. A key part of such an initiative should be in-sector information exchange; within a given sector, it's probably not a secret which commonly shared vendors have regular technical issues.
Second, risk assessment partnerships can cut across levels of the security stack, correlating data from endpoints, internal network activity, and the public Internet to evaluate an organization's posture more comprehensively. An accurate shared perspective on the state of cybersecurity requires buy-in from on-premises and network product manufacturers and/or the evaluated organizations themselves.
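As a minimal sketch of what such cross-layer correlation could look like, assuming three hypothetical feeds keyed by hostname (the feeds, field names, and hosts are invented, not any particular product's schema):

```python
# Minimal sketch of cross-layer correlation. The three feeds and their
# field names are hypothetical, not drawn from any real product.

from dataclasses import dataclass

@dataclass
class AssetView:
    hostname: str
    endpoint_agent: bool          # EDR telemetry present on the host?
    internal_anomalies: int       # alerts from internal network monitoring
    external_exposures: int       # findings from public Internet scanning

def correlate(endpoint_feed, network_feed, scan_feed):
    """Join the three feeds on hostname so each asset is assessed from
    inside and outside, not from external scan data alone."""
    hostnames = set(endpoint_feed) | set(network_feed) | set(scan_feed)
    return [
        AssetView(
            hostname=host,
            endpoint_agent=endpoint_feed.get(host, False),
            internal_anomalies=network_feed.get(host, 0),
            external_exposures=scan_feed.get(host, 0),
        )
        for host in sorted(hostnames)
    ]

# Example: "db01" looks clean from the outside but lacks endpoint coverage
# and is generating internal alerts -- a blind spot for outside-in ratings.
for view in correlate(
    endpoint_feed={"web01": True, "db01": False},
    network_feed={"db01": 7},
    scan_feed={"web01": 2},
):
    print(view)
```

A rating built on this joined view would at least be grounded in inside-and-outside evidence rather than external observation alone.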
Why Are Ratings Dangerous?
Ratings companies have distorted reality for the sake of a cheap, nearsighted market advantage. These distortions have the potential to misallocate valuable and scarce resources, like expert labor hours and dollars for technology.
If we really want to make cybersecurity and Internet safety better, then we have to start with a common understanding of the problems, and then build technology and process solutions. Reducing the complexity and nuance of a highly technical practice to a round number or letter grade takes us further away from reality, creating an unwelcome distraction for those of us still living in it.