The 20 Worst Metrics in Cybersecurity
Security leaders are increasingly making their case through metrics, as well they should — as long as they're not one of these.
September 19, 2019
After a decade or more of exhortations from cybersecurity pundits that CISOs need to be more data-driven and speak in the language of business — namely, through numbers and measurement — the metrics message is finally sinking in. Whether it is to justify spending, quantify risk, or simply keep the executive suite up to speed on security, CISOs' discussions are now awash in dashboards, charts, and key performance indicators. The only problem? A lot of the numbers security teams and their leadership use are, well, not very useful.
In fact, many of these measurements are vanity metrics: presented with little context, collected in volume but analyzed little, and often tied to the wrong observables to truly communicate risk. The Edge recently asked security experts around the industry about their least favorite metrics — and boy, did they have a lot to say. The following are 20 of the worst metrics in cybersecurity, as described by the people who live and breathe security every day.
Overly Complex Metrics
Says Caroline Wong, chief strategy officer at Cobalt.io: "Before you present a security metric with a complex calculation behind it — whether it's something formal like FAIR or a customer security score that you use internally — consider how familiar your audience may or may not be with the calculation behind the score. If your audience is not familiar with how you get to the number(s) you're presenting, you may find yourself defending the methodology and calculation more than actually discussing the security metric itself, its meaning, and the action you recommend as a result."
Shock And Awe Metrics
Shock-and-awe volume metrics do exactly what the name implies. For example: There are 23,456 unpatched vulnerabilities. By itself, that number carries no context or risk consideration.
Says Brian Wrozek, CISO at Optiv: "Is this figure good or bad, normal or shocking, rising or falling? Are the vulnerabilities old or new? Are the vulnerabilities on high- or low-value assets? Are there many vulnerabilities on a few assets or a few vulnerabilities on many assets? All of those contextual signs matter. Unfortunately, context is left out of a lot of the eye-popping security statistics we see."
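The contextual breakdown Wrozek describes can be produced from the same scan data that yields the headline number. Here is a minimal Python sketch (the field names and sample records are hypothetical) that turns a raw vulnerability count into counts grouped by asset value and vulnerability age:

```python
from collections import Counter
from datetime import date

# Hypothetical scan export: each finding is tagged with the asset's
# business value and the date the vulnerability was published.
findings = [
    {"asset_value": "high", "published": date(2019, 8, 1)},
    {"asset_value": "low", "published": date(2016, 3, 15)},
    {"asset_value": "high", "published": date(2019, 9, 10)},
    # ...thousands more in a real export
]

def age_bucket(published, today=date(2019, 9, 19)):
    """Label a finding as 'new' (under 90 days old) or 'old'."""
    return "new" if (today - published).days < 90 else "old"

# The headline number, plus the context that makes it meaningful.
print(f"Total unpatched findings: {len(findings)}")
by_context = Counter(
    (f["asset_value"], age_bucket(f["published"])) for f in findings
)
for (value, age), count in sorted(by_context.items()):
    print(f"{value}-value assets, {age} vulnerabilities: {count}")
```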
Qualitative Metrics
Says Rob Black, founder and managing principal, Fractional CISO: "Qualitative cybersecurity metrics are horrible at successfully driving the correct organizational behavior. Many organizations use the high, medium, and low measurements for risk. This is wrong on so many levels.
"You would never hear someone in the finance department saying that we need 'high' to fund the project. They would give a number. So should cybersecurity professionals. Try getting 'medium' insurance. These qualitative metrics do not work for other lines of business. They should not be used by the security department. Qualitative metrics should go the way of the cubit!"
One Attack Risk Metric To Rule Them All
Says Brian Contos, CISO at Verodin: "'How secure are we from attacks?' When I see [a single] metric created [to answer this question], I cringe because generally the math is predicated on the juxtaposition of discovered vulnerabilities to patched vulnerabilities. That's a great metric to have when trying to understand how successful you are at patching vulnerabilities, and we should all, of course, be doing this. But it doesn't really address how secure you are from an attack.
"This [is gauged with metrics that] can be broken into categories, such as: How effective are my network, endpoint, email, and cloud security tools? How effective is my MSSP in adhering to their SLAs? How effective is my security team at responding to incidents? And how effective are the processes that my security team follows?"
Growth Of Security Program
Says Ernesto DiGiambattista, founder and chairman at ZeroNorth: "More people, applications, and tools are often considered a measure of success, but this approach is flawed as it does not necessarily equate to an improved security posture. More important considerations are the degrees to which you are closing gaps in your security program, often with the people and tools you already have in place. Of course, growth might be necessary in some or all areas, but this metric alone is certainly not a measure of success."
CVSS-Based Risk Scoring
Says Michael Roytman, chief data scientist at Kenna Security: "Only a small percentage of all vulnerabilities are ever exploited, but CVSS scores don't reflect this truth. CVSS scores consider neither how widespread a vulnerability is nor the public availability of a known exploit. Essentially, CVSS does not take into consideration the threat or the probability that a vulnerability will be exploited as part of a hack, and yet many organizations rely on it as their sole compass for patching vulnerabilities.
"When security teams are evaluating which vulnerabilities need to be patched first, their prioritization needs to go beyond CVSS and consider the likelihood of these vulnerabilities being exploited."
Capability Maturity Model Integration Scores
Says Brad Nigh, director of professional services and innovation at FRSecure: "Organizations often use CMMI as a classification of how mature components of their security programs are. CMMI focuses on process and documentation, with the benefit that a new employee can be plugged in with minimal disruption to the process or program. The problem with the CMMI scale is that it doesn't factor in the value of the assets an organization has.
"As a result, you get a false sense of security — an assumption that you're safe because of your well-oiled processes without giving consideration to whether the processes actually work for your environment and if they address your biggest risks/vulnerabilities."
Mean Time To Detect/Respond
Says Randy Watkins, CTO at CriticalStart: "Most organizations recognize MTTD and MTTR as the de facto metrics when it comes to the investigation of cybersecurity alerts. The problem comes in measuring the 'mean' time to respond. With the number of alerts actually requiring response, looking at the mean time may impose an artificial ceiling on the time available to triage a compromise.
"To account for the investigations that take longer for triage and response, opt instead for the median time to detect. Eliminating outliers on both sides of the timeline will give a more accurate picture of the security team's performance in regard to response."
Employees Completed Training
Says Michael Coates, CEO and co-founder of Altitude Networks: "The percentage of employees who have completed security training is a false flag. It gives a false sense of the security posture and resiliency of a corporation. Security awareness is good and shouldn't go away. But if an organization is deriving any sort of confidence merely because a high percentage of employees received annual training, then it is really focused on the wrong items."
Number Of Records Breached
Says Jeff Williams, CTO and co-founder at Contrast Security: "Number of records breached is a very poor way for companies and individuals to understand the severity of a breach. An attacker could completely own all of a company's servers, steal all their money, and destroy all their records without disclosing a single 'record.'"
Mean Time To Failure
Says Pankaj Parekh, chief product and strategy officer at SecurityFirst: "This is often misleading because in a modern, complex data center, individual components will always fail. It's much more meaningful to measure the degree of fault tolerance and resiliency in the infrastructure so that if any part fails, the overall data center operation is not impacted. This is the idea behind the 'Chaos Monkey,' invented at Netflix in 2011 to randomly disable a server and make sure the chaos was survivable."
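A conceptual sketch of that idea, not Netflix's actual Chaos Monkey (which terminates real cloud instances): remove one node from a pool at random and check whether the service still has the capacity to stay healthy. All names and figures are hypothetical:

```python
import random

# Hypothetical server pool: each node handles 100 requests/sec, and the
# service needs 250 requests/sec of capacity to stay healthy.
pool = ["web-1", "web-2", "web-3", "web-4"]
CAPACITY_PER_NODE = 100
REQUIRED_CAPACITY = 250

victim = random.choice(pool)  # the "chaos" step: kill one node at random
survivors = [node for node in pool if node != victim]
remaining = len(survivors) * CAPACITY_PER_NODE

print(f"Terminated {victim}; remaining capacity: {remaining} req/sec")
print("Survived" if remaining >= REQUIRED_CAPACITY else "Outage: add redundancy")
```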
Number Of Threats Blocked By Security Controls
Says Tim Bandos, vice president of cybersecurity, Digital Guardian: "Of course, it sounds amazing to report to the board that your controls blocked millions upon millions of threats at your perimeter firewall, but anecdotally this is the absolute worst. It sends the wrong message in relation to the effectiveness of your cybersecurity program and doesn't truly gauge how resilient your organization is to an actual threat, such as ransomware or a state-sponsored attack.
"A better metric, in my opinion, is the mean cycle time from initial infection to detection, or the duration to neutralize a successful threat, because at some point they will get in!"
Counting Vulnerabilities
Says Martin Gallo, director of strategic research at SecureAuth: "An example of a commonly used vanity metric is counting the vulnerabilities affecting an application, system, or network, and then using that count as a measurement of how secure the system is. While we can all agree that it's important to reduce the number of vulnerabilities, merely counting issues without considering the potential impact and likelihood of those vulnerabilities being exploited is a recipe for poor risk management.
"Likewise, some corporate assets are more critical than others. Applying the same metrics to the most vital assets and to those of lesser importance can cause confusion and likely won't yield any particular action."
Phishing Click Rate
Says Dennis Dillman, vice president of security awareness at Barracuda Networks: "Though lowering click rate seems like a good way to demonstrate the ROI of your user-awareness solution, it shouldn't be the main focus of your training program. When focusing entirely on lower click rates, admins tend to repeatedly send very similar phishing emails to their users. This repetition teaches users to recognize a spear-phishing attack, but it doesn't prepare them for the variety of attacks they might encounter.
"There are metrics beyond the click rate that are important to pay attention to. To better assess the effectiveness of your training program, look at how many users enter their credentials on a spoofed landing page, how many reply to your simulations, and how many suspicious emails get reported to your IT team."
Days To Patch
Says Menachem Shafran, vice president of product at XM Cyber: "In many organizations, this is a very basic and common metric because it is easy to get from a vulnerability scanner. Most organizations track how long it takes them to patch vulnerabilities, either in general or, in better cases, broken down by CVSS risk score and asset group. The problem with this metric is that it doesn't really reflect your current risk. Your environment might contain vulnerabilities that have a low score and sit on noncritical assets yet could help adversaries gain access to more important assets."
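Shafran's scenario, in which a low-score vulnerability on a noncritical asset opens a path to the crown jewels, can be modeled as simple graph reachability. A minimal sketch over a hypothetical environment:

```python
from collections import deque

# Hypothetical environment: an edge is a lateral-movement opportunity
# created by an exploitable (even low-CVSS) vulnerability.
attack_graph = {
    "internet": ["print-server"],          # low score, "noncritical" asset
    "print-server": ["file-server"],
    "file-server": ["domain-controller"],
    "domain-controller": [],
}

def reachable(graph, start, target):
    """Breadth-first search: can an attacker reach target from start?"""
    queue, seen = deque([start]), {start}
    while queue:
        node = queue.popleft()
        if node == target:
            return True
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return False

print(reachable(attack_graph, "internet", "domain-controller"))  # True
```

Days-to-patch would treat the print-server finding as low priority; path analysis shows it is the first hop toward the domain controller.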
Number Of Incidents Handled
Says Nimmy Reichenberg, chief strategy officer at Siemplify: "When it comes to security operations, my least favorite vanity metric is 'number of incidents handled.' This is a classic case of a metric that reports on 'busyness' rather than 'business.' [It] fails to flesh out how good SecOps is at understanding which incidents actually need handling, how quickly the most critical events are addressed to reduce dwell time, and how good the function is at automating away false positives to reduce the incidents that need handling in the first place."
Incidents Remediated Per Staff Member
Says Chris DeRamus, CTO and co-founder of DivvyCloud: "Another useless metric is tracking the number of incidents remediated per staff member. In today's cybersecurity landscape, the number of active threats facing an organization is easily in the millions. Similarly, the number of incidents each staff member can remediate is inconsequential compared to the total number of threats and vulnerabilities, even for the most experienced and skilled security practitioners.
"It's not humanly possible to track all active threats and remediate all security incidents in real time, so companies shouldn't waste their time with these metrics."
Case Open Time/Time To Close
Says Joe McMann, chief security officer and strategy lead at Capgemini: "One particular example of a metric that gets us riled up is 'length of time to close an incident,' as it's completely ambiguous, highly variable, and has a multitude of dependencies. The objective shouldn't be to close a ticket as fast as possible to keep a queue at zero; this isn't a call center.
"Our real goal as an enterprise defender should be effective response, complete and thorough analysis, and leveraging lessons learned to drive toward a more active defensive posture. I want to measure my ability to enumerate the entire attack life cycle, and I want confidence that this analysis has resulted in new signatures, detections, or mitigations."
Percent Of Security Program Controls Covered
Says Todd Boehler, vice president of product strategy at ProcessUnity, and Ed Leppert, founder of Cybersecurity GRC: "The percent of security program controls covered in policies is a double-edged metric. It is important to make sure your policies are comprehensive and cover the controls within your security program, so I don't want to diminish the importance of this as a metric.
"But great policies won't reduce your risk unless they are understood and adhered to, so you also need to have a corresponding metric along with this to measure the understanding of the policy by employees — testing knowledge after reviewing — and/or policy adherence via assessments."
Open Tickets Per Analyst
Says Chris Triolo, vice president of customer success at Respond Software: "We hate this one — it becomes a race to see how quickly you can close a ticket rather than to analyze and remediate, or to ensure we don't have a problem. This is the classic 'what gets measured gets done' problem.
"If we measure the number of closed tickets, you get a lot of closed tickets vs. measuring the number of tickets with a true incident that needed remediation – that would be a valuable metric."