Fighting Alert Fatigue with Actionable Intelligence
By fine-tuning security system algorithms, analysts can make alerts intelligent and useful, not merely generators of noise.
To address the barrage of advanced cyberattacks, organizations are deploying an ever-growing number of security products. Research by Enterprise Strategy Group found that 40% of organizations use 10 to 25 different security solutions, while 30% use 26 to 50. Depending on the size of the organization, that translates to tens of thousands of alerts daily.
While alerts are crucial in helping analysts identify and mitigate damages from cyberattacks, they also can be too much of a good thing. A 2017 Ponemon Institute study found that more than half of security alerts are false positives and that companies waste an average of 425 hours per week responding to them. The pressure to quickly differentiate critical alerts from the "noise" becomes overwhelming for understaffed security teams, leading to alert fatigue, frustration, burnout, and desensitization.
Worse, when analysts are unable to open and respond to alerts, many may be left unread, or marked as "read" or "closed" without being addressed. Analysts may also grow weary of chasing down red herrings and start to ignore alerts altogether. When analysts are desensitized to false positives or overwhelmed by the sheer volume of alerts, the organization risks missing a harmful threat that slips through its defenses. The 2014 Target breach and the 2015 Sony breach are two prominent examples of this problem.
Context Is Key
Preventing alert fatigue requires proactive fine-tuning of system alert algorithms by security analysts, who can leverage their skill sets, experience, and knowledge of their environment to add context so that alerts are intelligent and useful, not merely generators of noise.
For example, since valid users occasionally make mistakes logging in, you may not want to receive an alert for every failed login. Instead, you can add intelligence to the algorithm triggering alerts for multiple failed logins, first by grouping them by a specific source, such as a specific IP address logging in and failing multiple times. Then you can add more conditional logic, such as a time-based parameter set to notice something like 10 failed logins for a specific user within a five-minute period. This added context triggers alerts to the more suspicious activity that warrants investigation, saving time, preventing desensitization, and shortening the gap in response time to a valid attack.
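The grouping and time-window logic described above can be sketched as a small stream-processing rule. This is a minimal illustration, not a production detection: event fields, the threshold, and the window size are assumptions chosen to match the example in the text.

```python
from collections import defaultdict, deque

THRESHOLD = 10        # failed logins before an alert fires
WINDOW_SECONDS = 300  # five-minute window from the example

# Timestamps of recent failures, grouped by (user, source IP)
failures = defaultdict(deque)

def process_login_event(timestamp, user, source_ip, success):
    """Return an alert dict when a (user, source_ip) pair exceeds
    THRESHOLD failed logins inside WINDOW_SECONDS; otherwise None."""
    if success:
        return None
    window = failures[(user, source_ip)]
    window.append(timestamp)
    # Discard failures that have aged out of the window
    while window and timestamp - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= THRESHOLD:
        return {"rule": "repeated-failed-logins",
                "user": user,
                "source_ip": source_ip,
                "count": len(window)}
    return None
```

A single failed login produces no alert; only a burst from the same user and source within the window does, which is exactly the context that separates a typo from a brute-force attempt.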
As another example, consider a company with a business justification for its users accessing IP addresses in China. A security team might be inclined to add a rule that displays alerts for any foreign traffic. But the better approach in this specific case might be to modify the system to check such IP addresses against a threat intelligence source and to trigger an alert if it matches. Again, this approach would limit alerts to only those that are worth investigating.
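A sketch of that threat-intelligence lookup follows, assuming the feed has been loaded into memory as a set of CIDR networks. The hard-coded networks here are documentation-range placeholders; a real deployment would refresh the set from a threat-intelligence provider.

```python
import ipaddress

# Illustrative stand-in for a threat-intelligence feed;
# in practice this would be refreshed from a provider's API.
THREAT_INTEL_NETWORKS = {
    ipaddress.ip_network("198.51.100.0/24"),
    ipaddress.ip_network("203.0.113.42/32"),
}

def should_alert(dest_ip: str) -> bool:
    """Alert only when the destination matches a known-bad network,
    rather than on every foreign connection."""
    addr = ipaddress.ip_address(dest_ip)
    return any(addr in net for net in THREAT_INTEL_NETWORKS)
```

Traffic to China that matches no feed entry stays quiet; only destinations with a threat-intelligence hit generate an alert worth investigating.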
New network, endpoint, and SIEM technologies incorporate machine learning and behavioral analysis to automate the addition of context to alerting. Artificial intelligence can proactively surface metadata and may even suggest alert rules that analysts are missing, but analysts still need to vet those rules to ensure the alerts are relevant and actionable. The strongest approach uses artificial and human intelligence in concert to generate intelligent alerts, so that analysts spend their time investigating and mitigating critical events rather than wading through noise.
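At its simplest, the behavioral-analysis idea reduces to comparing today's activity against a per-user baseline. The toy function below flags a count that sits far above the historical mean; the z-score cutoff and the notion of "count" (logins, bytes, alerts) are assumptions for illustration only.

```python
import statistics

def is_anomalous(history, today_count, z=3.0):
    """Flag a count more than z standard deviations above the
    user's historical mean -- a toy behavioral baseline."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # avoid zero-division
    return today_count > mean + z * stdev
```

Real products build far richer models, but the principle is the same: the baseline supplies the context automatically, and the analyst vets whether the resulting rule is worth alerting on.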
Blue Team Efficiencies
Intelligent alerting also greatly impacts the structure of a blue team. It allows such teams to operate more effectively and efficiently, enabling lean security organizations to make the most of limited personnel resources — a huge benefit, considering the growing talent shortage. Organizations save on costs and are able to deploy their more experienced analysts to perform other important tasks, like threat hunting and response. And, freed up from chasing down so many false positives, analysts have more time to educate themselves on emerging cybercrime trends, as well as to sharpen existing skills and learn new ones — all measures that ultimately improve an organization's security posture.
Organizations must recognize that alert fatigue has serious security implications and take the steps necessary to empower analysts to make informed decisions and take quick action through intelligent alerting. Security teams will save time and energy by responding to fewer false positives and focusing more on investigating the alerts that matter.