Encoding the Analyst: Why AI Security Tools Are Thinking Like an Expert – Only Faster
Despite our best efforts, human defenders simply cannot process information at machine speed — and cybercriminals are taking advantage. When human knowledge meets AI's precision, Cyber AI can augment the human at every stage of safeguarding the digital business.
We're firmly in a brave new world of cyber defense. Soldiers now fight with ones and zeros, and the digital enterprise is the new battleground. Both sides are arming up, trying to stay one step ahead of their opponent. But despite our best efforts, human defenders simply cannot process information at machine speed — and cybercriminals are taking advantage.
Whereas security teams take an average of 196 days to identify a data breach, modern strains of ransomware can encrypt an entire digital infrastructure in minutes, a disparity that helps explain why the average data breach costs US businesses $3.92 million. Neither humans nor machines can overcome this fundamental challenge — at least, not alone. Rather, the solution requires synthesizing the intuition and knowledge of human professionals with the speed and precision of artificial intelligence.
Information overload
For one, investigating threats takes time, a resource increasingly in short supply for the teams tasked with containing them. When confronted with a fast-acting threat, security professionals have mere moments to discern its nature and assess what response is necessary. And yet identifying this pressing threat amongst the countless alerts generated by an organization's numerous tools is like finding a needle in a haystack.
It's no wonder that nearly three-quarters of security teams report alert fatigue. Between managing various security tools, triaging incoming alerts, and attempting to respond to threats at the speed at which cybercriminals target businesses, analysts are racing to keep up. By the time an analyst encounters a genuine threat, they may have already run out of time.
The abundance of alerts is due in part to the intrinsic shortcomings of conventional security tools, which rely on black-and-white "rules" to detect threats. Such rule-based tools are limited to two equally suboptimal strategies: either the rules they use to trigger alerts are extremely specific, flagging only a limited number of predefined threats, or they cast a wide net, catching lots of threats but generating a huge number of false positives. Most tools opt for the latter approach, leaving urgent security incidents buried under a mountain of irrelevant information.
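To make that trade-off concrete, here is a minimal, hypothetical sketch of the two rule-based strategies described above. The signature list, the 10 MB threshold, and the event fields are invented for illustration and do not describe any vendor's actual detection logic.

```python
# Hypothetical sketch of the two rule-based strategies described above.
# The signature list, threshold, and event fields are invented for illustration.

KNOWN_BAD_HASHES = {"0123abcd0123abcd0123abcd0123abcd"}  # narrow: predefined threats only

def narrow_rule(event):
    """Fires only on known signatures -- precise, but blind to anything novel."""
    return event.get("file_hash") in KNOWN_BAD_HASHES

def broad_rule(event):
    """Fires on any large outbound transfer -- casts a wide net, floods analysts."""
    return event.get("bytes_out", 0) > 10_000_000  # arbitrary 10 MB cut-off

events = [
    {"user": "intern01",   "bytes_out": 55_000_000, "file_hash": "ab12"},  # genuinely suspicious
    {"user": "backup-svc", "bytes_out": 80_000_000, "file_hash": "cd34"},  # routine nightly backup
]

for event in events:
    if broad_rule(event) and not narrow_rule(event):
        # Both events trigger the broad rule; one is a false positive an analyst must triage.
        print(f"ALERT: large outbound transfer by {event['user']}")
```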
Piecing the puzzle together
Further complicating matters is the fact that these conventional tools are, for the most part, designed to protect individual devices and applications, rather than an entire business holistically. This reality leaves the majority of security teams overwhelmed by point solutions that can detect threats to email, cloud, or IoT, but which fail to provide a complete understanding of a business's vulnerabilities.
This dynamic understanding is critical to differentiate a genuine threat from the noise of a network. A normal data transfer for an executive could indicate an insider threat for an intern, and normal communications for a CCTV camera may be highly abnormal for a video-conferencing camera. That nuance can't be captured without self-learning cyber AI.
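A minimal sketch of that idea, assuming a per-entity baseline of daily outbound data volume: the figures, the z-score test, and the threshold below are illustrative stand-ins, not a description of any product's actual model.

```python
# Illustrative per-entity baselining: the same transfer volume that is normal for
# one user can be highly anomalous for another. Numbers and threshold are invented.
from statistics import mean, stdev

daily_mb_history = {
    "executive": [900, 1100, 1000, 950, 1050],  # large transfers are routine
    "intern":    [5, 8, 6, 7, 5],               # large transfers are not
}

def is_anomalous(entity, todays_mb, z_threshold=3.0):
    """Flag activity only relative to this entity's own learned baseline."""
    baseline = daily_mb_history[entity]
    mu, sigma = mean(baseline), stdev(baseline)
    z_score = (todays_mb - mu) / (sigma or 1.0)
    return z_score > z_threshold

print(is_anomalous("executive", 1000))  # False: unremarkable for an executive
print(is_anomalous("intern", 1000))     # True: wildly abnormal for an intern
```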
Just one advanced threat can generate dozens of alerts across these numerous point solutions. Piecing these alerts together well enough to understand and respond to the threat can take days, even for experienced professionals. Security teams need technology that is not only capable of understanding what is normal for each unique user across the entire digital infrastructure, instead of applying uniform rules to individual devices, but that can also help teams piece these alerts together.
Where human meets machine
In the face of complex digital infrastructures, advanced attacks and a multitude of alerts, humans can't be expected to keep up.
Through its ability to learn "normal" for each unique user within a business, Bayesian AI can correlate hundreds of weak indicators of compromise to avoid false positive alerts, automatically prioritizing threats and allowing for rapid triaging. While AI offers speed, scale and precision, human intuition and knowledge are still critical to effectively piece together the story of an attack, which is why the Cyber AI Analyst learned from more than a hundred world-class human analysts for three years.
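As a rough illustration of how weak indicators can be combined rather than alerted on individually, here is a naive-Bayes-style sketch. The prior, the indicators, and their likelihoods are hypothetical values chosen for the example, not the model the article describes.

```python
# Hypothetical naive-Bayes-style fusion of weak indicators into one prioritized score.
# The prior and likelihoods are invented for illustration.
import math

PRIOR = 0.001  # assumed prior probability that a given device is compromised

# P(indicator observed | compromised), P(indicator observed | benign)
INDICATORS = {
    "login_at_unusual_hour":    (0.60, 0.05),
    "connection_to_new_domain": (0.50, 0.05),
    "small_steady_upload":      (0.40, 0.04),
}

def posterior_compromise(observed):
    """Combine observed indicators via Bayes' rule, assuming independence."""
    log_odds = math.log(PRIOR / (1 - PRIOR))
    for name in observed:
        p_compromised, p_benign = INDICATORS[name]
        log_odds += math.log(p_compromised / p_benign)
    return 1 / (1 + math.exp(-log_odds))

# One weak indicator alone stays below any sensible alert threshold...
print(round(posterior_compromise(["login_at_unusual_hour"]), 3))  # ~0.012
# ...but several weak indicators together push the device to the top of the queue.
print(round(posterior_compromise(list(INDICATORS)), 3))           # ~0.55
```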
The AI Analyst also leverages unsupervised learning to "reason" on its own, functionally "thinking" like an analyst. Based on available evidence, it creates a hypothesis and then tests it, repeating this process as many times as it needs to arrive at a conclusion and then communicating that conclusion in the form of an easily understood narrative. This all happens at machine speed, buying back valuable time for security teams.
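The loop might look something like the following sketch, in which candidate explanations are tested against evidence one round at a time until a single hypothesis survives. The hypotheses, checks, and alert fields are invented placeholders rather than the product's actual reasoning engine.

```python
# Illustrative hypothesize-and-test loop: each surviving hypothesis is tested
# against one more piece of evidence per round. All details are invented.

HYPOTHESES = {
    "data exfiltration": [
        lambda alerts: any(a["type"] == "large_upload" for a in alerts),
        lambda alerts: any(a["dest"] == "external" for a in alerts),
    ],
    "benign backup job": [
        lambda alerts: any(a["type"] == "large_upload" for a in alerts),
        lambda alerts: all(a["dest"] == "internal" for a in alerts),
    ],
}

def investigate(alerts):
    """Discard hypotheses that fail a test; stop when one explanation remains."""
    surviving = {name: list(checks) for name, checks in HYPOTHESES.items()}
    while len(surviving) > 1 and any(surviving.values()):
        for name, checks in list(surviving.items()):
            if checks and not checks.pop(0)(alerts):
                del surviving[name]  # evidence contradicts this hypothesis
    if not surviving:
        return "No hypothesis fits the evidence; escalate to a human analyst."
    return f"Activity is most consistent with: {next(iter(surviving))}."

alerts = [{"type": "large_upload", "dest": "external", "user": "intern01"}]
print(investigate(alerts))  # -> Activity is most consistent with: data exfiltration.
```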
Accelerating time to meaning with AI
The World Economic Forum estimates that by 2020, the world will have lost $3 trillion to cybercrime. In the last year, a third of businesses detected that they had been attacked. But those are only the incidents that have been identified; countless breaches go undetected and uninvestigated across companies.
We cannot keep throwing more security tools or more security analysts at the same problems and expect to solve them. Security workflows are long overdue for an update. AI can close the "time to meaning" gap, sifting through alerts to compile a prioritized, actionable understanding of the most dangerous threats. It can investigate numerous threats at once and come to intelligent conclusions, enabling humans to focus their time on critical, high-level tasks. When human knowledge meets AI's precision, Cyber AI can augment the human at every stage of safeguarding the digital business.
— Justin Fier is Director of Threat Intelligence & Analytics at Darktrace.