Behavioral Analytics: The Future of Just-in-Time Awareness Training?
It’s high time we leveraged modern threat detection tools to keep users on the straight and narrow road of information security.
My mom bought a new car the other day, and like most new cars today, it comes equipped with all the modern bells and whistles, including driver assistance features. If she starts wandering out of her lane, beeps and flashing lights prompt her to steer back. If she gets too close to the car ahead, the car brakes automatically. Great stuff for those who aren’t always paying close attention, right?
I’d say it’s high time we brought these kinds of features into the information security space, because right now we’re trusting employees to drive our “cars” (our expensive IT infrastructure) and the precious information that flows through it. But humans are fallible, and too often they’re just not paying close enough attention. As a result, they’re getting into accidents that are costing us dearly.
The good news is that User Behavior Analytics (UBA) tools offer the promise of solving this problem, if they evolve in the right direction. These tools, which draw information from various other data-gathering systems on the market, such as security information and event management (SIEM) and data loss prevention (DLP) systems, are providing real value in identifying patterns and signs that reveal the presence of bad actors in the IT environment.
These bad actors typically take the form of malware or exploited system vulnerabilities, but they can also be insiders trying to commit malicious acts. Once identified, those bad actors can be dealt with and the risk reduced. That’s real progress, but it can go even further.
Right now, UBA and these other threat detection tools are great at identifying and addressing the symptoms of technical failure (such as system vulnerabilities), but we’ve only begun to tap their capacity to track and respond to the symptoms of human failure. This can, and I believe will, change.
For my taste, the scope of UBA needs to be broader: it should expand beyond identifying and tracking the traces left by malicious insiders, system vulnerabilities, and malware, and into tracking and then directly addressing patterns of human ignorance* and inattention. (*Author’s note: I’m using “ignorance” to mean “lack of understanding” or “unawareness,” not to imply that end users are ignorant!)
It will start when UBA takes a lesson from phishing simulations. The information security community loves phishing simulation tools – and why not? These tools do a great job at identifying employees who put the organization at risk by clicking on (fake) phishing attempts. Once you know who falls prey to phishing, you can target them with just-in-time education and (ideally) improve their performance and their ability to protect the organization. It works perfectly – or so say advocates.
However, phishing simulations have some real limitations: they’re too easy to manipulate to get the results you want; they can make employees feel “violated”; some phishing lures can spook IT or HR; and phishing is just one threat among many, after all. Despite those negatives, they do deliver just-in-time education; they are the warning light when employees drift “out of their lane.” That’s a real positive, and this is where UBA tools come in, because of their capacity to identify so many more problems.
Imagine, if you will, a robust UBA system that identifies and addresses human risks related to data classification, access controls, password reuse, remote connectivity, inappropriate use of cloud computing, and so on, coupled with correctives that guide the user back to safety. Here’s how it might work:
The first time Joe Employee saves a document to an unapproved cloud storage site (for example), he gets a system-generated pop-up that directs him to company policy on the use of cloud storage. Problem solved, 70% of the time—but not always.
So the next time he does it, the system serves up a two-minute video on the problems with unapproved cloud usage. More improvement. But Joe is among the 5% who still don’t get it, so when he does it again, the system enrolls him in a required 15-minute training course on the Acceptable Use policy.
Despite all that, there will be the 0.1% who still get it wrong, and that’s where human intervention, or even termination, might need to come into play.
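To make that pattern concrete, here’s a minimal sketch of what such escalation logic could look like. Everything in it (the function names, the ladder steps, the offense counts) is an assumption for illustration, not drawn from any real UBA product:

```python
from collections import defaultdict

# Hypothetical escalation ladder: each repeat offense for the same risk
# trigger moves the user one rung up. Step names are illustrative only.
ESCALATION_LADDER = [
    "show_policy_popup",       # 1st offense: pop-up linking to company policy
    "show_short_video",        # 2nd offense: two-minute explainer video
    "enroll_required_course",  # 3rd offense: required 15-minute training course
    "escalate_to_human",       # 4th+ offense: manager or security team steps in
]

# offense_counts[(user, trigger)] -> number of prior offenses for that pair
offense_counts = defaultdict(int)

def handle_risk_event(user: str, trigger: str) -> str:
    """Return the intervention for this user's latest risky action."""
    step = min(offense_counts[(user, trigger)], len(ESCALATION_LADDER) - 1)
    offense_counts[(user, trigger)] += 1
    return ESCALATION_LADDER[step]

# Example: Joe keeps saving documents to an unapproved cloud storage site.
for _ in range(4):
    print(handle_risk_event("joe", "unapproved_cloud_storage"))
# -> show_policy_popup, show_short_video, enroll_required_course, escalate_to_human
```

The point of the sketch is the shape, not the specifics: interventions escalate per user and per trigger, and only the small minority who exhaust the ladder ever consume a human being’s time.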
Now imagine this kind of identification and correction across the spectrum of human risk. I’m not talking about malicious acts, but rather inadvertent acts, or acts committed out of ignorance or inattention: the very same problems we too often try to correct with boring policies and lengthy security awareness training courses.
Can we “tune” UBA systems to identify these kinds of triggers? I believe we can. Pair those risk triggers with a flexible deployment of just-in-time training (see the sketch below) and you’ve created “lane assist” warnings for information security, with the added benefit of training only those who need it and not wasting the time of those who don’t.
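That “tuning” might amount to little more than a mapping layer between the events your existing tools already emit and the human-risk triggers that drive the training ladder. The event types and rule table below are pure assumptions for the sketch (no real DLP or SIEM product emits these exact names):

```python
# Illustrative "tuning" layer: translate raw events from the tools you already
# run (DLP, SIEM, CASB, and so on) into human-risk triggers. Every event type
# and trigger name here is an assumption for the sketch, not a real schema.
TRIGGER_RULES = {
    "dlp.upload_blocked":        "unapproved_cloud_storage",
    "iam.password_reused":       "password_reuse",
    "vpn.split_tunnel_detected": "risky_remote_connectivity",
    "dms.label_missing":         "unclassified_sensitive_data",
}

def classify_event(event_type: str) -> str | None:
    """Map a raw tool event onto a human-risk trigger, if it matches one."""
    return TRIGGER_RULES.get(event_type)

# Each matched trigger would then feed the escalation ladder sketched earlier
# (e.g., handle_risk_event(user, trigger)); unmatched events are ignored.
print(classify_event("dlp.upload_blocked"))  # -> unapproved_cloud_storage
print(classify_event("mail.sent"))           # -> None
```

Keeping the rules in a simple table like this is what makes the deployment “flexible”: adding a new human-risk trigger is a configuration change, not a new training program.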