It's Time to Dump The 'Insider Threat'
Blaming the "insider threat" merely hides your real security risks
When my fine editor (Tim Wilson) asked me to write a series of posts on this site, the conversation went something like this:
Tim: Hey, Rich, can you help contribute to our Insider Threat site? We'd love you to do a series of posts over the next few months.
Me: Sure. By "Insider Threat" do you mean "internal data security?"
Tim: Yep.
Me: OK, but I really hate that term. Can we call it something else?
Tim: It's one of our most popular sites. No.
Me: OK, but can I rant on how stupid a term it is?
Tim: Sure. If it makes you feel better.
I've never been a fan of the term "insider threat" because I think it actually distracts us from properly characterizing and focusing on the problem. For many years, it meant a rogue internal user, and that's still how many people use it. But the problem is that for every Bradley Manning (Wikileaks), there might be hundreds of Albert Gonzaleses trying to crack your perimeter.
Thus, I find it far more useful to characterize the threats we are dealing with more precisely:
>> Rogue employees: trusted individuals who exceed their authority for personal gain or to deliberately damage the organization.
>> Accidental disclosures: trusted individuals who accidentally damage the organization through inadvertent misuse of data.
>> Risky business process: a potential leak due to a business process that is either poorly secured or against policy (but for legitimate business reasons).
>> External attacker on the inside: an external attacker who has penetrated the organization and is operating internally. This threat actor might have compromised a trusted account and thus appear to be an internal user.
Although we use some of the same technologies to address all four of these threats, the implementation details can vary widely. That's why I prefer an overall term like "data protection" (or information-centric security) to the nebulous "insider threat."
For example, when dealing with potential rogue employees, you tend to rely more on monitoring technologies over time. You don't want to interfere with legitimate business activities, so you use policies in tools like DLP to track mishandling of sensitive information. How the employee extracts the data also tends to follow a pattern: they aren't necessarily technically proficient, they often rely on USB storage, CD/DVD, or private email to extract data, and they usually know the data they want before the attack.
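As a rough illustration, here is a minimal Python sketch of the kind of policy logic I'm describing. The event fields, data tags, and channel names are hypothetical stand-ins, not the API of any real DLP product.

```python
# Minimal sketch of a DLP-style policy for rogue-employee exfiltration channels.
# The event structure, tags, and channel names are hypothetical examples only.

SENSITIVE_TAGS = {"customer_pii", "source_code", "financials"}
RISKY_CHANNELS = {"usb_storage", "optical_media", "personal_webmail"}

def flag_event(event: dict) -> bool:
    """Return True if tagged data moved over a channel rogue employees tend to use."""
    return (
        event.get("data_tag") in SENSITIVE_TAGS
        and event.get("channel") in RISKY_CHANNELS
    )

# A tagged customer list copied to a USB drive gets flagged for review;
# the same file placed on a sanctioned corporate share does not.
print(flag_event({"user": "jdoe", "data_tag": "customer_pii", "channel": "usb_storage"}))     # True
print(flag_event({"user": "jdoe", "data_tag": "customer_pii", "channel": "corporate_share"}))  # False
```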
No, I don't have numbers to back this up (I wish I did), but that's what I most often hear from folks who have to deal with these incidents.
An external attacker will exhibit some of the same behavior, but is likely more technically proficient, will often (though not always) target different data, might leave signs of "hunting around," will access a different set of systems, and tends to use different extraction techniques.
I'm not saying you will use this info to build out some super complex set of correlation rules, but where you drop in your security controls for each risk is probably going to be different. For example, USB monitoring/blocking, Web filtering, and Web/email DLP will stop a lot of rogue employees. For an external attacker, you might find file-activity monitoring and egress filtering more effective.
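To make the point concrete, here is a minimal sketch of that mapping; the threat labels and control names are just illustrative categories drawn from the examples above, not product recommendations.

```python
# Rough sketch of how control emphasis differs by threat, following the examples
# in the paragraph above. Labels are illustrative, not specific products.

CONTROLS_BY_THREAT = {
    "rogue_employee":    ["usb_monitoring_blocking", "web_filtering", "web_email_dlp"],
    "external_attacker": ["file_activity_monitoring", "egress_filtering"],
}

def controls_for(threat: str) -> list[str]:
    """Return the controls to emphasize first for a given threat category."""
    return CONTROLS_BY_THREAT.get(threat, [])

print(controls_for("external_attacker"))  # ['file_activity_monitoring', 'egress_filtering']
```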
"Insider threat" doesn't make much sense because when you start trying to address your risks, a bunch of different ones all involve something going on inside your perimeter. To really tackle them, you need to break them apart, prioritize, and use both different security tools or different settings on the same tools.
I feel better now.
Rich Mogull is founder of Securosis LLC and a former security industry analyst for Gartner Inc.