5 Lessons From The FBI Insider Threat Program

Finding ways to improve enterprise insider threat detection and deterrence

Dark Reading Staff, Dark Reading

March 1, 2013

6 Min Read

SAN FRANCISCO -- RSA CONFERENCE 2013 -- Insider threats may not have garnered the same sexy headlines that APTs did at this year's RSA Conference. But two presenters from the Federal Bureau of Investigation (FBI) swung the spotlight back onto insiders during a session this week, offering enterprise security practitioners lessons the agency has learned over more than a decade of fine-tuning its efforts to sniff out malicious insiders in the fallout from the disastrous Robert Hanssen espionage case.

1. Insider threats are not hackers.
People often think of the most dangerous insiders as hackers running special technology tools on internal networks. Not so, says Patrick Reidy, CISO for the FBI.

"You're dealing with authorized users doing authorized things for malicious purposes," he says. "In fact, going over 20 years of espionage cases, none of those involve people having to do something like run hacking tools or escalate their privileges for purposes of espionage."

Reidy says that just under a quarter of insider incidents tracked each year come at the hands of accidental insiders, or what he calls the "knucklehead problem." Yet at the FBI his insider threat team spends 35 percent of its time dealing with these incidents. He believes the FBI and other organizations should look for ways to "automate out of this problem set" by focusing on better user education. Cutting down on those simpler incidents gives insider threat teams more time to concentrate on the more complex problem of malicious insiders, he says.

2. Insider threat is not a technical or "cybersecurity" issue alone.
Unlike many other issues in information assurance, the risk from insider threats is not a technical problem, but a people-centric problem, says Kate Randal, insider threat analyst and lead researcher for the FBI.

"So you have to look for a people centric solution," she says. "People are multidimensional, so what you have to do is take a multidisciplinary approach."

This starts with identifying and examining your internal people, your likely enemies, and the data that would be at risk. In particular, understanding who your people really are means examining them from three important informational angles: cyber, contextual, and psychosocial.

"The combination of these three things is what's most powerful about this methodology," Randal says. "In an ideal world we'd want to collect as much about these areas [as possible], but that's never going to happen. So what's important is adopting a method working with your legal and managerial departments to figure out what works best within the limitations of your environment."

3. A good insider threat program should focus on deterrence, not detection.
For a time the FBI put considerable effort into building analytics to predict insider behavior before malicious activity occurred. Rather than ending up with a powerful tool that stopped criminals before they did damage, the agency got a system that was statistically worse than random at ferreting out bad behavior. It predicted malicious insider activity worse than Punxsutawney Phil, the groundhog of Groundhog Day fame, Reidy says.

"We would have done better hiring Punxsutawney Phil and waving him in front of someone and saying, 'Is this an insider or not an insider?'" he says.

Rather than getting wrapped up in prediction or detection, he believes organizations should start first with deterrence.

"We have to create an environment in which it is really difficult or not comfortable to be an insider," he says, explaining that the FBI has done this in a number of ways, including crowdsourcing security by allowing users to encrypt their own data, classify their own data, and come up with better ways to protect data. Additionally, the agency has found ways to create "rumble strips" in the road to let users know that the agency has these types of policies in place and that their interaction with data is being used.

4. Detection of insider threats has to use behavioral-based techniques.
Following the failure to develop effective predictive analytics, the FBI moved toward a behavioral detection methodology that has proved far more effective, Reidy says. The idea is to detect insider bad behavior closer to the "tipping point" when a good employee goes rogue.

"We look at how people operate on the system, how they look contextually, and try to build baselines and look for those anomalies," he says.

Whatever analytics an organization uses, whether print file behavior or data around file interactions, Reidy recommends gathering a minimum of six months of baseline data before attempting any detection analysis.

"Even if all you can measure is the telemetry to look at prints from a print server, you can look at things like what's the volume, how many and how big are the files, and how often do they do print," he says

5. The science of insider threat detection and deterrence is in its infancy.
According to Randal, it was bad science that led the FBI to the point where it was using worse-than-random predictive analysis. Part of the issue is that even now the science of insider threat detection and deterrence is still in its infancy. One reason for its slow growth is that much of the existing research focuses only on data from the bad guys.

"So what the FBI has done is to really try to push this diagnostic approach of collecting data from and comparing it between a group of known bad and a group of assumed good [insiders] and try to apply that methodology to those three realms [cyber, contextual and psychosocial]."

In particular, some of the research the FBI has done with regard to psychosocial diagnostic indicators has been a bit surprising, she says.

"What we learned from this study is that some of the things we thought would be the most diagnostic in terms of disgruntlement or workplace issues really weren't that much," she says, explaining that more innate psychological risk factors come into play. For example, stress from a divorce, inability to work in a team environment, and exhibiting behaviors of retaliatory behavior all scored high as risk indicators when comparing the bad insiders with the good.

While enterprises will not be able to do the same kind of psychological screening that the FBI does with its employees, there are ways to incorporate this knowledge into insider prevention programs.

"You can try to elicit this information from other avenues: observables, behavioral manifestations, making supervisors more aware of the insider threat problem, and creating an environment where they may be more willing to report some of these things as they see them," she says. "One of the best resources that your security program has is the collaboration of the HR department."


About the Author

Dark Reading Staff

Dark Reading

Dark Reading is a leading cybersecurity media site.
