Security Training 101: Stop Blaming The User
To err is human, so it makes sense to quit pointing fingers and start protecting the organization from users, and users from themselves.
We've all seen them: the "don't take the bait" anti-phishing posters plastered throughout most enterprises. As companies struggle with the various forms of malicious email, from spear phishing to whaling, I've noticed security leaders beginning to emphasize monitoring and deterring email-related human mistakes. In that vein, companies are inserting cybersecurity-hygiene language into employee agreements and creating policies that punish lax security habits.
I believe it's time that we, as cybersecurity professionals, take a step back and reconsider our approach to user error. After all, to err is human, and our energies would be better served not on assessing blame but on protecting the organization from users — and users from themselves.
A user-education program is of preeminent importance. Every modern control framework, from ISO to the NIST Cybersecurity Framework, requires user education. The problem I see in today's standard corporate information security program is that user education is the first and only line of defense against many threats. For example, many companies forbid transferring personally identifiable information unencrypted, but deploy no data-loss prevention (DLP) technology to enforce that rule. Frankly, this is irresponsible. When a Social Security number is accidentally put in an email, the user gets blamed, not the information security group. This training-only strategy also creates an environment in which every user has to do the right thing, every time, without fail.
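To make the gap concrete, here is a minimal sketch of the kind of automated check a DLP gateway applies before a message ever leaves the organization. The Python below, including the hypothetical outbound_message_allowed hook, is illustrative only; real DLP products use far richer detection (number-range validation, keyword proximity, attachment and image scanning).

```python
import re

# U.S. Social Security numbers in the common XXX-XX-XXXX form.
# A toy pattern: production DLP also validates number ranges and
# looks for contextual keywords such as "SSN" nearby.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def outbound_message_allowed(body: str, encrypted: bool) -> bool:
    """Return False when an unencrypted message appears to contain an SSN."""
    if encrypted:
        return True  # policy satisfied; encrypted mail may carry PII
    return not SSN_PATTERN.search(body)

msg = "Per your request, the employee's SSN is 123-45-6789."
print(outbound_message_allowed(msg, encrypted=False))  # False -> block or quarantine
print(outbound_message_allowed(msg, encrypted=True))   # True  -> allowed
```

With even this crude control in place, the user's slip is caught by machinery instead of becoming a reportable incident and a reprimand.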
Users Make Mistakes: Be Prepared for It
Several months ago, I witnessed a Fortune 250 CISO dress down his director of governance, risk, and compliance because a recent audit found sensitive information on a shared resource not designed for that purpose. Immediately, the CISO and the director discussed implementing new information-classification training requirements for users and a scanning program to find any mistakes by other users, who would also need more training. This line of thinking appears to be common in less-sophisticated enterprises. No discussion of preventive techniques, only detection and blame.
A common methodology in user-experience testing is to pretend the user is drunk: if a drunk user can navigate your application, a sober one can easily do the same. The same methodology should apply to cybersecurity. As a security professional community, we have failed by counting on users to constantly do the right thing. Our focus should be not on eliminating human error, which is impossible, but on building controls that limit the damage when errors inevitably occur.
As a consultant, I have reviewed hundreds of presentations for boards of directors throughout my career. Many CISOs struggle with establishing board-appropriate metrics: they wonder about the right level of reporting detail to include and how much board members will understand. But I can always count on the phishing test PowerPoint slide to appear during a presentation. "How many clicks this quarter versus last quarter? How many repeat offenders, even after training? After we introduced training, the click rate dropped from 44% to 32%." It's amazing how similar these slides are across different companies, regardless of the industry.
Typically, I see companies with pre-training click rates in the 20% to 30% range improve significantly after several quarters of effort. The absolute best training programs I've seen, at security-conscious companies, produce results in the 2% to 3% range. Remarkable as that is, it's still too high when a single administrator falling for a scam can compromise the enterprise. After all, 2% of 50,000 users is still 1,000 users.
In my opinion, the phishing-test click rate is a terrible metric to report. It assumes the user is responsible for phishing-related failures, and it takes the focus off developing reliable technical controls.
I would much rather see companies move the focus to detection and instead track their phishing report rate: how many users reported a test phishing email to the security group. Improving the number of users who report phishing emails creates a large "human sensor" network to support the information security operations center. Recently, I worked with a company that has seen great results, with fewer incidents, using this model. The approach also has the added benefit of letting the information security team work with carrots, rewarding users who report, rather than sticks, punishing those who click through.
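As a small sketch of what this shift looks like in practice, the Python below computes both metrics from the results of a simulated campaign. The PhishTestResult record and the sample data are hypothetical; any phishing-simulation platform that exports per-user click and report events could feed it.

```python
from dataclasses import dataclass

@dataclass
class PhishTestResult:
    user: str
    clicked: bool   # user clicked the simulated lure
    reported: bool  # user forwarded the lure to the security group

def campaign_metrics(results: list[PhishTestResult]) -> dict[str, float]:
    """Compute the familiar click rate alongside the report rate."""
    total = len(results)
    return {
        "click_rate": sum(r.clicked for r in results) / total,
        "report_rate": sum(r.reported for r in results) / total,
    }

results = [
    PhishTestResult("alice", clicked=False, reported=True),
    PhishTestResult("bob",   clicked=True,  reported=False),
    PhishTestResult("carol", clicked=False, reported=True),
    PhishTestResult("dave",  clicked=False, reported=False),
]
print(campaign_metrics(results))  # {'click_rate': 0.25, 'report_rate': 0.5}
```

Watching the report rate climb quarter over quarter rewards the behavior you actually want, instead of merely counting failures.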
Ultimately, user errors should be classified into two categories: (1) mistakes anyone can make, and (2) mistakes no one should make. Training programs and detection techniques should be focused on the second category. As a community, we should focus on preventing the first, and to accomplish that, we must move beyond blaming users and accept accountability as security's gatekeepers.