
Human-Centric Security Model Meets People Where They Are

Instead of fighting workarounds that compromise security, a human-centered system fixes the process issues that prompt people to work dangerously.

Karen D. Schwartz, Contributing Writer

December 7, 2023


Be honest: If you were racing against an important deadline, would you knowingly bypass your company's security rules to get the job done? If you answered "yes," you have plenty of company. According to Gartner's "Drivers of Secure Behavior" survey, 93% of employees who behave insecurely do so knowingly.

With so much public knowledge about the consequences of circumventing security policies, why do employees do it? Usually it's because it's the path of least resistance.

"In most companies you probably have to authenticate not only with a password, but with multifactor authentication. While it's much more secure than passwords alone, it's another thing employees have to do," explains Chris Mixter, a vice president analyst at Gartner. "In general, cybersecurity puts control in place that they can deliver at scale, but employees experience a lot of friction in complying, so they find ways around it."

The impact of friction is lending prominence to a new way of attacking the cybersecurity problem: putting humans squarely in the center of the mix.

The Many Paths to Human-Centric Security

Human-centric security considers people's behaviors, needs, and limitations at all points — not only in the incident response plan, but day to day as issues arise. That means writing readable policies, reducing friction at as many points as possible, lowering complexity in security-related processes, offering positive reinforcement instead of punishment, and helping employees when they need it without judgment.

Gartner predicts that by 2027, half of CISOs will have adopted human-centric security practices to reduce cybersecurity operational friction. And by 2030, 80% of enterprises will have a formally defined and staffed human risk management program, up from 20% in 2022.

Centering people is the approach used by Random Timer, a company that makes a productivity app of the same name. Traditionally, security has been technology- and policy-driven, with too little consideration of the human element, which can make it feel restrictive and frustrating for end users, explains company founder Matthew Anderson.

"So we try to take a human-centric approach," he says. "For example, when we were implementing a new two-factor authentication system, we spent a lot of time talking to employees about what they liked and didn't like about our old system. We used that feedback to choose a solution that would address their biggest pain points around convenience and usability."

By far, friction is the biggest enemy of secure behavior. And it's rampant: A Gartner report recently found that more than one in three employees say they find cybersecurity controls and policies hard to adhere to, unreasonable for their roles, and in conflict with their work objectives.

Technology-focused approaches help reduce friction, but they can't do the whole job. For example, browser security and passwordless access are good steps because the user doesn't even have to think about them. But many companies still aren't adopting these technologies, and even when they do, the tech doesn't always work well with the decades-old systems employees still rely on to do their jobs.

These technologies also still cause friction, in their own ways. For example, a secure browser can block a lot of bad things, but the security team has to "allow" everything. That means if a user wants to visit a new website, they have to contact security to "allow-list" it.
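To make that friction concrete, here is a minimal sketch of such an allow-list gate, in Python; the domain list and function names are illustrative assumptions, not any particular browser vendor's API.

from urllib.parse import urlparse

# Security-maintained list of approved sites (illustrative values).
ALLOWED_DOMAINS = {"intranet.example.com", "docs.example.com"}

def check_navigation(url: str) -> str:
    """Allow pre-approved domains; everything else becomes a request to security."""
    domain = urlparse(url).hostname or ""
    if domain in ALLOWED_DOMAINS:
        return "allow"
    # Every new site a user needs turns into a ticket for the security team.
    return f"blocked: ask security to allow-list {domain}"

print(check_navigation("https://docs.example.com/policy"))        # allow
print(check_navigation("https://new-vendor.example.net/pricing"))  # blocked: ...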

Technology-based options can help, though. One is a pop-up prompt triggered by behavioral cues.

"If I'm sending an email to someone I've never emailed before, the system could be set up so I get an alert that's kind of like a modern check-engine light, where it's used as a warning to potentially change behavior," says Matthew Miller, a principal in the cybersecurity services area at KPMG. "It's embedding technology from a behavioral lens instead of a compliance lens, and it's not admonishing the user."

Understand Your Users

It's also critical to understand your users, Anderson adds. That means talking directly to users through interviews, observations, and surveys. With that feedback, you can prototype and release minimum viable products, gathering even more feedback to refine the user experience. He even suggests bringing in usability experts to advocate for employees.

Understanding the behaviors and motivations of users is critical, Miller agrees. He gives an example from his time working at a bank, long enough ago that the cloud was still a new concept, when several thousand interns would work there every summer. Many of them were given projects using data, data analytics, and word clouds, so the company blocked a lot of the sites that would have allowed them to upload their results publicly, to protect the company's data.

His team found that one of the interns had uploaded files to the cloud.

"When asked about why and how he did this and [told] that he wasn't in trouble, he said that after running into blocked site after blocked site, he finally found one that wasn't blocked, so he figured that it must be the approved site to upload data," Miller explains.

Some companies take understanding the user experience to the extreme, but it yields results. For example, Santander, the largest bank in Spain, taught its cybersecurity staff the principles of user experience, which is typically the domain of developers and customer-facing employees. Now when an employee says, "I can't" or violates policy, cybersecurity personnel can ask user experience questions. Instead of asking why they did something, they might ask how often they have to do it, whether it's hard to do, and if the task is essential to their workflow. With that information, the cybersecurity team may be able to change the process — or eliminate it from the workflow if it's not essential.

Of course, there is always a training component, but thinking about training differently is key to the human-centric mindset. It means tailoring training to individual roles.

"Different types of employees interact in different ways with technology, customers, and data, so you have to get very specific in helping people develop the skills they need and establishing the behaviors that will then manage risk," Miller says.

Build a Culture of 'Yes'

If you expect employees to act more securely, it's important never to say "no." If you do, they will simply find a way to circumvent the system, Mixter says.

Johnson & Johnson, for example, turned all of the forbidden activities from its negative acceptable use policy into a positive self-service assessment instead. Based on the employees' answers, the automated system will direct them to a safe workaround. If the system determines that an employee is doing something new, it might send a training video in response. If the answers reveal that an employee is planning on using proprietary data incorrectly, the system might send the employee a synthetic data repository, which is based on real data sets but doesn't include actual proprietary data.

Companies that actually ask for feedback often do better, Mixter adds. SRI, a tech company based in California, puts comment boxes in its policies. That paid off with the insight that cyber policies aren't very readable to those outside the cyber domain, which the company says has led to positive changes.

In the end, it comes down to the typical people/process/technology triangle, with people at the center.

"Technology provides the foundation, but process and philosophy drive success," Anderson says. "Fundamentally it requires a culture embracing user-centered design, not just new tech tools."

About the Author

Karen D. Schwartz, Contributing Writer, Dark Reading

Karen D. Schwartz is a technology and business writer with more than 20 years of experience. She has written on a broad range of technology topics for publications including ITProToday, CIO, InformationWeek, GCN, FCW, FedTech, BizTech, eWeek, and Government Executive.

