
Oh, the Humanity! How to Make Humans Part of Cybersecurity Design

Government and industry want to jump-start the conversation around "human-centric cybersecurity" to boost the usability and effectiveness of security products and services.


Many security teams view their nonsecurity coworkers as the potential weak point in any cybersecurity plan, so they bring in technology to mitigate their inevitable poor choices. The viewpoint is understandable: The "human element" contributed to 68% of breaches in 2023 and 74% of breaches in 2022, according to Verizon's "Data Breach Investigations Report."

Yet the approach is failing companies that want to improve their cybersecurity, experts say. In a US government handout titled "Users Are Not Stupid," the National Institute of Standards and Technology (NIST) urges organizations to avoid creating insider threats through poor usability, excessive layers of security, and a failure to consider user feedback.

Instead, organizations should pursue a human-centric cybersecurity (HCC) approach, focusing on processes and products that account for users' needs and motivations and incentivize secure behaviors. An HCC program includes security-awareness and anti-phishing training, adds user feedback channels to security products, and aims to reduce the security responsibility placed on the average person. Tools that are critical for companies taking an HCC approach include security monitoring and user/entity behavior analytics (UEBA).
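
To make the UEBA piece of that toolset concrete, below is a minimal sketch of the baseline-and-deviation check such tools automate at much larger scale. The event counts, field names, and three-sigma threshold are illustrative assumptions, not any particular vendor's implementation.

```python
# Toy UEBA-style check: flag users whose activity today deviates sharply
# from their own historical baseline. All data and thresholds are hypothetical.
from statistics import mean, stdev

# Hypothetical history: logins per day for each user over the past two weeks
login_history = {
    "alice": [4, 5, 3, 6, 4, 5, 4, 6, 5, 4, 5, 3, 4, 5],
    "bob":   [2, 1, 3, 2, 2, 1, 2, 3, 2, 2, 1, 2, 2, 3],
}

# Hypothetical activity observed today
logins_today = {"alice": 5, "bob": 27}

def flag_anomalies(history, today, sigmas=3.0):
    """Return users whose count today exceeds their baseline by `sigmas` deviations."""
    flagged = []
    for user, counts in history.items():
        baseline, spread = mean(counts), stdev(counts)
        if today.get(user, 0) > baseline + sigmas * spread:
            flagged.append(user)
    return flagged

print(flag_anomalies(login_history, logins_today))  # ['bob'] under these assumptions
```

Real UEBA products apply far richer models across many event types, but the principle is the same: measure behavior against a per-user baseline rather than a one-size-fits-all rule.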

But HCC goes beyond just looking for user-centric or user-friendly security products, says Julie Haney, HCC program lead at NIST's Information Technology Laboratory.


"It's really all about putting people at the forefront when we're designing and implementing security," Haney says. "If you don't have human-centered cybersecurity where you're considering that person, then you have unusable security solutions — so people are more prone to making errors or making risky decisions or implementing less secure work-arounds because they just need to get their jobs done."

Last month NIST launched its Human-Centered Cybersecurity Community of Interest (COI) to bring together practitioners, academics, and policymakers to discuss how to make security more effective and user-friendly.

Cybersecurity Power to the People

The government agency isn't the only organization to focus on the human aspect of security. Increasingly, HCC has become a focus of enterprise security teams, with business intelligence firm Gartner expecting CISOs at half of large enterprises to adopt human-centric practices and designs for cybersecurity by 2027. In fact, Gartner listed human-centric security design as a top cybersecurity trend last year; the firm has since renamed the concept but continued to identify security behavior and culture programs (SBCPs) as a top cybersecurity trend in 2024.


Security teams need to stop talking at other workers and instead talk to them and work with them to build a cybersecurity-focused culture, says Victoria Cason, a senior principal analyst at Gartner.

"Taking a human-centric approach is recognizing that we're not dealing with an inanimate object," she says. "We're dealing with a human that has different behaviors, different actions, different needs, and really trying to address their wants, desires, and behaviors when it comes to best security practices, as opposed to just telling them what to do."

Among the steps that Gartner identifies as part of an SBCP are conducting threat simulations, adding automation and data analytics to help users make secure choices, rewarding workers for reporting potential security incidents, and tracking metrics to demonstrate the SBCP's impact. Nearly half of companies focused on SBCPs are taking each of those steps, according to Gartner data.
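
As a rough illustration of the metrics-tracking step, the sketch below computes report and click rates from a phishing simulation; the departments and figures are made-up assumptions used only to show the calculation.

```python
# Illustrative SBCP metric: phishing-simulation report rate vs. click rate.
# All departments and numbers below are hypothetical.
simulation_results = {
    # department: (emails sent, suspicious-email reports, link clicks)
    "finance":     (120, 54, 9),
    "engineering": (200, 150, 4),
    "sales":       (80, 20, 16),
}

for dept, (sent, reported, clicked) in simulation_results.items():
    report_rate = reported / sent
    click_rate = clicked / sent
    # A rising report rate alongside a falling click rate over successive
    # simulations is the behavior-change signal a program would want to show.
    print(f"{dept:>12}: reported {report_rate:.0%}, clicked {click_rate:.0%}")
```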

Minimizing cybersecurity-induced friction will not only improve companies' security posture but also reduce the stress that comes with a traditionally adversarial job. Gartner expects that half of cybersecurity leaders will change jobs between 2023 and 2025, with a quarter of those who exit their positions leaving the industry for good due to stress.


HCC: A Work in Progress

Currently, there is no standard definition for HCC, which is partly why NIST is pushing for more research into how companies can better support the security growth of their workers. HCC broadly includes workers' attitudes about cybersecurity, their training, the usability of security products, and the creation of policies.

The latest "Federal Cybersecurity Research and Development Plan," published by the Biden administration in December 2023, identifies HCC as a priority for protecting the nation. Among the research areas espoused by the plan are finding models to determine the impacts of digital technologies and how their security properties can be validated.

"There is a need to reduce the burden of cybersecurity requirements on people, organizations, communities, and society, and to improve the usability and the user experience of digital technologies and systems," the plan states. "Research on human-centered computing aspects has indicated that including end users early in the process of design and development creates more usable systems and an improved user experience."

Gartner has come up with its own approach to implementing SBCPs, which it dubs the PIPE framework, short for practices, influences, platforms, and enablers.

"Most traditional awareness programs just rely on yearly or quarterly training, but that doesn't address the root cause of behavior," says Gartner's Cason. "So going beyond just the traditional computer-based training and phishing simulation, leveraging existing tools and capabilities like identity and access management (IAM) or security monitoring, to even emerging tools like AI can increase engagement and efficiency."

The most significant product category to encapsulate HCC may be human risk management, an evolution of the security-awareness and training market that adds adaptive human protection, according to research firm Forrester. As opposed to the checkbox compliance of many security-awareness training programs, human risk management focuses on positively educating workers while reducing the risk posed by their actions, according to a Forrester note published in February.

Employees Do Worry Over Cybersecurity

For the most part, workers are cognizant of their critical role in protecting the business. They worry that they could be the cause of the next breach, with a third of workers (34%) concerned that they may take an action that leaves their organizations vulnerable, according to a survey of 1,000 workers by consultancy Ernst & Young.

Companies should work with those users and find ways to direct those concerns into productive action, rather than failing to support them and then blaming them when something goes wrong, says NIST's Haney.

"If someone clicks on that phishing link, organizations tend to put all the blame on the employee, but they're not actually looking up the chain to all of the procedural things, the process things, the people things that maybe went wrong in the organization before that," she says. "It's not just about the fault of the person at the end of the chain — there's often a lot of other things that have gone wrong before that."

Cybersecurity professionals should strive to develop a culture and mindset that does not label users as the enemy or the weakest link. Having conversations with users can uncover problems in the way security is being implemented, while empowering users to report issues can lead to earlier detection.

Finally, emerging products, such as human risk analysis services, should be adopted carefully and with the right expectations. Tracking users who make repeated mistakes can be useful but should not be punitive; rather, the approach should inform security teams about procedural problems or point to additional training opportunities, Haney says.

"The data can be useful, but you have to be really careful to not, you know, start labeling people [as] a bad employee, or they're bad at security, and this is a person that's good at security," she says. "So there's that fine line that you have to walk."



About the Author

Robert Lemos, Contributing Writer

Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT's Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline Journalism (Online) in 2003 for coverage of the Blaster worm. Crunches numbers on various trends using Python and R. Recent reports include analyses of the shortage in cybersecurity workers and annual vulnerability trends.
