How IT Teams Can Use 'Harm Reduction' for Better Cybersecurity Outcomes
At Black Hat USA, Copado's Kyle Tobener will discuss a three-pronged plan for applying this medical concept to human weaknesses in cybersecurity, from phishing to shadow IT.
August 3, 2022
It's a well-known fact that humans are — and will remain — one of the weakest links in any company's cyber defenses. Security admins have tried to help the situation through random phishing tests and training, ultimatums, eliminating local control over a given device, and even naming and shaming those unlucky souls who clicked on the wrong link in an email.
Results have been middling at best; Verizon's "2022 Data Breach Investigations Report" (DBIR) found that the vast majority of breaches involve the human element, with phishing and other social engineering among the leading causes.
Kyle Tobener, vice president and head of security and IT at Copado, says that it doesn't have to be that way. Instead, businesses can take a page from the medical community and find a much more effective approach through the principle of harm reduction. In essence, that means focusing on minimizing or mitigating the bad outcomes of risky behavior rather than attempting to eliminate the behavior entirely.
How Harm Reduction Applies to Cybersecurity
In a session next week at Black Hat USA entitled "Harm Reduction: A Framework for Effective & Compassionate Security Guidance," Tobener plans to discuss this fresh way of thinking about user behavior, education, and awareness when it comes to cyber threats.
"Harm reduction is a big topic in the healthcare space, but it hasn't really made its way into information security all that much," he tells Dark Reading, adding that as a cancer survivor and brother of someone who wrestled with substance addiction, he learned about harm reduction firsthand.
"Unfortunately, what we see is still mostly abstinence-based guidance in a lot of scenarios by security people," he says.
To illustrate the contrast between the two approaches, he uses the example of the attention-grabbing Super Bowl ad back in February from Coinbase, which featured a QR code bouncing around the screen, Pong-like.
"If you went to Twitter right after that, there were thousands of security people saying that you should never use a QR code if you don't know where that QR code's from," he says. "That guidance is not effective whatsoever. I'm sure millions of people have used a QR code, and if your focus is giving guidance that isn't practical or pragmatic, that people aren't going to follow, then it's going to be very ineffective and you're wasting an opportunity to educate those people in a way that's actually useful."
In a harm-reduction approach, the answer would have been to assume that people were going to scan such an intriguing item (indeed, QR codes are so widely used that asking people never to scan one is a non-starter) and to build a defensive strategy with that in mind.
"Educate them on what to look for once they do something like use a QR code," Tobener explains. "How do you know that the website you went to is a safe one? If you only tell people not to do something, and then they do it and they go to the website, and they're not prepared to look for red flags, they're going to be worse off than they would be."
How to Deploy Harm Reduction
In his Black Hat talk, Tobener plans to address the implementation of harm reduction in a cybersecurity context with a three-pronged approach, starting with fostering acceptance that risk-taking behaviors are here to stay.
"I think this is a very pragmatic approach that a lot of security people aren't willing to take; they come with a mindset that risk can be eradicated, which is just not realistic," he notes. "Just like the war on drugs was not effective, Prohibition was not effective, and D.A.R.E. programs and 'scared straight' were actually shown to be more harmful than helpful in kids."
After gaining buy-in from security teams and the powers that be on the impossibility of preventing risky actions, the next step is prioritizing the reduction of negative consequences from those risky behaviors and understanding which battles to fight when it comes to corporate security policies.
"For example, in an enterprise context, you might have an enterprise password manager that everyone is supposed to use," Tobener explains. "But there will be people who don't want to use the corporate-provided password manager because they're not familiar with it, and they want to use their own. Instead of making them stop what they're doing, consider whether using their own password manager is better than not using a password manager at all. In other words, are there bigger fish to fry?"
The third prong that he plans to cover in this Black Hat USA session is that of compassion.
"The final piece of the framework is kind of a weird one for cybersecurity, but it's really important in the harm reduction space: Embracing compassion while providing guidance," he says. "This one is probably the hardest concept for security people and even healthcare people to wrap their heads around, which is the idea of improving people's situations by being compassionate and supportive, even if you're supporting them in doing what you consider to be the wrong thing."
Just as social stigma makes people avoid drug treatment rather than accept it, the harsh, conflict-fraught approach some cybersecurity teams take toward users makes people less likely to want to do the right thing, he explains. For instance, in the above shadow-IT password manager example, teams could send threatening emails to offenders or even get line managers involved; or, they could work out a compromise, offer ease-of-use training, and generally take a "we're with you, not against you" tack when discussing the issue.
"By being supportive and compassionate, you show them that you accept them despite what they're doing, and that even though it's not perfect now, they have a chance to improve in the future," Tobener says. "Oftentimes, when you are compassionate with people, they will then educate themselves. And make better choices in the long run."
The session aims to give attendees practical takeaways for becoming a more effective security practitioner when it comes to managing users who aren't listening.
"I get really tired of seeing on Twitter people telling people 'do this or you deserve the consequences,'" Tobener says. "I'm trying to raise the security consciousness to a place where we stop telling people not to do things, and instead say, OK, you shouldn't do this, but if you do, here's how to do it more safely."