Scams Security Pros Almost Fell For
By working together as an industry, we can develop the technologies needed to account for human error.
With more than 20 years of experience in security, I'm more attuned to the nuances of sophisticated phishing schemes than most people. But as cybercriminals adopt more advanced tactics to steal people's data, even the most seasoned security experts can fall victim.
In 2021, more than 4,145 publicly disclosed breaches exposed information across 22 billion records. And as phishing increased in 2022, 82% of breaches involved a "human element," meaning that individuals played a role in exposing their own or their companies' data.
Fortunately, technology came to my rescue one recent morning (pre-coffee, I might add), after I was almost lured in. When I clicked the link in an email purporting to be from my credit card company, my password didn't autofill — a telltale sign that the URL didn't match any website for which I'd saved login information. I took a closer look: Sure enough, the URL was slightly off.
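For readers curious what that check looks like under the hood, here is a rough sketch of the origin matching a password manager performs before offering to fill credentials. It illustrates the general technique, not 1Password's actual code; the SavedLogin shape and the example domains are made up.

```typescript
// Rough sketch of the origin check a password manager runs before offering
// autofill. Illustrative only; SavedLogin and the example domains are invented.

interface SavedLogin {
  domain: string;   // registrable domain the credential was saved for
  username: string;
}

// Only logins whose saved domain matches the page's hostname (or a subdomain
// of it) are offered. A lookalike phishing domain matches nothing, so no
// password appears -- the telltale sign described above.
function eligibleLogins(currentHostname: string, saved: SavedLogin[]): SavedLogin[] {
  return saved.filter(login =>
    currentHostname === login.domain ||
    currentHostname.endsWith("." + login.domain)
  );
}

// The real site matches; the lookalike does not.
const vault = [{ domain: "americanexpress.com", username: "cardholder" }];
console.log(eligibleLogins("www.americanexpress.com", vault).length);    // 1
console.log(eligibleLogins("americanexpress-alerts.net", vault).length); // 0
```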
I'm not the only one on my team who's been targeted. Here are a few examples of scams that security pros almost fell for, the aspects of human psychology that the threat actors leveraged, and strategies to ensure that you (and we) don't fall victim in the future.
Exploiting "Confirmation Bias"
On the morning in question, I was expecting an email from American Express. So when I appeared to receive one, confirmation bias kicked in. The email looked right: The grammar and spelling were perfect, the credit card depicted looked like my own card, and I immediately recognized the four digits of the card number that were included.
I clicked. Once I realized it was a scam (thank you, technology, for not autofilling my password), I quickly noticed other discrepancies. The card digits shown were the first four, not the closely guarded last four. And while the color and design of my card were accurately depicted, thousands of customers no doubt carry the same card.
Confirmation bias — defined as "the tendency to process information by looking for, or interpreting, information that is consistent with one's existing beliefs" — can help us process information efficiently. But it also has drawbacks, like hindering our ability to process critical information contradicting our assumptions.
Being aware of our tendency toward confirmation bias is key. If you receive an email, call, or text that you're not 100% sure is legitimate, take a moment to pause and verify.
Preying on Our Deference to Authority
Several members of our security team recently received a text purporting to come from 1Password's CEO, Jeff Shiner. In the text, "Shiner" explained that he was heading into a meeting and unreachable by phone, but needed the recipient to "run a quick task now for progress."
"Boss alerts" like this are increasingly common. They're a form of "pretexting" — a type of social engineering attack in which criminals create a story (or pretext) to manipulate their target into sharing private information.
If you receive a text or email from your boss or another executive (text-based attacks like this are known as "smishing" because they arrive via SMS), take a deep breath and ask yourself a few questions before responding. Can you confirm the phone number? What is it requesting (or demanding) of you? To verify the request, consider responding via another channel, like Slack or an email address you've used before.
Fabricating Urgency
Another way scammers can throw us off guard is by creating a false sense of urgency. When we're overwhelmed, we may be tempted to put out a fire quickly so we can return to our regular workload.
Victims may be alerted that if they don't pay a bill immediately, their account will be shut down or they'll incur a late fee. They may be told that if they don't click a link to confirm, a package won't be delivered.
A colleague of mine almost responded to a legitimate-looking PayPal invoice claiming they owed a large sum of money for a Norton Antivirus subscription. If this were a mistake, they were told, they should respond before being charged. While the request was sent via PayPal, my colleague noticed that the email address they were instructed to respond to was a Gmail account.
Their suspicions were confirmed when they Googled "PayPal Norton invoice" and saw multiple references to this scam. To spare other victims, they alerted PayPal.
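The giveaway here is a mismatch between the brand a message claims to represent and the address it asks you to contact. Below is a hypothetical sketch of that check; the IncomingMessage fields are illustrative, not a real mail API.

```typescript
// Flag a message when the address it asks you to reply to doesn't share a
// domain with the brand it claims to come from. Illustrative fields only.

interface IncomingMessage {
  claimedBrandDomain: string; // e.g. "paypal.com", inferred from the branding
  replyTo: string;            // address the message tells you to contact
}

function replyToMismatch(msg: IncomingMessage): boolean {
  const replyDomain = (msg.replyTo.split("@").pop() ?? "").toLowerCase();
  const brand = msg.claimedBrandDomain.toLowerCase();
  return replyDomain !== brand && !replyDomain.endsWith("." + brand);
}

// The scam above: a "PayPal" invoice directing replies to a Gmail address.
console.log(replyToMismatch({
  claimedBrandDomain: "paypal.com",
  replyTo: "billing.support.team@gmail.com",
})); // true
```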
Technology Can Save the Day
While phishing training and tests can raise awareness, they can't solve the root cause of the problem. And while security experts understand the value of practices like enabling two-factor or multifactor authentication (2FA/MFA) and keeping software up to date, many consumers don't.
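To make the 2FA point concrete, here is a compact sketch of how a time-based one-time password (the rotating six-digit code an authenticator app displays) is derived under RFC 6238. It's an illustration only; the hard-coded demo secret stands in for the secret a real service provisions via a QR code.

```typescript
import { createHmac } from "node:crypto";

// Derive a time-based one-time password (TOTP, RFC 6238): HMAC the current
// 30-second time step with a shared secret, then truncate to six digits.
function totp(secret: Buffer, timeStepSeconds = 30, digits = 6, now = Date.now()): string {
  // 8-byte big-endian counter: number of time steps since the Unix epoch.
  const msg = Buffer.alloc(8);
  msg.writeBigUInt64BE(BigInt(Math.floor(now / 1000 / timeStepSeconds)));

  const hmac = createHmac("sha1", secret).update(msg).digest();

  // Dynamic truncation (RFC 4226): pick 4 bytes at an offset given by the low
  // nibble of the last byte, mask the sign bit, reduce modulo 10^digits.
  const offset = hmac[hmac.length - 1] & 0x0f;
  const code = (hmac.readUInt32BE(offset) & 0x7fffffff) % 10 ** digits;
  return code.toString().padStart(digits, "0");
}

// Demo secret for illustration only -- real apps receive theirs via QR code.
console.log(totp(Buffer.from("12345678901234567890")));
```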
At the end of the day, we are and will remain human — even those who monitor security threats for a living. Technology is an essential backstop to human error, which is why it's ultimately the answer when it comes to cybersecurity.
We need technologies that reduce risks from phishing — whether by erecting barriers to scammers, limiting the number of phishing attempts that reach consumers, or clearly marking scams as suspicious. Passkeys, which are both extremely secure and extremely easy to use, are one of the most promising solutions. Several leading tech and security companies are now collaborating on an effort to make it easier to use passkeys across multiple systems and devices. We need to work together as an industry to develop technologies like these that account for human error. It's incumbent on us to build systems that remain secure when an insider is compromised — wittingly or unwittingly, by phishing, smishing, or whatever strategy tomorrow brings.
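For the curious, here is a rough sketch of what creating a passkey looks like from the browser's side, using the standard WebAuthn call (navigator.credentials.create). The relying-party name, user details, and locally generated challenge are placeholders; in a real flow the server issues the challenge and verifies the returned credential.

```typescript
// Rough sketch of passkey registration via the standard WebAuthn API.
// Relying-party name, user details, and challenge handling are placeholders.
async function registerPasskey(): Promise<Credential | null> {
  const options: CredentialCreationOptions = {
    publicKey: {
      challenge: crypto.getRandomValues(new Uint8Array(32)), // server-issued in practice
      rp: { name: "Example Corp" },                          // placeholder relying party
      user: {
        id: crypto.getRandomValues(new Uint8Array(16)),      // placeholder user handle
        name: "user@example.com",
        displayName: "Example User",
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }],   // ES256
      authenticatorSelection: { residentKey: "required", userVerification: "preferred" },
    },
  };
  // The private key never leaves the device, and the credential is bound to
  // this site's origin -- a lookalike phishing domain can't request or reuse it.
  return navigator.credentials.create(options);
}
```

That origin binding is what makes passkeys resistant to the lookalike-URL trick described at the top of this piece: there is no password for a scammer to harvest, and the credential simply won't work anywhere but the genuine site.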