Phishing Today, Deepfakes Tomorrow: Training Employees to Spot This Emerging Threat

Cybercriminals are evolving their tactics, and the security community anticipates that voice and video fraud will play a role in one of the next big data breaches -- so start protecting your business now.

Ian Cruxton, CSO, Callsign

January 16, 2020

5 Min Read

Deepfake fraud is a new, potentially devastating issue for businesses. Last year, a top executive at an unidentified energy company was revealed to have been conned into paying £200,000 by scammers who used artificial intelligence to replicate his boss's voice. He answered a telephone call he believed came from his German parent company, was asked to transfer the funds, and dutifully sent the money to what he presumed was the parent company's account. In reality, the funds were stolen by sophisticated criminals at the forefront of what I believe is a frightening new age of deepfake fraud. Although this was the first reported case of its kind in the UK, it certainly won't be the last.

Recently, a journalist paid just over $550 to develop his own deepfake, placing the face of Lieutenant Commander Data from Star Trek: The Next Generation over Mark Zuckerberg's. It took only two weeks to develop the video.

When the Enterprise Evolves, the Enemy Adapts
We're no strangers to phishing emails in our work inboxes. In fact, many of us have received mandatory training and warnings about how to detect them: the telltale signs of spelling errors, urgency, unfamiliar requests from "colleagues," or slightly unusual sender addresses. But fraudsters know that established phishing techniques won't remain effective for much longer. They also understand the large potential gains of gathering intelligence from corporations using deepfake technology (a mixture of video, audio, and email messaging) to extract confidential employee information under the guise of the CEO or CFO.

Deepfake technology is still in its early days, but even in 2013 it was powerful enough to make an impact. While serving at the National Crime Agency (NCA) in the UK, I saw how a Dutch NGO pioneered the technology to create a deepfake of a 10-year-old girl, identifying thousands of child sex offenders around the globe. In that case, AI video deepfake technology was deployed by a humanitarian-focused organization with the purpose of fighting crime.

But as the technology evolves, we're seeing that much of the research into deepfakes concerns their unlawful and criminal applications, many of which carry seriously detrimental financial and reputational consequences. As more businesses educate their employees to detect and thwart traditional phishing and spearphishing attacks, it's not difficult to see how fraudsters may instead turn their efforts to deepfake technology to execute their schemes.

How Deepfakes Will Thrive in the Modern Workplace
With the sheer number of jobs requiring employees to be online, it's critical that workforces are educated and provided with the tools to detect, rebuff, and protect against deepfake attacks and fraudulent activity in the workplace. It's not difficult to see why corporate deepfake detection in particular is so crucial: Employees are by nature often eager to satisfy the requests of their seniors, and to do so with as little friction as possible.

The stakes are raised even further when you consider how large teams, remote workers, and complex hierarchies make it even more difficult for employees to distinguish between a colleague's "status quo" and an unusual request or attitude. Add to that equation the fast-tempo delivery demands of agile working methodologies, and it is easy to see how a convincingly realistic video request from a known boss to transfer funds could attract less scrutiny from an employee than a video from someone they know less well.

A New Era of Employee Security Training
Companies must empower employees to question and challenge requests that are deemed to be unusual, either because of the atypical action demanded or the out-of-character manner or style of the person making the request. This can be particularly challenging for organizations with very hierarchical and autocratic leadership that does not encourage or respect what it perceives as challenges to its authority. Fortunately, some business owners and academics are already looking into ways to solve the issue of detecting deepfakes.

Facebook, for instance, announced the launch of the Deepfake Detection Challenge in partnership with Microsoft and leading academics in September last year, and lawmakers in the US House of Representatives recently passed legislation to combat deepfakes. But there is much to be done quickly if we are to stay ahead of the fraudsters.

If organizations can no longer be sure of the identity of an email sender or the individual at the other end of the phone, they must develop programs and protocols for training employees to override their natural inclination to assume that any voice caller or video subject is real, and instead to consider that a fraudster may be leveraging AI and deepfake technology to spoof the identities of their colleagues.

Cybercriminals are constantly evolving their tactics and broadening their channels, and the security community anticipates that voice and video fraud will play a role in one of the next big data breaches. So start protecting your business sooner rather than later.


About the Author

Ian Cruxton

CSO, Callsign

After nearly 35 years in law enforcement, Ian Cruxton joined the private sector as CSO of Callsign, an identity fraud, authorization, and authentication company. While at the National Crime Agency (NCA), he led the response to 7 of the 12 organized crime threats and regularly briefed the Home Secretary and Immigration Minister on law enforcement's response to organized crime. He also led the NCA's International Operations, including the UK's engagement with Interpol, Europol, and the International Liaison Officer network. Having spent more than three decades in law enforcement and security, Ian has seen the incredible impact technology has had on the crime landscape, and he brings that depth of experience to the role of CSO at Callsign.

