Criminals Use Deepfake Videos to Interview for Remote Work

The latest evolution in social engineering could put fraudsters in a position to become insider threats.

Security experts are on the alert for the next evolution of social engineering in business settings: deepfake employment interviews. The latest trend offers a glimpse into the future arsenal of criminals who use convincing, faked personae against business users to steal data and commit fraud.

The concern follows a new advisory this week from the FBI’s Internet Crime Complaint Center (IC3), which warned of increased activity from fraudsters trying to game the online interview process for remote-work positions. According to the advisory, criminals are using a combination of deepfake videos and stolen personal data to misrepresent themselves and gain employment in a range of work-from-home positions, including information technology, computer programming, database maintenance, and software-related job functions.

Federal law-enforcement officials said in the advisory that they’ve received a rash of complaints from businesses.

“In these interviews, the actions and lip movement of the person seen interviewed on-camera do not completely coordinate with the audio of the person speaking,” the advisory said. “At times, actions such as coughing, sneezing, or other auditory actions are not aligned with what is presented visually.”
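
The lip/audio mismatch the advisory describes suggests one coarse screening cue that hiring teams could automate. The following is a minimal, illustrative Python sketch, not anything prescribed in the advisory: it correlates motion in a fixed lower-face region of the video with the audio loudness envelope, on the assumption that genuine speech couples the two. The file names, face region, and 0.3 threshold are hypothetical; a production screen would use a face-landmark tracker and a trained audio-visual sync model instead.

```python
# Minimal sketch: flag possible lip/audio desynchronization by correlating
# a crude mouth-motion signal with the audio loudness envelope.
# Assumes the recording has been split into interview.mp4 and a WAV
# track; the fixed face region and the 0.3 threshold are hypothetical.
import cv2
import numpy as np
from scipy.io import wavfile

def mouth_motion_signal(video_path, roi=(0.35, 0.65, 0.6, 0.9)):
    """Per-frame motion energy inside a fixed lower-face region.

    roi is (left, right, top, bottom) as fractions of the frame size;
    a real system would track the mouth with a landmark detector.
    """
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    prev, signal = None, []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        h, w = frame.shape[:2]
        left, right, top, bottom = roi
        crop = cv2.cvtColor(
            frame[int(top * h):int(bottom * h), int(left * w):int(right * w)],
            cv2.COLOR_BGR2GRAY).astype(np.float32)
        if prev is not None:
            signal.append(np.abs(crop - prev).mean())  # frame-to-frame change
        prev = crop
    cap.release()
    return np.array(signal), fps

def audio_envelope(wav_path, fps, n_frames):
    """Audio loudness (RMS), resampled to one value per video frame."""
    rate, samples = wavfile.read(wav_path)
    samples = samples.astype(np.float32)
    if samples.ndim > 1:
        samples = samples.mean(axis=1)  # downmix to mono
    hop = int(rate / fps)
    env = []
    for i in range(n_frames):
        chunk = samples[i * hop:(i + 1) * hop]
        env.append(np.sqrt(np.mean(chunk ** 2)) if len(chunk) else 0.0)
    return np.array(env)

def sync_score(video_path, wav_path):
    motion, fps = mouth_motion_signal(video_path)
    loudness = audio_envelope(wav_path, fps, len(motion))
    # Genuine speech tends to show positive correlation between lip
    # motion and loudness; a weak score is a cue for closer review.
    return float(np.corrcoef(motion, loudness)[0, 1])

if __name__ == "__main__":
    score = sync_score("interview.mp4", "interview.wav")
    print(f"motion/loudness correlation: {score:.2f}")
    if score < 0.3:  # hypothetical review threshold
        print("weak lip/audio coupling; flag for manual review")
```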

The complaints also noted that criminals were using stolen personally identifiable information (PII) in conjunction with these fake videos to better impersonate applicants, with later background checks digging up discrepancies between the individual who interviewed and the identity presented in the application.

Potential Motives of Deepfake Attacks

While the advisory didn’t specify the motives for these attacks, it did note that the positions applied for by these fraudsters were ones with some level of corporate access to sensitive data or systems.

Thus, security experts believe one of the most obvious goals in deepfaking one’s way through a remote interview is to get a criminal inside an organization, where the position can be abused for anything from corporate espionage to common theft.

“Notably, some reported positions include access to customer PII, financial data, corporate IT databases and/or proprietary information,” the advisory said.

“A fraudster that hooks a remote job takes several giant steps toward stealing the organization’s data crown jewels or locking them up for ransomware,” says Gil Dabah, co-founder and CEO of Piiano. “Now they are an insider threat and much harder to detect.”

Short-term impersonation might also be a way for applicants with a “tainted personal profile” to get past security checks, says DJ Sampath, co-founder and CEO of Armorblox.

“These deepfake profiles are set up to bypass the checks and balances to get through the company’s recruitment policy,” he says.

There’s also the potential that, beyond gaining access to steal information, foreign actors are deepfaking their way into US firms to generate revenue that funds other hacking enterprises.

“This FBI security warning is one of many that have been reported by federal agencies in the past several months. Recently, the US Treasury, State Department, and FBI released an official warning indicating that companies must be cautious of North Korean IT workers pretending to be freelance contractors to infiltrate companies and collect revenue for their country,” explains Stuart Wells, CTO of Jumio. “Organizations that unknowingly pay North Korean hackers potentially face legal consequences and violate government sanctions.”

What This Means for CISOs

A lot of the deepfake warnings of the past few years have been primarily around political or social issues. However, this latest evolution in the use of synthetic media by criminals points to the increasing relevance of deepfake detection in business settings.

“I think this is a valid concern,” says Dr. Amit Roy-Chowdhury, professor of electrical and computer engineering at the University of California, Riverside. “Doing a deepfake video for the duration of a meeting is challenging and relatively easy to detect. However, small companies may not have the technology to be able to do this detection and hence may be fooled by the deepfake videos. Deepfakes, especially images, can be very convincing and if paired with personal data can be used to create workplace fraud.”

Sampath warns that one of the most disconcerting parts of this attack is the use of stolen PII to help with the impersonation.

“As the prevalence of the DarkNet with compromised credentials continues to grow, we should expect these malicious threats to continue in scale,” he says. “CISOs have to go the extra mile to upgrade their security posture when it comes to background checks in recruiting. Very often these processes are outsourced, and a tighter procedure is warranted to mitigate these risks.”

Future Deepfake Concerns

Prior to this, the most public examples of criminal deepfake use in corporate settings involved supporting business email compromise (BEC) attacks. For example, in 2019 an attacker used deepfake software to impersonate the voice of a German company’s CEO and convince another executive at the company to urgently send a wire transfer of $243,000 in support of a made-up business emergency. More dramatically, last fall a criminal used deepfake audio and forged email to convince an employee of a United Arab Emirates company to transfer $35 million to an account owned by the attackers, tricking the victim into thinking the transfer was in support of a company acquisition.

According to Matthew Canham, CEO of Beyond Layer 7 and a faculty member at George Mason University, attackers will increasingly use deepfake technology as a creative tool in their arsenals to make their social engineering attempts more effective.

“Synthetic media like deepfakes is going to just take social engineering to another level,” says Canham, who last year at Black Hat presented research on countermeasures to combat deepfake technology.

The good news is that researchers like Canham and Roy-Chowdhury are making headway on detection methods and countermeasures for deepfakes. In May, Roy-Chowdhury’s team developed a framework for detecting manipulated facial expressions in deepfaked videos with unprecedented accuracy.
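
As a rough illustration of what operationalizing such research might look like, the sketch below samples frames from a recorded interview and runs each through a frame-level deepfake classifier exported to ONNX. The model file detector.onnx is a placeholder, not Roy-Chowdhury’s actual framework, and the input size, normalization, sampling rate, and decision threshold are all assumptions for the example.

```python
# Conceptual sketch of putting a frame-level deepfake detector to work on
# recorded interviews. "detector.onnx" is a placeholder for any exported
# classifier (not the UC Riverside framework itself); the input size,
# normalization, sampling rate, and 0.5 cutoff are all assumptions.
import cv2
import numpy as np
import onnxruntime as ort

def frame_scores(video_path, model_path="detector.onnx",
                 every_n=15, size=224):
    """Run every Nth frame through the model; return per-frame scores."""
    session = ort.InferenceSession(model_path)
    input_name = session.get_inputs()[0].name
    cap = cv2.VideoCapture(video_path)
    scores, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:  # sample ~2 frames/sec at 30 fps
            img = cv2.resize(frame, (size, size)).astype(np.float32) / 255.0
            img = np.transpose(img, (2, 0, 1))[None]  # NCHW batch of one
            # Assumes the model outputs a single manipulation probability.
            out = session.run(None, {input_name: img})[0]
            scores.append(float(np.ravel(out)[0]))
        idx += 1
    cap.release()
    return scores

if __name__ == "__main__":
    s = frame_scores("interview.mp4")
    print(f"mean manipulation score: {np.mean(s):.2f}")
    if np.mean(s) > 0.5:  # hypothetical cutoff
        print("video flagged for human review")
```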

He believes that new methods of detection like this can be put into use relatively quickly by the cybersecurity community.

“I think they can be operationalized in the short term -- one or two years -- with collaboration with professional software development that can take the research to the software product phase,” he says.

About the Author

Ericka Chickowski, Contributing Writer

Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.
