Elaborate Deepfake Operation Takes a Meeting With US Senator
The threat actors managed to gain access to Sen. Ben Cardin (D-Md.) by posing as a Ukrainian official, before quickly being outed.
September 30, 2024
Earlier this month, Senator Ben Cardin (D-Md.), who serves as the Democratic chair of the Senate Foreign Relations Committee, was targeted in an advanced deepfake operation that partially succeeded in duping the politician.
The operation centered on Cardin's professional association with Dmytro Kuleba, the former Ukrainian Minister of Foreign Affairs. Cardin's office reportedly received an email from someone staffers believed to be Kuleba, whom Cardin already knew from past meetings.
"Kuleba" and Cardin then met via Zoom through what seemed to be a live audio-video connection "that was consistent in appearance and sound to past encounters," according to a notice issued by the Senator's security office.
But it was here that things began to go awry for the threat actors. "Kuleba" began to ask Cardin questions such as, "do you support long-range missiles into Russian territory? I need to know your answer," and other "politically charged questions in relation to the upcoming election," according to the notice.
At this point, Cardin and his staff knew something was wrong.
The malicious actor on the other side of the call pressed on, attempting to bait the senator into commenting on a political candidate and other sensitive topics.
"The Senator and their staff ended the call, and quickly reached out to the Department of State who verified it was not Kuleba," said Nicolette Llewellyn, the director of Senate Security, in the notice.
Deepfake Scams Are on the Rise
On Sept. 25, Cardin commented on the encounter, describing the person on the other side of the screen as a malign actor who made a deceptive attempt to engage him in conversation.
"After immediately becoming clear that the individual I was engaging with was not who they claimed to be, I ended the call and my office took swift action, alerting the relevant authorities," Sen. Cardin said. "This matter is now in the hands of law enforcement, and a comprehensive investigation is underway."
Still, how far these threat actors managed to get is both impressive and concerning. Had they not revealed their scheme by acting out of character for the person they were impersonating, they might have gleaned sensitive or important information.
"On an individual level, [deepfakes] can lead to blackmail and extortion," says Eyal Benishti, CEO of Ironscales. "For businesses, deepfakes pose risks of significant financial loss, reputational damage, and corporate espionage. On a governmental level, they threaten national security and can undermine the democratic processes" — no doubt referencing a deepfake robocall that was created to impersonate President Joe Biden with the goal of getting Biden supporters to stay home for a lower voter turnout.
It's clear that deepfake schemes are becoming a bigger threat and are more widely used by malicious actors. In a July report that Trend Micro shared with Dark Reading, researchers found that 80% of the consumers involved in a survey had seen deepfake images, and 64% had seen deepfake videos. Roughly half of the consumers surveyed had heard of deepfake audio clips. And concerningly, 35% of respondents said they had experienced a deepfake scam themselves, with even more saying they know someone who has.
These scams come in many varieties, such as the deepfake videos of UK Prime Minister Keir Starmer and Prince William that circulated on Meta platforms earlier this year to promote a cryptocurrency platform called Immediate Edge. The platform was fraudulent and aimed to dupe potential victims by making it appear to be backed by reputable public figures. According to researchers who studied the disinformation campaign, the deepfake ads reached nearly 900,000 people who spent more than £20,000 on the platform.
"The rise of deepfakes — be it through images, videos, or audio — is hard to deny," Benishti says. "These attacks are becoming increasingly sophisticated and often indistinguishable from reality, thanks to the accessibility of generative AI tools."
And that means that defenses need to shift, Benishti says.
"Currently, there are no foolproof methods to easily detect deepfakes," he says. "Until technology catches up, we must prioritize awareness, education, and training to equip individuals and organizations with the skills and strategies needed to act on their suspicions, implement effective verification processes, and ultimately improve their ability to discern what is real and what is not."
These recommendations apply to anyone, not just high-ranking or high-profile individuals such as Cardin.
"Cybercriminals capitalize on opportunity, regardless of status, which means that anyone could be a target," Benishti adds. "It's crucial for everyone, not just prominent figures, to stay alert and skeptical of any urgent or unexpected requests. Vigilance and verification are key defenses against these evolving threats."