News, news analysis, and commentary on the latest trends in cybersecurity technology.

4 Ways to Fight AI-Based Fraud

Generative AI is being used to make cyberscams more believable. Here's how organizations can counter that using newly emerging tools and reliable methods.

Laura Wilber, Senior Industry Analyst, Enea

October 2, 2024

5 Min Read
A video editor concept with one man's face covered in wireframe to prepare it for pasting onto another man's face in another window
Source: Tero Vesalainen via Alamy Stock Photo

COMMENTARY

As cybercriminals finesse the use of generative AI (GenAI), deepfakes, and many other AI-infused techniques, their fraudulent content is becoming disconcertingly realistic, and that poses an immediate security challenge for individuals and businesses alike. Voice and video cloning isn't something that only happens to prominent politicians or celebrities; it's defrauding individuals and businesses of significant losses that run into millions of dollars.

AI-based cyberattacks are rising, and 85% of security professionals, according to a study by Deep Instinct, attribute this rise to generative AI.

The AI Fraud Problem

Earlier this year, Hong Kong police revealed that a financial worker was tricked into transferring $25 million to criminals through a multiperson deepfake video call. While this kind of sophisticated deepfake scam is still quite rare, advances in technology mean that it's becoming easier to pull off, and the huge gains make it a potentially lucrative endeavor. Another tactic is to target specific workers by making an urgent request over the phone while masquerading as their boss. Gartner now predicts that 30% of enterprises will consider identity verification and authentication solutions "unreliable" by 2026, primarily due to AI-generated deepfakes.

A common type of attack is the fraudulent use of biometric data, an area of particular concern given the widespread use of biometrics to grant access to devices, apps, and services. In one example, a convicted fraudster in the state of Louisiana managed to use a mobile driver's license and stolen credentials to open multiple bank accounts, deposit fraudulent checks, and buy a pick-up truck. In another, IDs created without facial recognition biometrics on Aadhar, India's flagship biometric ID system, allowed criminals to open fake bank accounts.

Another kind of biometric fraud is also rapidly gaining ground. Rather than mimicking the identities of real people, as in the previous examples, cybercriminals are using biometric data to inject fake evidence into a security system. In these injection-based attacks, the attackers game the system to grant access to fake profiles. Injection-based attacks grew a staggering 200% in 2023, according to Gartner. One common type of prompt injection involves tricking customer service chatbots into revealing sensitive information or allowing attackers to take over the chatbot entirely. In these cases, there is no need for convincing deepfake footage.

There are several practical steps CISOs can take to minimize AI-based fraud.

1. Root Out Caller ID Spoofing

Deepfakes, in keeping with many AI-based threats, are effective because they work in combination with other tried-and-tested scamming techniques, such as social engineering and fraudulent calls. Almost all AI-based scams, for example, involve caller ID spoofing, which is when a scammer's number is disguised as a familiar caller. That increases believability, which plays a key part in the success of these scams. Stopping caller ID spoofing effectively pulls the rug out from under the scammers.

One of the most effective methods in use is to change the ways that operators identify and handle spoofed numbers. And regulators are catching up: In Finland, the regulator Traficom has led the way with clear technical guidance to prevent caller ID spoofing, a move that is being closely watched by the EU and other regulators globally.

2. Use AI Analytics to Fight AI Fraud

Increasingly, security pros are joining cybercriminals at their own game — deploying the AI tactics scammers use, only to defend against attacks. AI/ML models excel at detecting patterns or anomalies across vast data sets. This makes them ideal for spotting the subtle signs that a cyberattack is taking place. Phishing attempts, malware infections, or unusual network traffic could all indicate a breach.

Predictive analytics is another key AI capability that the AI community can exploit in the fight against cybercrime. Predictive AI models can predict potential vulnerabilities — or even future attack vectors — before they are exploited, enabling pre-emptive security measures such as using game theory or honeypots to divert attention from the valuable targets. Enterprises need to be able to confidently detect subtle behavior changes taking place across every facet of their network in real time, from users to devices to infrastructure and applications.

3. Zone in on Data Quality

Data quality plays a critical role in pattern recognition, anomaly detection, and other machine learning-based methods used to fight modern cybercrime. In AI terms, data quality is measured by accuracy, relevancy, timeliness, and comprehensiveness. While many enterprises have relied on (insecure) log files, many are now embracing telemetry data, such as network traffic intelligence from deep packet inspection (DPI) technology, because it provides the "ground truth" upon which to build effective AI defenses. In a zero-trust world, telemetry data, like the kind supplied by DPI, provides the right kind of "never trust, always verify" foundation to fight the rising tide of deepfakes.

4. Know Your Normal

The volume and patterns of data across a given network are a unique signifier particular to that network, much like a fingerprint. For this reason, it is critical that enterprises develop an in-depth understanding of what their network's "normal" looks like so that they can identify and react to anomalies. Knowing their networks better than anyone else gives enterprises a formidable insider advantage. However, to exploit this defensive advantage, they must address the quality of the data feeding their AI models.

In summary, cybercriminals have been quick to exploit AI, and in particular GenAI, for increasingly realistic frauds that can be implemented at a scale previously not possible. As deepfakes and AI-based cyber threats escalate, businesses must leverage advanced data analytics to strengthen their defenses. By adopting a zero-trust model, enhancing data quality, and utilizing AI-driven predictive analytics, organizations can proactively counter these sophisticated attacks and protect their assets — and reputations — in an increasingly perilous digital landscape.

About the Author

Laura Wilber

Senior Industry Analyst, Enea

Laura Wilber is a Senior Industry Analyst at Enea. She supports cross-functional and cross-portfolio teams with technology and market analysis, product marketing, product strategy, and corporate development. She is also an ESG Advisor & Committee Member. Her expertise includes cybersecurity and networking in enterprise, telecom, and industrial markets, and she loves helping customers meet today's challenges while musing about what the next ten years will bring.

Keep up with the latest cybersecurity threats, newly discovered vulnerabilities, data breach information, and emerging trends. Delivered daily or weekly right to your email inbox.

You May Also Like


More Insights