Malvertisers Fool Google With AI-Generated Decoy Content
Seemingly innocent "white pages," including an elaborate Star Wars-themed site, are bypassing Google's malvertising filters, showing up high in search results to lure users to second-stage phishing sites.
December 19, 2024
Threat actors appear to have found yet another innovative use case for artificial intelligence in malicious campaigns: to create decoy ads for fooling malvertising-detection engines on the Google Ads platform.
The scam involves attackers buying Google Search ads and using AI to create ad pages with unique content and nothing malicious about them. These decoys then serve to lure visitors to phishing sites that steal credentials and other sensitive data.
With malvertising, threat actors create malicious ads that are rigged to surface high in search engine results when people search for a particular product or service. The ads often spoof popular, trusted brands, using webpages and content that replicate the originals but instead redirect users who interact with them to phishing pages or download the attacker's malware of choice onto their systems.
While many malvertising campaigns target consumers, several recent ones have focused on corporate users as well. One example is a campaign that sought to distribute the Lobshot backdoor on corporate systems; another phished employees at Lowe's.
A Steady, Post-Macro Increase in Malvertising
"We are seeing more and more cases of fake content produced for deception purposes," researchers at Malwarebytes said in a report on the campaign this week. These so called "white pages," as they are being referred to in the criminal underground, serve as legitimate-looking decoys, or front-end webpages that hide malicious content and activities behind them, according to Malwarebytes.
"The content is unique and sometimes funny if you are a real human, but unfortunately a computer analyzing the code would likely give it a green check," Malwarebytes security researcher Jerome Segura wrote. White pages, incidentally, are in contrast to "black pages," which are the actual malicious landing pages containing harmful content or malware.
The use of AI to plant decoy content on Google Ads adds a new wrinkle to malvertising scams, which have seen a remarkable surge in volume recently. Malwarebytes has pinned the increase on Microsoft's 2022 decision to block macros in Word, Excel, and PowerPoint files downloaded from the Internet — a top malware vector for threat actors. That decision forced attackers to look for other malware distribution vectors, one of which happens to be malvertising, according to Malwarebytes.
Though Google and operators of other major online ad distribution networks have been battling the scourge — and have gotten better at quickly identifying and removing malvertising content — bad actors have consistently managed to remain a step ahead. A Malwarebytes study found Amazon to be the most spoofed brand in malvertising campaigns, followed by Rufus, Weebly, Notepad++, and TradingView.
Spoofing Brands With AI-Generated Content
In its report, Malwarebytes provided two examples of AI-generated decoy ads it spotted recently on Google Ads. One of the decoy ads targeted users searching the Internet for the Securitas OneID mobile app, and the other targeted users of the Parsec remote desktop app, which is popular among gamers.
The Securitas OneID scam involved an entirely AI-generated website, complete with AI-generated images of supposed executives of the company.
"When Google tries to validate the ad, they will see this cloaked page with pretty unique content and there is absolutely nothing malicious within it," Segura wrote.
With the Parsec ad, the threat actors used some creative license of their own to generate a heavily Star Wars-influenced website, replete with references to the parsec astronomical measurement unit. The artwork for the website even included several AI-generated Star Wars-themed posters, which, while impressive, would likely have suggested to users that the site had nothing to do with the legitimate Parsec app.
"Ironically, it is quite straightforward for a real human to identify much of the cloaked content as just fake fluff. Sometimes, things just don’t add up and are simply comical," Segura wrote. Even so, as a cloaking mechanism for a malvertising campaign," he added, "the website would have passed Google's validation checks.