Attackers Dangle AI-Based Facebook Ad Lures to Hijack Business Accounts
The offending ads and pages leveraged interest in AI to spread a malicious credential-stealing browser extension.
August 23, 2023
A threat actor has been abusing paid Facebook ads, luring victims with the promise of AI technology to spread a malicious Chrome browser extension that steals users' credentials, with the ultimate aim of taking over business accounts.
Meta, Facebook's parent company, removed the fraudulent pages and ads after Trend Micro reported the activity, which abuses the social media platform's paid promotion features, Trend Micro senior threat researchers Jindrich Karasek and Jaromir Horejsi revealed in a blog post today.
The ads feature fake profiles of marketing companies or departments that promise to use AI to boost productivity, increase reach and revenue, or help with teaching. Some lures even dangle access to the conversational AI chatbot Google Bard — currently in limited release — to get victims to bite.
"Telltale signs of these fake profiles include purchased or bot followers, fake reviews by other hijacked or inauthentic profiles, and a limited online history," the researchers wrote.
The threat actor's main goal in the campaign appears to be to infect business social media managers, administrators, and marketing specialists, who often also administer their company's social networking accounts, they said.
In fact, in one attack, a Trend Micro researcher who aided with a victim's incident response observed the threat actor adding suspicious users to the victim's Meta Business Manager. While the actor so far has not tried to contact the victim, the victim's prepaid promotion budget was used to promote the threat actor's own content. This demonstrates the actor's intent to leverage stolen accounts for malicious purposes.
How It Works
If a Facebook user takes the bait and clicks on one of the campaign's ads, they are redirected to a simple website that lists the advantages of using large language models (LLMs) and contains a link to download the purported "AI package."
The attacker evades antivirus detection by distributing the package as an encrypted archive — typically hosted on cloud storage sites like Google Drive or Dropbox — with simple passwords like "999" or "888."
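Password protection of this kind hides the payload from automated scanners but is trivial to undo once the password is known. The snippet below is a minimal, hypothetical sketch of how an analyst or sandbox could try the reported passwords and extract such an archive for scanning. It assumes a ZIP file using legacy ZipCrypto encryption (AES-protected ZIPs or RAR files would need tools such as pyzipper or 7-Zip); the file name, output directory, and password list are illustrative only.

```python
import zipfile
from pathlib import Path

# Trivial passwords reported in the campaign; extend as needed.
COMMON_PASSWORDS = [b"999", b"888"]


def extract_for_scanning(archive_path: str, out_dir: str = "quarantine") -> bool:
    """Try known trivial passwords so the archive contents can be handed to a scanner."""
    Path(out_dir).mkdir(exist_ok=True)
    try:
        with zipfile.ZipFile(archive_path) as archive:
            for pwd in COMMON_PASSWORDS:
                try:
                    archive.extractall(path=out_dir, pwd=pwd)
                    print(f"Extracted {archive_path} with password {pwd.decode()}")
                    return True
                except RuntimeError:
                    continue  # wrong password, try the next one
    except (zipfile.BadZipFile, NotImplementedError):
        # Not a plain ZipCrypto ZIP; fall back to pyzipper or 7-Zip.
        print(f"{archive_path}: unsupported archive or encryption scheme")
    return False


if __name__ == "__main__":
    # Hypothetical file name for illustration.
    extract_for_scanning("ai_package.zip")
```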
Once opened and decrypted with the correct password, the package usually contains a single MSI installer file, which drops a few files belonging to a Chrome extension. That extension aims to steal Facebook cookies, the user's access token, and the browser's user agent, as well as the user's managed pages, business account information, and advertisement account information. It also attempts to access the user's IP address.
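To harvest session cookies, an extension like this has to declare cookie access and broad host permissions in its manifest, which gives defenders something concrete to hunt for. Below is a minimal, hypothetical Python sketch (not taken from the Trend Micro report) that walks a local Chrome profile's Extensions folder and flags manifests combining the cookies permission with wide host access; the Windows path and permission lists are assumptions to adjust per environment.

```python
import json
import os
from pathlib import Path

# Default Chrome extensions folder on Windows; adjust for macOS/Linux or other profiles.
EXTENSIONS_DIR = Path(os.path.expandvars(
    r"%LOCALAPPDATA%\Google\Chrome\User Data\Default\Extensions"))

RISKY_PERMISSIONS = {"cookies"}
BROAD_HOSTS = {"<all_urls>", "*://*/*", "*://*.facebook.com/*"}


def audit_extensions(extensions_dir: Path) -> None:
    """Flag extensions whose manifests request cookie access plus broad host permissions."""
    # On-disk layout is Extensions/<extension_id>/<version>/manifest.json
    for manifest_path in extensions_dir.glob("*/*/manifest.json"):
        try:
            manifest = json.loads(manifest_path.read_text(encoding="utf-8"))
        except (json.JSONDecodeError, OSError):
            continue
        perms = set(manifest.get("permissions", []))
        # Manifest V2 mixes hosts into "permissions"; V3 splits them into "host_permissions".
        hosts = perms | set(manifest.get("host_permissions", []))
        if perms & RISKY_PERMISSIONS and hosts & BROAD_HOSTS:
            ext_id = manifest_path.parent.parent.name
            print(f"[!] Review extension {ext_id}: "
                  f"{manifest.get('name', 'unknown')} requests {sorted(perms)}")


if __name__ == "__main__":
    audit_extensions(EXTENSIONS_DIR)
```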
AI As a Popular Lure
The campaign reflects a growing trend among threat actors of leveraging people's interest in AI technology, and the benefits it can provide professionals, as a social engineering lure.
"Early [AI] adopters will have a strong competitive advantage, including creative industries like marketing, copywriting, and data analysis and processing," the Trend Micro researchers wrote. However, this also opens opportunities for cybercriminals who want to capitalize on the growing interest in AI, they said.
In a similar campaign discovered in April, attackers hid the RedLine Stealer behind what appeared to be legitimate sponsored ads on hijacked Facebook business and community pages that promoted free downloads of AI chat apps.
A separate report released today by Deep Instinct found that 70% of security professionals say generative AI is positively impacting employee productivity and collaboration, with 63% stating the technology has also improved employee morale.
Avoiding Compromise
In addition to removing the offending pages and ads, Meta has told Trend Micro that it will continue to strengthen its detection systems to find similar fraudulent ads and pages, using insights from both internal and external threat research.
Deploying an antivirus solution with Web reputation services is a good countermeasure to threats like this, according to Trend Micro.
"Users should always scan the files they download from the Internet and stay vigilant against threat actors who might abuse the hype surrounding new developments in artificial intelligence," the researchers wrote.
People should also watch for the following red flags that can signal this type of campaign: a "hot shot" look and feel to the landing site that hosts the link to the malicious file; a promise of access to Google Bard even though its availability is currently limited; an offer that seems too good to be true, since official access to AI-based systems is expensive and/or limited; inconsistencies in the wording and appearance of promotional posts; and a publicly available yet password-protected file offered on the landing site.