Microsoft Cracks Down on Malicious Copilot AI Use
According to the tech giant, it has observed a threat group seeking out vulnerable customer accounts using generative AI, then creating tools to abuse these services.
January 13, 2025
NEWS BRIEF
Microsoft's Digital Crimes Unit is pursuing legal action to disrupt cybercriminals who create malicious tools that evade the security guardrails and guidelines of generative AI (GenAI) services to create harmful content.
According to an unsealed complaint in the Eastern District of Virginia, though the company goes to great lengths to create and enhance secure AI products and services, cybercriminals continue to innovate their tactics and bypass security measures.
"With this action, we are sending a clear message: the weaponization of our AI technology by online actors will not be tolerated," said Microsoft in a blog post about the lawsuit.
In the court filings that were unsealed on Jan. 13, Microsoft noted that it had "observed a foreign-based threat-actor group develop sophisticated software that exploited exposed customer credentials scraped from public websites."
The group accessed accounts for generative AI services in order to alter the capabilities of those services, then resold this unlawful access to other malicious actors, along with instructions on how to use the tools to create harmful content.
Since discovering the group's actions, Microsoft has revoked access and enhanced safeguards to mitigate this kind of activity in the future.
Alongside its legal action, the company continues to pursue proactive measures, and it points to a report, "Protecting the Public From Abusive AI-Generated Content," that offers recommendations for organizations and governments on protecting the public from AI-created threats.