Google AI Platform Bugs Leak Proprietary Enterprise LLMs

The tech giant fixed privilege-escalation and model-exfiltration vulnerabilities in Vertex AI that could have allowed attackers to steal or poison custom-built AI models.

Google has fixed two flaws in Vertex AI, its platform for custom development and deployment of large language models (LLMs), that could have allowed attackers to exfiltrate proprietary enterprise models from the system. The flaws highlight once again the danger that malicious manipulation of artificial intelligence (AI) technology presents for business users.

Researchers at Palo Alto Networks Unit 42 discovered the flaws in Vertex AI, a machine learning (ML) platform that allows enterprise users to train and deploy ML models and AI applications, including custom LLMs for use in an organization's AI-powered applications.

Specifically, the flaws were a privilege-escalation bug in the platform's "custom jobs" feature and a model-exfiltration bug that could be exploited by deploying a malicious model, Unit 42 researchers Ofir Balassiano and Ofir Shaty revealed in a blog post published on Nov. 12.

The first bug allowed attackers to exploit custom-job permissions to gain unauthorized access to all data services in the project. The second could have allowed an attacker to deploy a poisoned model in Vertex AI, leading to "the exfiltration of all other fine-tuned models, posing a serious proprietary and sensitive data exfiltration attack risk," Palo Alto Networks researchers wrote in the post.

Unit 42 shared its findings with Google, and the company has "since implemented fixes to eliminate these specific issues for Vertex AI on the Google Cloud Platform (GCP)," according to the post.

While the immediate threat has been mitigated, the vulnerabilities once again demonstrate the inherent danger of LLMs being exposed or manipulated with malicious intent, and how quickly such a compromise can spread, the researchers said.

"This research highlights how a single malicious model deployment could compromise an entire AI environment," the researchers wrote. "An attacker could use even one unverified model deployed on a production system to exfiltrate sensitive data, leading to severe model exfiltration attacks."

Poisoning Custom LLM Development

The key to exploiting the discovered flaws lies in a Vertex AI feature called Vertex AI Pipelines, which lets users tune their models using custom jobs, also referred to as "custom training jobs." "These custom jobs are essentially code that runs within the pipeline and can modify models in various ways," the researchers explained.
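For context, a custom job is ordinary container code submitted to the platform, typically through the public google-cloud-aiplatform Python SDK. The sketch below shows the general shape of such a job; the project ID, region, and container image are placeholders rather than details from the research.

```python
# A minimal sketch of a Vertex AI custom job using the public google-cloud-aiplatform
# SDK. All names here (project, region, image) are illustrative placeholders.
from google.cloud import aiplatform

aiplatform.init(project="example-project", location="us-central1")

job = aiplatform.CustomJob(
    display_name="fine-tune-adapter",
    worker_pool_specs=[{
        "machine_spec": {"machine_type": "n1-standard-4"},
        "replica_count": 1,
        "container_spec": {
            # Whatever code this image runs executes inside the project's pipeline.
            "image_uri": "us-docker.pkg.dev/example-project/example-repo/tuner:latest",
            "command": ["python", "tune.py"],
        },
    }],
)
job.run()
```

Because the job author controls the container image and its command line, a custom job is effectively arbitrary code execution inside the pipeline environment.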

However, while this flexibility is valuable, it also opens the door to potential exploitation, they said. In the case of these vulnerabilities, Unit 42 researchers were able to abuse the permissions of what's called a "service agent" identity in a "tenant project," which is connected through the project pipeline to the "source project" in which the fine-tuned AI model is created within the platform. A service agent holds excessive permissions across many services within a Vertex AI project.
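To see why that matters, consider that any code running inside a custom job can query the standard GCP metadata server to learn which identity it runs as and to obtain an access token for it. The snippet below is a minimal sketch of that documented mechanism, not the researchers' actual exploit code.

```python
# Code running inside a Vertex AI custom job (or any GCP workload) can ask the
# instance metadata server for the attached service account and a token for it.
import requests

METADATA_URL = "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/"
HEADERS = {"Metadata-Flavor": "Google"}

# The email of the service account (here, the tenant project's service agent).
email = requests.get(METADATA_URL + "email", headers=HEADERS).text

# A short-lived OAuth2 access token; whatever permissions that identity holds,
# code inside the job can now exercise via the GCP APIs.
token = requests.get(METADATA_URL + "token", headers=HEADERS).json()["access_token"]

print(f"This custom job runs as: {email}")
```

Any IAM permission granted to that service agent is therefore available to whoever controls the job's code.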

From this position, the researchers could either inject commands or use a custom image to plant a backdoor that gave them access to the custom model development environment. They then deployed a poisoned model for testing within Vertex AI, which allowed them to gain further access and steal other AI and ML models from the test project.
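Unit 42 did not publish its poisoned artifact, but the underlying risk is straightforward to illustrate: common serialized model formats built on Python's pickle run arbitrary code the moment they are loaded. The harmless example below shows that behavior in isolation.

```python
# Illustration only: a pickle-serialized "model" that executes a command when it is
# deserialized, e.g., by a prediction container loading the artifact. This is a
# generic demonstration of unsafe deserialization, not the researchers' payload.
import os
import pickle

class PoisonedModel:
    def __reduce__(self):
        # Whatever this returns is executed during unpickling.
        return (os.system, ("echo code ran at model-load time",))

with open("model.pkl", "wb") as f:
    pickle.dump(PoisonedModel(), f)

# Loading the artifact, as a serving container would, triggers the embedded command.
with open("model.pkl", "rb") as f:
    pickle.load(f)
```

In the scenario the researchers describe, code like this would run with the permissions of the environment that loads the model, which is what opened the path to the other models in the project.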

"In summary, by deploying a malicious model, we were able to access resources in the tenant projects that allowed us to view and export all models deployed across the project," the researchers wrote. "This includes both ML and LLM models, along with their fine-tuned adapters."

This method presents "a clear risk for a model-to-model infection scenario," they explained. "For example, your team could unknowingly deploy a malicious model uploaded to a public repository," the researchers wrote. "Once active, it could exfiltrate all ML and fine-tuned LLM models in the project, putting your most sensitive assets at risk."

Mitigating AI Cybersecurity Risk

Organizations are only just gaining access to tools that let them build their own in-house, custom LLM-based AI systems, so the potential security risks, and the ways to mitigate them, remain largely uncharted territory. However, it's already clear that unauthorized access to the LLMs an organization has created is a surefire way to expose that organization to compromise.

At this stage, a key to securing any custom-built model is to limit the permissions of those in the enterprise who have access to it, the Unit 42 researchers noted. "The permissions required to deploy a model might seem harmless, but in reality, that single permission could grant access to all other models in a vulnerable project," they wrote in the post.

To protect against such risks, organizations also should implement strict controls on model deployments. A fundamental way to do this is to ensure an organization's development or test environments are separate from its live production environment.

"This separation reduces the risk of an attacker accessing potentially insecure models before they are fully vetted," Balassiano and Shaty wrote. "Whether it comes from an internal team or a third-party repository, validating every model before deployment is vital."

About the Author

Elizabeth Montalbano, Contributing Writer

Elizabeth Montalbano is a freelance writer, journalist, and therapeutic writing mentor with more than 25 years of professional experience. Her areas of expertise include technology, business, and culture. Elizabeth previously lived and worked as a full-time journalist in Phoenix, San Francisco, and New York City; she currently resides in a village on the southwest coast of Portugal. In her free time, she enjoys surfing, hiking with her dogs, traveling, playing music, yoga, and cooking.
