Why LLMs Are Just the Tip of the AI Security Iceberg

With the right processes and tools, organizations can implement advanced AI security frameworks that make hidden risks visible, enabling security teams to track and address them before impact.

Diana Kelley, CISO, Protect AI

August 28, 2024


COMMENTARY

From the headlines, it's clear that security risks associated with generative AI (GenAI) and large language models (LLMs) haven't gone unnoticed. This attention isn't undeserved — AI tools do come with real-world risks that range from "hallucinations" to exposing private and proprietary data. Yet it's vital to recognize that they are part of a much broader attack surface associated with AI and machine learning (ML).

The rapid rise of AI has fundamentally changed companies, industries, and sectors. At the same time, it has introduced new business risks that extend from intrusions and breaches to the loss of proprietary data and trade secrets.

AI isn't new — organizations have been incorporating various forms of the technology into their business models for more than a decade — but the recent mass adoption of AI systems, including GenAI, has changed the stakes. Today's open software supply chains are incredibly important to innovation and business growth, but that openness comes with risk. As business-critical systems and workloads increasingly leverage AI, attackers are taking notice and setting their sights, and attacks, on these technologies.

Unfortunately, due to the opacity of these systems, most businesses and government agencies cannot identify these highly dispersed and often invisible risks. They lack visibility into where threats exist, the tools to enforce security policies on the assets and artifacts entering or already in their infrastructure, and, in many cases, teams skilled up to manage AI and ML resources effectively. This could set the stage for an AI-related SolarWinds- or MOVEit-type supply chain security incident.

To complicate matters, AI models typically incorporate a vast ecosystem of tools, technologies, open source components, and data sources. Malicious actors can inject vulnerabilities and malicious code into tools and models that reside within the AI development supply chain. With so many tools, pieces of code, and other elements floating around, transparency and visibility become increasingly important, yet that visibility remains frustratingly out of reach for most organizations.
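To make the risk concrete: many models are still shared as Python pickles, a serialization format that can execute arbitrary code the moment a file is loaded. The sketch below is illustrative only. It assumes a raw pickle file (framework checkpoints that wrap pickles inside archives would need to be unpacked first) and simply inspects the opcode stream, without loading the model, to flag imports of modules commonly abused for code execution. Production model scanners cover far more formats and techniques.

```python
# Illustrative sketch, not a production scanner: walk the pickle opcode
# stream of a serialized model without loading it, and flag imports of
# modules commonly abused for code execution.
import pickletools
import sys

# Denylist is an example only; real tools maintain curated allow/deny lists.
SUSPICIOUS_MODULES = {"os", "posix", "nt", "subprocess", "sys", "socket",
                      "builtins", "runpy", "importlib"}

def scan_pickle(path: str) -> list:
    findings = []
    with open(path, "rb") as f:
        for opcode, arg, pos in pickletools.genops(f):
            # GLOBAL/INST arguments look like "module name"; STACK_GLOBAL
            # resolves its target at runtime, so flag it for manual review.
            if opcode.name in {"GLOBAL", "INST"} and arg:
                module = str(arg).split()[0].split(".")[0]
                if module in SUSPICIOUS_MODULES:
                    findings.append(f"offset {pos}: imports {arg!r}")
            elif opcode.name == "STACK_GLOBAL":
                findings.append(f"offset {pos}: dynamic import (review needed)")
    return findings

if __name__ == "__main__":
    for finding in scan_pickle(sys.argv[1]):
        print(finding)
```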

Looking Under the Surface (of the Iceberg)

What can organizations do? Adopt a comprehensive AI security framework, like MLSecOps, that delivers visibility, traceability, and accountability across AI/ML ecosystems. This approach supports secure-by-design principles without interfering with regular business operations and performance.

Here are five ways to put an AI security program to work and mitigate risks:

  1. Introduce risk management strategies: It's vital to have clear policies and procedures in place to address security, bias, and fairness across your entire AI development stack. Tooling that supports policy enforcement allows you to efficiently manage risks in regulatory, technical, operational, and reputational domains.

  2. Identify and address vulnerabilities: Advanced security scanning tools can spot AI supply chain vulnerabilities that could cause inadvertent or intentional damage. Integrated security tools can scan your AI bill of materials (AIBOM) and pinpoint potential weaknesses and suggested fixes within tools, models, and code libraries.

  3. Create an AI bill of materials: Just as a traditional software bill of materials catalogs various software components, an AIBOM inventories and tracks all elements used in building AI systems. This includes tools, open source libraries, pre-trained models, and code dependencies. With the right tools, AIBOM generation can be automated, giving you a clear snapshot of your AI ecosystem at any given moment (a simplified sketch follows this list).

  4. Embrace open source tools: Free, open source security tools designed specifically for AI and ML can deliver many benefits, including scanners that detect vulnerabilities in ML models and help guard against prompt injection attacks on LLMs (a toy example also follows this list).

  5. Encourage collaboration and transparency: AI bug bounty programs offer early insights into new vulnerabilities and provide a mechanism to mitigate them. Over time, this collaborative framework strengthens the overall security posture of the AI ecosystem.
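To make the AIBOM idea (item 3) concrete, here is a simplified sketch of automated inventory generation. It assumes model artifacts sit in a local directory passed on the command line, and it emits an illustrative JSON layout rather than a formal standard such as CycloneDX; real tooling would also capture training data sources, pre-trained model provenance, and licenses.

```python
# Simplified AIBOM sketch: hash every model artifact in a directory and
# record the installed Python packages (including ML libraries) in use.
import hashlib
import json
import sys
from datetime import datetime, timezone
from importlib import metadata
from pathlib import Path

def sha256(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_aibom(model_dir: str) -> dict:
    return {
        "generated": datetime.now(timezone.utc).isoformat(),
        "models": [
            {"file": str(p), "sha256": sha256(p), "bytes": p.stat().st_size}
            for p in sorted(Path(model_dir).rglob("*")) if p.is_file()
        ],
        "packages": sorted(
            f"{dist.metadata['Name']}=={dist.version}"
            for dist in metadata.distributions()
        ),
    }

if __name__ == "__main__":
    print(json.dumps(build_aibom(sys.argv[1]), indent=2))
```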
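For item 4, the toy heuristic below illustrates the simplest form of prompt-injection screening: matching incoming prompts against phrases commonly seen in injection attempts. The patterns are examples only, and open source LLM guardrail tools rely on much richer signals (trained classifiers, canary tokens, output checks), but the sketch shows where such a control sits in the request path.

```python
# Toy heuristic, not a real defense: flag prompts that contain phrases
# commonly associated with prompt-injection attempts.
import re

INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard the system prompt",
    r"reveal (the|your) system prompt",
    r"you are now in developer mode",
]

def looks_like_injection(prompt: str) -> bool:
    return any(re.search(p, prompt, re.IGNORECASE) for p in INJECTION_PATTERNS)

if __name__ == "__main__":
    print(looks_like_injection("Please ignore previous instructions and ..."))  # True
```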

LLMs are fundamentally changing business — and the world. They introduce remarkable opportunities to innovate and reinvent business models. But without a security-first posture, they also present significant risks.

Complex software and AI supply chains don't have to be invisible icebergs of risk lurking below the surface. With the right processes and tools, organizations can implement an advanced AI security framework that makes hidden risks visible, enabling security teams to track and address them before impact.

About the Author

Diana Kelley

CISO, Protect AI

Diana Kelley is CISO for Protect AI. She was Cybersecurity Field CTO for Microsoft, Global Executive Security Advisor at IBM Security, GM at Symantec, VP at Burton Group (now Gartner), a Manager at KPMG, CTO and co-founder of SecurityCurve, and Chief vCISO at SaltCybersecurity.

