Nvidia Embraces LLMs & Commonsense Cybersecurity Strategy

Nvidia doesn't just make the chips that accelerate a lot of AI applications — the company regularly creates and uses its own large language models, too.

As the builder of the processors used to train the latest AI models, Nvidia has embraced the generative AI (GenAI) revolution. It runs its own proprietary large language models (LLMs) alongside a range of internal AI applications, including the company's NeMo platform for building and deploying LLMs and tools for tasks such as object simulation and reconstructing DNA from extinct species.

At Black Hat USA next month, in a session titled "Practical LLM Security: Takeaways From a Year in the Trenches," Richard Harang, principal security architect for AI/ML at the chip giant, plans to share lessons the Nvidia team has learned from red-teaming these systems and to describe how attack tactics against LLMs continue to evolve. The good news, he says, is that existing security practices don't have to shift much to meet this new class of threats, even though LLM-integrated systems can pose an outsized enterprise risk because of the privileges they are granted.

"We've learned a lot over the past year or so about how to secure them and how to build security in from first principles, as opposed to trying to tack it on after the fact," Harang says. "We have a lot of valuable practical experience to share as a result of that."

AIs Pose Recognizable Issues, With a Twist

Businesses are increasingly building applications that rely on next-generation AI, often in the form of integrated AI agents capable of taking privileged actions. Meanwhile, security and AI researchers have already pointed out potential weaknesses in these environments, from AI-generated code expanding an application's attack surface to overly helpful chatbots that give away sensitive corporate data. Yet attackers often do not need specialized techniques to exploit these weaknesses, Harang says, because they are largely new iterations of already-known threats.

"A lot of the issues that we're seeing with LLMs are issues we have seen before, in other systems," he says. "What's new is the attack surface and what that attack surface looks like — so once you wrap your head around how LLMs actually work, how inputs get into the model, and how outputs come out of the model ... once you think that through and map it out, securing these systems is not intrinsically more difficult than securing any other system."

GenAI applications still require the same essential triad of security attributes that other apps do: confidentiality, integrity, and availability, he says. So software engineers need to perform standard security-architecture due diligence, such as mapping out security and trust boundaries and tracing how data flows through the system.
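One way to make that concrete is to treat the model itself as an untrusted component when tracing data flows. The short Python sketch below is illustrative only and assumes a hypothetical query_llm helper rather than any particular Nvidia or NeMo API; the point is simply that model output gets validated at the trust boundary before it reaches a downstream system.

```python
import re

def query_llm(prompt: str) -> str:
    # Placeholder for a real model call; returns text the application must not trust.
    return "SELECT name FROM customers; -- model-suggested query"

ALLOWED_TABLES = {"products", "orders"}  # tables the app is allowed to expose

def run_model_suggested_query(user_request: str) -> str:
    raw = query_llm(f"Write a read-only SQL query for: {user_request}")
    # Trust boundary: this string came from the model, so validate it before acting on it.
    match = re.match(r"^\s*SELECT\s.+\sFROM\s+(\w+)", raw, re.IGNORECASE)
    if not match or match.group(1).lower() not in ALLOWED_TABLES:
        raise PermissionError("model output failed validation at the trust boundary")
    return raw  # only now does the text cross into the database layer

if __name__ == "__main__":
    try:
        print(run_model_suggested_query("list customer names"))
    except PermissionError as err:
        print(f"blocked: {err}")
```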

In the defenders' favor: Because randomness is often injected into AI systems to make them "creative," they tend to be less deterministic. In other words, because the same input does not always produce the same output, attacks do not always succeed in the same way either.

"For some exploits in a conventional information security setting, you can get close to 100% reliability when you inject this payload," Harang says. "When [an attacker] introduces information to try to manipulate the behavior of the LLM, the reliability of LLM exploits in general is lower than conventional exploits."

With Great Agency Comes Great Risk

One thing that sets AI environments apart from their more traditional IT counterparts is their capacity for autonomous agency. Companies do not just want AI applications that can generate content or analyze data; they want models that can take action. Those so-called agentic AI systems pose even greater potential risks: if an attacker can cause an LLM to do something unexpected, and the AI system has the ability to take action in another application, the results can be dramatic, Harang says.

"We've seen, even recently, examples in other systems of how tool use can sometimes lead to unexpected activity from the LLM or unexpected information disclosure," he says, adding: "As we develop increasing capabilities — including tool use — I think it's still going to be an ongoing learning process for the industry."

Harang notes that even with the greater risk, it's important to realize that the problem is solvable. He avoids the "sky is falling" hyperbole around GenAI and regularly uses it himself, whether to look up specifics such as the syntax of a particular programming function or to summarize academic papers.

"We've made significant improvements in our understanding of how LLM-integrated applications behave, and I think we've learned a lot over the past year or so about how to secure them and how to build security in from first principles," he says.

About the Author

Robert Lemos, Contributing Writer

Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT's Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline Journalism (Online) in 2003 for coverage of the Blaster worm. Crunches numbers on various trends using Python and R. Recent reports include analyses of the shortage in cybersecurity workers and annual vulnerability trends.
