Generative AI Projects Pose Major Cybersecurity Risk to Enterprises

Developers' enthusiasm for ChatGPT and other LLM tools leaves most organizations largely unprepared to defend against the vulnerabilities that the nascent technology creates.


Organizations rushing to embrace generative AI may be ignoring the significant security threats that large language model (LLM)-based technologies like ChatGPT pose, particularly in the open source development space and across the software supply chain, new research has found.

A report by Rezilion released June 28 examined how the open source landscape is approaching LLMs, looking in particular at the popularity, maturity, and security posture of the resulting projects. The researchers found that despite the rapid adoption of the technology among the open source community (with a whopping 30,000 GPT-related projects on GitHub alone), the initial projects being developed are, overall, insecure, leaving organizations that build on them exposed to substantial security risk.

This risk only stands to increase in the short term as generative AI continues its rapid adoption throughout the industry, demanding an immediate response to improve security standards and practices in the development and maintenance of the technology, says Yotam Perkal, director of vulnerability research at Rezilion.

"Without significant improvements in the security standards and practices surrounding LLMs, the likelihood of targeted attacks and the discovery of vulnerabilities in these systems will increase," he tells Dark Reading.

As part of its research, the team investigated the security of the 50 most popular GPT- and LLM-based open source projects on GitHub, each between two and six months into development. It found that while the projects are all extremely popular with developers, their relative immaturity is paired with generally low security ratings.

If developers rely on these projects to develop new generative-AI-based technology for the enterprise, then they could be creating even more potential vulnerabilities against which organizations are not prepared to defend, Perkal says.

"As these systems gain popularity and adoption, it is inevitable that they will become attractive targets for attackers, leading to the emergence of significant vulnerabilities," he says.

Key Areas of Risk

The researchers identified four key areas of security risk that the open source community's adoption of generative AI presents, with some overlap among the categories:

  • Trust boundary risk
  • Data management risk
  • Inherent model risk
  • Basic security best practices

Trust boundaries in open source development help organizations establish zones of trust in which they can have confidence in the security and reliability of an application's components and data. However, as users enable LLMs to use external resources such as databases, search interfaces, or external computing tools, greatly enhancing their functionality, the inherent unpredictability of LLM completion outputs can be exploited by malicious actors.

"Failure to address this concern adequately can significantly elevate the risks associated with these models," the researchers wrote.

Data-management risks such as data leakage and training-data poisoning also expose enterprises if developers fail to address them, not just when working with generative AI but with any machine-learning (ML) system, the researchers said. Not only can an LLM unintentionally leak sensitive information, proprietary algorithms, or other confidential data in its responses, but threat actors can also deliberately poison an LLM's training data to introduce vulnerabilities, backdoors, or biases that undermine the security, effectiveness, or ethical behavior of the model, the researchers warned.

Inherent model risks, meanwhile, account for two of the top LLM security problems: inadequate AI alignment and overreliance on LLM-generated content. "In fact, OpenAI, the creator of ChatGPT, warns its users about these risks in the main ChatGPT interface," the researchers noted in the report.

These risks refer to the phenomenon of generative AI like ChatGPT returning false or fabricated data sources or recommendations, commonly called "hallucinations," which researchers from Vulcan Cyber's Voyager18 have already warned could open organizations up to supply-chain attacks. This can occur when a developer asks ChatGPT or another generative AI solution for recommendations and the model suggests code packages that don't exist, both the Vulcan and Rezilion researchers noted. An attacker can identify this and publish a malicious package under the suggested name, which developers will then use and write directly into an application.

Other users, unaware of the malicious intent, can then pose similar questions to ChatGPT, and the risk spreads from there across the supply chain because the AI model is unaware of the code manipulation, they noted. "Consequently, unsuspecting users may unknowingly adopt the recommended package, putting their systems and data at risk," the researchers wrote.
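
One defensive habit that follows from this scenario is verifying that any AI-recommended dependency actually exists and has some history before installing it. The sketch below illustrates the idea in Python against PyPI's public JSON API; the release-count threshold and the vet_package() helper are illustrative assumptions rather than guidance from the Rezilion or Vulcan research.

```python
# Rough sketch of vetting an AI-recommended Python package before adding it to
# a project, using PyPI's public JSON API. The thresholds are illustrative.
import requests

def vet_package(name: str) -> bool:
    """Return True only if the package exists on PyPI and has some history."""
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    if resp.status_code != 200:
        print(f"'{name}' was not found on PyPI -- possibly a hallucinated name")
        return False
    releases = resp.json().get("releases", {})
    if len(releases) < 3:
        # A brand-new package with almost no release history deserves manual
        # review: it could be a malicious stand-in for a name an LLM invented.
        print(f"'{name}' exists but has only {len(releases)} release(s); review it manually")
        return False
    return True

if __name__ == "__main__":
    vet_package("requests")                 # long-established, should pass
    vet_package("totally-made-up-pkg-xyz")  # almost certainly nonexistent
```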

Finally, general security best-practice risks, such as improper error handling and insufficient access controls, also come with open source adoption of generative AI, even though they are not unique to LLMs or ML models. An attacker can exploit the information in LLM error messages to gather sensitive information or system details, which they can then use to launch a targeted attack or exploit known vulnerabilities, the researchers said. And insufficient access controls could allow users with limited permissions to perform actions beyond their intended scope, potentially compromising the system.
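
To illustrate the error-handling point, the hypothetical wrapper below logs full failure details server-side while returning only a generic message and an opaque reference ID to the user; query_llm() is a stand-in for whatever client an application actually uses.

```python
# Minimal sketch of sanitized error handling around an LLM call so internal
# details never reach the end user. query_llm() is a hypothetical placeholder.
import logging
import uuid

logger = logging.getLogger("llm_app")

def query_llm(prompt: str) -> str:
    raise NotImplementedError("replace with the real client call")

def handle_request(prompt: str) -> str:
    try:
        return query_llm(prompt)
    except Exception:
        # Keep the full traceback server-side for investigation...
        incident_id = uuid.uuid4().hex[:8]
        logger.exception("LLM request failed (incident %s)", incident_id)
        # ...but return only a generic message, so stack traces, prompts, and
        # system details can't be harvested by an attacker.
        return f"Sorry, something went wrong (reference {incident_id})."
```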

How Organizations Can Prepare, Mitigate Risk

Rezilion researchers presented specific ways that organizations can mitigate each of these risks, as well as overall advice for how they can prepare as the open source community continues to embrace these models for next-generation software development.

The first step in that preparation is awareness that integrating generative AI and LLMs comes with unique challenges and security concerns that may be different from anything organizations have encountered before, Perkal says. Moreover, the responsibility for preparing for and mitigating LLM risks lies not only with the organizations integrating the technology but also with the developers building and maintaining these systems, he notes.

As such, organizations should adopt what Perkal calls a "secure-by-design" approach when implementing generative AI-based systems. That entails leveraging existing frameworks like the Secure AI Framework (SAIF), NeMo Guardrails, or MITRE ATLAS to incorporate security measures directly into their AI systems, he says.

It's also "imperative" that organizations monitor and log LLM interactions and regularly audit and review the AI system's responses to detect potential security and privacy issues, and then to update and tweak the LLM accordingly, he concludes.
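
A bare-bones version of that monitoring might look like the sketch below, which wraps a model call with structured, lightly redacted audit logging; the generate() stand-in and the email-redaction pattern are assumptions for illustration, not a prescribed implementation.

```python
# Minimal sketch of logging LLM interactions for later audit. The generate()
# stand-in and the redaction pattern are illustrative assumptions.
import json
import logging
import re
import time

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("llm_audit")

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def generate(prompt: str) -> str:
    return "placeholder response"  # stand-in for the real model call

def audited_generate(prompt: str, user_id: str) -> str:
    response = generate(prompt)
    record = {
        "ts": time.time(),
        "user": user_id,
        # Redact obvious PII before the record is stored.
        "prompt": EMAIL_RE.sub("[REDACTED]", prompt),
        "response": EMAIL_RE.sub("[REDACTED]", response),
    }
    # Structured records make periodic review and anomaly detection easier.
    audit_log.info(json.dumps(record))
    return response

if __name__ == "__main__":
    audited_generate("Summarize the ticket from jane.doe@example.com", "user-42")
```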

About the Author

Elizabeth Montalbano, Contributing Writer

Elizabeth Montalbano is a freelance writer, journalist, and therapeutic writing mentor with more than 25 years of professional experience. Her areas of expertise include technology, business, and culture. Elizabeth previously lived and worked as a full-time journalist in Phoenix, San Francisco, and New York City; she currently resides in a village on the southwest coast of Portugal. In her free time, she enjoys surfing, hiking with her dogs, traveling, playing music, yoga, and cooking.
