
Will AI Code Generators Overcome Their Insecurities This Year?

In just two years, LLMs have become standard for developers — and non-developers — to generate code, but companies still need to improve security processes to reduce software vulnerabilities.


The use of large language models (LLMs) for code generation surged in 2024, with a vast majority of developers using OpenAI's ChatGPT, GitHub Copilot, Google Gemini, or JetBrains AI Assistant to help them code.

However, the security of the generated code, and developers' trust in that code, continues to lag. In September, a group of academic researchers found that more than 5% of the code generated by commercial models, and nearly 22% of the code generated by open source models, contained package names that do not exist. And in November, a study of the code generated by five popular artificial intelligence (AI) models found that at least 48% of the generated code snippets contained vulnerabilities.
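Nonexistent package names matter because an attacker can register a hallucinated name in a public repository and wait for developers to install it. One simple defensive habit, offered here as a hedged illustration rather than a recommendation from the researchers cited above, is to confirm that every dependency an AI assistant suggests actually exists in the official registry before installing it. The short Python sketch below checks suggested names against PyPI's public JSON API; the helper function and the example package list are hypothetical.

```python
# Hedged sketch: flag AI-suggested dependencies that do not exist on PyPI.
# The helper name and example package list are hypothetical; only the
# public PyPI JSON endpoint (https://pypi.org/pypi/<name>/json) is real.
import urllib.error
import urllib.request


def exists_on_pypi(package: str) -> bool:
    """Return True if PyPI has a project registered under this exact name."""
    url = f"https://pypi.org/pypi/{package}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            return response.status == 200
    except urllib.error.HTTPError:
        return False  # A 404 here means no such package is published
    except urllib.error.URLError:
        raise RuntimeError(f"Could not reach PyPI to verify {package!r}")


suggested_packages = ["requests", "totally-made-up-helper-lib"]  # example input
for name in suggested_packages:
    if not exists_on_pypi(name):
        print(f"WARNING: {name} is not on PyPI -- possible hallucinated dependency")
```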

While code-generating AI tools are accelerating development, companies need to adapt secure coding practices to keep up, says Ryan Salva, senior director of product and lead for developer tools and productivity at Google.

"I am deeply convinced that, as we adopt these tools, we can't just keep doing things the exact same way, and we certainly can't trust that the models will always give us the right answer," he says. "It absolutely has to be paired with good, critical human judgment every step of the way."

One significant risk is hallucination by code-generating AI systems, which, if accepted by the software developer, results in vulnerabilities and defects. Some 60% of IT leaders describe the impact of AI-coding errors as very or extremely significant, according to the "State of Enterprise Open-Source AI" report published by developer-tools maker Anaconda.

Companies need to make sure that AI is augmenting developers' efforts, not supplanting them, says Peter Wang, chief AI and innovation officer and co-founder at Anaconda.

"Users of these code-generation AI tools have to be really careful in vetting code before implementation," he says. "Using these tools is one way malicious code can slip in, and the stakes are incredibly high."

Developers Pursue Efficiency Gains

Nearly three-quarters of developers (73%) working on open source projects use AI tools for coding and documentation, according to GitHub's 2024 Open Source Survey, while a second GitHub survey of 2,000 developers in the US, Brazil, Germany, and India found that 97% had used AI coding tools to some degree.

The result is a significant increase in code volume. About a quarter of the code produced within Google is generated by AI systems, according to Google's Salva. Developers who regularly use GitHub and GitHub Copilot are also more active, producing 12% to 15% more code, according to the company's Octoverse 2024 report.

Overall, developers like the increased efficiency, with about half of developers (49%) finding that they save at least two hours a week due to their use of AI tools, according to the annual "State of Developer Ecosystem Report" published by software tools maker JetBrains.

In the push to get developer tools to market, AI firms chose versatility over precision, but that tradeoff will evolve over the coming year, says Vladislav Tankov, director of AI at JetBrains.

"Before the rise of LLMs, fine-tuned and specialized models dominated the market," he says. "LLMs introduced versatility, making anything you want just one prompt away, but often at the expense of precision. We foresee a new generation of specialized models that combine versatility with accuracy."

In October, JetBrains launched Mellum, an LLM specialized in code-generation tasks. The company trained the model in several phases, Tankov says, starting with a "general understanding and progressing to increasingly specialized coding tasks. This way, it retains a general understanding of the broader context, while excelling in its key function."

As part of its efforts, JetBrains has feedback mechanisms to reduce the likelihood of vulnerable code suggestions and extra filtering and analysis steps for AI-generated code, he says.

Security Remains a Concern

Overall, developers increasingly appear to trust the code generated by popular LLMs. While a majority of developers (59%) have security concerns about using AI-generated code, according to the JetBrains report, more than three-quarters (76%) believe that AI-powered coding tools produce more secure code than humans do.

The AI tools can help accelerate development of secure code, as long as developers know how to use the tools safely, Anaconda's Wang says. He estimates that AI tools can as much as double developer productivity, while producing errors 10% to 30% of the time.

Senior developers should use code-generating AI tools as "a very talented intern, knocking out a lot of the rote grunt work before passing it on for refinement and confirmation," he says. "For junior developers, it can reduce the time required to research and learn from various tutorials. Where junior developers need to be careful is with using code-generation AI to pull from sources or draft code they don't understand."

Yet AI is also helping to fix the problem.

GitHub's Wales points to tools like the service's Copilot Autofix as a way that AI can augment the creation of secure code. Developers using Autofix tend to fix vulnerabilities in their code more than three times faster than those who do so manually, according to GitHub.

"We've seen improvements in remediation rates since making the tool available to open source developers for free, from nearly 50% to nearly 100% using Copilot Autofix," Wales says.

And the tools are getting better. For the past few years, AI providers have seen code-suggestion acceptance rates increase by about 5% per year, but they have largely plateaued at an unimpressive 35%, says Google's Salva.

"The reason for that is that these tools have largely been grounded in the context that's surrounding the cursor, and that's in the [integrated development environment (IDE)] alone, and so they basically just take context from a little bit before and a little bit after the cursor," he says. "By expanding the context beyond the IDE, that's what tends to get us the next significant step in improving the quality of the response."

Discrete AIs for Developers' Pipelines

AI assistants are already specializing, targeting different aspects of the development pipeline. While developers continue to use AI tools integrated into their development environments and standalone tools, such as ChatGPT and Google's Gemini, development teams will likely need specialists to effectively produce secure code.

"The good news is that the advent of AI is already reshaping how we think about and approach cybersecurity," says GitHub's Wales. "2025 will be the era of the AI engineer, and we'll see the composition of security teams start to alter."

As attackers become more familiar with code-generation tools, attacks that attempt to leverage the tools may become more prevalent as well, says JetBrains' Tankov.

"Security will become even more pressing as agents generate larger volumes of code, some potentially bypassing thorough human review," he says. "These agents will also require execution environments where they make decisions, introducing new attack vectors — targeting the coding agents themselves rather than developers."

As AI code generation becomes the de facto standard in 2025, developers will need to be more cognizant of how to check for vulnerable code and how to ensure their AI tools prioritize security.

About the Author

Robert Lemos, Contributing Writer

Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT's Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline Journalism (Online) in 2003 for coverage of the Blaster worm. Crunches numbers on various trends using Python and R. Recent reports include analyses of the shortage in cybersecurity workers and annual vulnerability trends.

