
New Generative AI Tools Aim to Improve Security

The debate over whether ChatGPT and other generative AI tools will benefit defenders or further embolden attackers may be ongoing, but companies are moving ahead with new tools.


Generative artificial intelligence (AI) technologies, such as OpenAI's ChatGPT, have the potential to help security professionals defend against sophisticated, unpredictable attacks. The same technology may also embolden attackers. Security vendors, however, aren't waiting to see which way the debate goes before releasing their own GPT-based tools for security professionals.

Consider this non-exhaustive list of announcements from the past two months:

  • Airgap Networks announced ThreatGPT, an advanced machine learning model for its Zero Trust Firewall.

  • Endor Labs launched DroidGPT in private beta, a chatbot that helps developers select more secure, up-to-date, and less risky open source software components for their projects.

  • Microsoft unveiled Microsoft Security Copilot to help security teams investigate and respond to security incidents.

  • Overhaul announced RiskGPT, a feature for its compliance and risk platform to improve supply chain visibility, incident response time, and risk assessment capabilities.

  • SentinelOne unveiled a new threat hunting platform, which combines neural networks with a natural language interface based on ChatGPT, GPT-4, and other large language models (LLMs).

  • Skyhawk Security added the Threat Detector feature, which uses the ChatGPT API, to its cloud threat detection and response platform (a minimal sketch of this kind of API integration appears after this list).

  • Tenable Research released four tools on GitHub that use generative AI to identify vulnerabilities faster and more efficiently: G-3PO to automate reverse engineering, BurpGPT for Web application security research, EscalateGPT to identify identity and access management policy issues, and an AI assistant for the GNU Debugger to simplify debugging.

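Most of these products follow the same basic integration pattern: collect security telemetry, then send it to an LLM with a task-specific prompt. The minimal Python sketch below illustrates how a ChatGPT API call might be wired into alert triage; the prompt wording, model choice, and triage_alert helper are illustrative assumptions, not any vendor's actual code.

```python
# Hypothetical sketch of wiring the ChatGPT API into alert triage.
# Prompt wording, model choice, and helper names are illustrative
# assumptions, not any vendor's actual implementation.
import os

import openai  # pip install openai (0.27.x-era API shown here)

openai.api_key = os.environ["OPENAI_API_KEY"]

def triage_alert(alert_json: str) -> str:
    """Ask the model to explain an alert and rate its severity."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a security analyst. Explain the alert in plain "
                    "English, rate its severity (low/medium/high), and "
                    "suggest one next investigative step."
                ),
            },
            {"role": "user", "content": alert_json},
        ],
        temperature=0,  # keep triage output deterministic
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    alert = '{"event": "ConsoleLogin", "mfa": false, "src_ip": "203.0.113.7"}'
    print(triage_alert(alert))
```

The same pattern scales from one-off scripts like this to product features: what changes is the telemetry sent and the task the prompt asks the model to perform.
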
Tenable Research touted "ample opportunity" for defenders to harness LLMs in the white paper "How Generative AI is Changing Security Research," published in tandem with the release of its four tools.

"From log parsing and anomaly detection to triage and incident response capabilities, defenders could have the upper hand," the research group for Tenable Security wrote. "In addition to our examples of using LLMs like ChatGPT to reduce the manual workload of reverse engineering tasks and security research, another avenue where AI could prove to be a key tool for development teams is static code analysis to identify potentially exploitable code. Coupled with advanced threat detection and intelligence from trained AI models, there is an abundance of use cases to aid defenders."

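To make the static code analysis use case concrete, here is a hedged sketch of handing a code snippet to an LLM and asking it to flag exploitable patterns; the prompt and the deliberately vulnerable sample snippet are illustrative assumptions, not material from Tenable's white paper.

```python
# Hypothetical sketch of LLM-assisted static code review, in the spirit
# of the use case Tenable describes; not code from the white paper.
import os

import openai  # pip install openai (0.27.x-era API shown here)

openai.api_key = os.environ["OPENAI_API_KEY"]

# A deliberately vulnerable snippet to audit: user input flows into a shell.
SNIPPET = '''
import os

def run_backup(path):
    # path comes straight from user input
    os.system("tar czf /tmp/backup.tgz " + path)
'''

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": (
                "You are a code auditor. Point out potentially exploitable "
                "patterns in the code and suggest a safer alternative."
            ),
        },
        {"role": "user", "content": SNIPPET},
    ],
    temperature=0,
)
print(response.choices[0].message.content)
```
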
Fears of What AI Can Do

The surging use of ChatGPT is stoking fears that generative AI, which consists of algorithms that can produce realistic images, audio, video, or written content, will be used to do more harm than good, whatever efficiencies it provides. On Monday, renowned AI pioneer Geoffrey Hinton announced that he had quit his job at Google. Hinton, who has been referred to as the “Godfather of AI,” warned that tech giants like Google, Meta, Apple, and Microsoft may be moving too fast in unleashing the technology.

"It's hard to see how you can prevent the bad actors from using it for bad things," Hinton told The New York Times.

Leaders of those companies have responded, saying they take the potential misuse seriously and are creating safeguards. Speaking during a panel session at the World Economic Forum’s Growth Summit in Geneva on Wednesday, Microsoft chief economist Michael Schwarz said both Microsoft and its partner, ChatGPT creator OpenAI, “are really committed to making sure that AI is safe, that AI is used for good and not used for bad.”

However, Schwarz acknowledged the risks. "We do have to worry a lot about the safety of this technology, just like any other technology," he said. "By all means possible, we have to put in safeguards."

Changing Security Research

For many in the cybersecurity industry, AI promises to accelerate the development of new tools and the discovery of new threats at a scale that would otherwise be impossible, and concerns about its abuse should not be a reason to hold back on using it.

"While there’s certainly a dark side to these emerging technologies, Tenable Research also sees opportunities to use AI for the greater good," the research group wrote in the white paper. "For example, the art of bug hunting requires extensive security and coding skills, and it can take years for an individual to develop the necessary expertise to find zero-day vulnerabilities. As researchers, we turn our mindsets to using and developing tools that can reduce manual labor. With these generative models, we have a unique opportunity to change the trajectory of security research."

During an informal panel discussion Tenable held for the media during last week's RSA Conference, participants agreed that putting the brakes on AI development is unrealistic. "It's certainly not going to stop our adversaries," warned Mark Weatherford, CISO of AlertEnterprise and chief strategy officer for the National Cybersecurity Center.

"There’s just no way to stop it," added Tenable deputy CTO Robert Hansen. "Even if you wanted to stop ChatGPT from innovating, all the hackers I know are working on this right now, racing toward whatever they're trying to accomplish."

About the Author

Jeffrey Schwartz, Contributing Writer

Jeffrey Schwartz is a journalist who has covered information security and all forms of business and enterprise IT, including client computing, data center and cloud infrastructure, and application development for more than 30 years. Jeff is a regular contributor to Channel Futures. Previously, he was editor-in-chief of Redmond magazine and contributed to its sister titles Redmond Channel Partner, Application Development Trends, and Virtualization Review. Earlier, he held editorial roles with CommunicationsWeek, InternetWeek, and VARBusiness. Jeff is based in the New York City suburb of Long Island.

