Embrace Generative AI for Security, but Heed Caution
AI could be a net positive for security, with a caveat: It could make security teams dangerously complacent.
COMMENTARY
There's a lot of talk out there about the impact of generative artificial intelligence (AI) on cybersecurity — good and bad.
On one side, you have the advocates convinced of generative AI's potential to help fend off bad actors; on the other, the skeptics who fear generative AI will dramatically accelerate the volume and severity of security incidents in the coming years.
We're in the early innings of generative AI. But its potential has become hard to ignore.
It's already proving its value as an accelerant to automation — which is an attractive proposition for any chief information security officer (CISO) looking to shift their team's focus from tedious day-to-day tasks to more strategic projects.
We're also getting a glimpse of the future. Security teams worldwide are already experimenting with large language models (LLMs) as a force multiplier to:
Scan large volumes of data for hidden attack patterns and vulnerabilities.
Simulate tests for phishing attacks.
Generate synthetic data sets to train models to identify threats.
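For a concrete feel of that last item, here is a minimal sketch of generating labeled synthetic phishing and benign emails for classifier training. It assumes the OpenAI Python SDK (v1.x) with an API key in the environment; the model name, prompt wording, and output file are illustrative assumptions, and any synthetic data should still be reviewed by a human before it is used for training.

```python
# Minimal sketch: generate labeled synthetic emails for training a phishing classifier.
# Assumes the OpenAI Python SDK (v1.x) and OPENAI_API_KEY set in the environment.
# The model name and prompt wording are illustrative assumptions.
import json
from openai import OpenAI

client = OpenAI()

def synth_examples(kind: str, n: int = 5) -> list[dict]:
    """Ask the model for n short, clearly fictional emails of the given kind."""
    prompt = (
        f"Write {n} short, clearly fictional {kind} emails for training a "
        "phishing classifier. Return one email per line, with no numbering."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[{"role": "user", "content": prompt}],
    )
    lines = [l.strip() for l in resp.choices[0].message.content.splitlines() if l.strip()]
    return [{"text": line, "label": kind} for line in lines]

if __name__ == "__main__":
    dataset = synth_examples("phishing") + synth_examples("legitimate business")
    with open("synthetic_email_dataset.jsonl", "w") as f:
        for row in dataset:
            f.write(json.dumps(row) + "\n")
```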
I believe generative AI will be a net positive for security, but with a large caveat: It could make security teams dangerously complacent.
Simply put, an overreliance on AI could lead to a lack of supervision in an organization's security operations, which could easily open gaps in its defenses and expand its attack surface.
Look, Ma — No Hands!
There's a general belief that if AI becomes smart enough, it will require less human oversight. In a practical sense, this would result in less manual work. That sounds great in theory, but in reality, it's a slippery slope.
False positives and negatives are already a big problem in cybersecurity. Ceding more control to AI would only make things worse.
To break it down, LLMs are built on statistical analysis of text sequences and don't understand context. That leads to hallucinations that can be very tough to detect, even under close inspection.
For example, if a security pro asks an LLM for guidance on remediating a vulnerability related to Remote Desktop Protocol (RDP), it's likely to recommend the most common remediation method for that class of issue rather than the actual best fit. The guidance might be 100% wrong, yet appear plausible.
The LLM has no understanding of the vulnerability, nor what the remediation process means. It relies on a statistical analysis of typical remediation processes for that class of vulnerabilities.
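One practical guardrail is to make the analyst, not the model, the arbiter: pull the authoritative record before acting on the model's advice. The sketch below fetches a CVE's official description from the NVD API (v2.0) so a human can compare it against the model's claim; the CVE ID (CVE-2019-0708, the "BlueKeep" RDP flaw) and the sample claim are illustrative assumptions.

```python
# Rough sketch: never act on LLM remediation advice without checking an authoritative
# source. Pulls the official description for a CVE from the NVD API (v2.0) so a human
# can compare it against what the model claimed. The CVE ID and claim are examples.
import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def official_description(cve_id: str) -> str:
    """Fetch the English description for a CVE from NVD."""
    resp = requests.get(NVD_URL, params={"cveId": cve_id}, timeout=30)
    resp.raise_for_status()
    vulns = resp.json().get("vulnerabilities", [])
    if not vulns:
        return "No NVD record found."
    for desc in vulns[0]["cve"]["descriptions"]:
        if desc["lang"] == "en":
            return desc["value"]
    return "No English description available."

if __name__ == "__main__":
    llm_claim = "Patching is optional if RDP is behind a VPN."  # hypothetical model output
    print("LLM claim:      ", llm_claim)
    print("NVD description:", official_description("CVE-2019-0708"))
```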
The Accuracy and Inconsistency Conundrum
The Achilles' heel of LLMs lies in the inconsistency and inaccuracy of their outputs.
Tom Le, Mattel's CISO, knows this all too well. He and his team have been applying generative AI to amplify defenses but are finding that, more often than not, the models "hallucinate."
According to Le, "Generative AI hasn't reached a 'leap of faith' moment yet, where companies could rely on it without employees overseeing the outcome."
His sentiment reinforces my point that generative AI poses a threat by way of human complacency.
You Can't Take the Security Pro Out of Security
Contrary to what the doomers may think, generative AI will not replace humans — at least not in our lifetime. Human intuition remains unmatched at detecting certain security threats.
For example, in application security, SQL injection and other vulnerabilities can create huge cyber-risk that often surfaces only when humans reverse engineer and fuzz the application.
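As a taste of what that human-driven testing looks like in practice, here is a rough sketch of a basic SQL injection fuzz pass against a hypothetical, authorized test endpoint; the payloads and error signatures are illustrative and nowhere near exhaustive.

```python
# Rough sketch of a human-driven fuzz pass for SQL injection, assuming a hypothetical
# test target (http://localhost:8000/search) that you are authorized to probe.
# The payloads and error signatures are illustrative, not exhaustive.
import requests

TARGET = "http://localhost:8000/search"   # hypothetical, authorized test endpoint
PAYLOADS = ["'", "\" OR \"1\"=\"1", "' OR '1'='1' -- ", "1; DROP TABLE users --"]
ERROR_SIGNATURES = ["sql syntax", "sqlite error", "unclosed quotation", "odbc", "ora-"]

def looks_injectable(param: str) -> list[str]:
    """Send each payload in the given query parameter and flag suspicious responses."""
    findings = []
    for payload in PAYLOADS:
        resp = requests.get(TARGET, params={param: payload}, timeout=10)
        body = resp.text.lower()
        if resp.status_code >= 500 or any(sig in body for sig in ERROR_SIGNATURES):
            findings.append(f"{param}={payload!r} -> HTTP {resp.status_code}")
    return findings

if __name__ == "__main__":
    for hit in looks_injectable("q"):
        print("Possible SQL injection:", hit)
```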
Human-written code is also much easier for other humans to read, parse, and understand. In code that AI auto-generates, vulnerabilities can be far harder to detect because no human developer is intimately familiar with the codebase. Security teams that adopt AI-generated code will need to spend more time getting familiar with the output and identifying issues before they turn into exploits.
Looking to generative AI for fast code should not cause security teams to lower their guard; if anything, it may mean spending more time ensuring that code is safe.
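One low-cost way to build that familiarity is to run every batch of AI-generated code through a static analyzer before a human reviews it. The sketch below assumes the open source Bandit scanner is installed and that generated code lands in a hypothetical generated/ directory; a clean scan is a baseline for the human review, not a substitute for it.

```python
# Rough sketch: run a static analyzer over AI-generated code before it ships.
# Assumes Bandit is installed (pip install bandit) and that generated code lands in a
# hypothetical "generated/" directory. A clean scan precedes, not replaces, human review.
import subprocess
import sys

result = subprocess.run(
    ["bandit", "-r", "generated/", "-f", "txt"],
    capture_output=True,
    text=True,
)
print(result.stdout)

# Bandit exits nonzero when it reports issues; stop the pipeline so a human looks first.
if result.returncode != 0:
    sys.exit("Bandit flagged potential issues in AI-generated code; manual review required.")
```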
AI Is Not All Bad
Despite both the positive and negative sentiments today, generative AI has the potential to augment our capabilities. It just has to be applied judiciously.
For instance, deploying generative AI in conjunction with Bayesian machine learning (ML) models can be a safer way to automate cybersecurity. The pairing makes training, assessment, and measurement of output easier, and the combined system is easier to inspect and debug when inaccuracies occur. It can be used either to create new insights from data or to validate the output of a generative AI model.
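As one illustration of that pairing, the sketch below keeps a Beta posterior over how often an LLM's alert verdicts agree with human review and only endorses limited automation once the posterior clears a confidence bar; the audit counts and thresholds are illustrative assumptions.

```python
# Minimal sketch of one way to pair an LLM with a Bayesian check: maintain a Beta
# posterior over how often the model's alert verdicts match human review, and only
# allow limited automation once that posterior clears a confidence bar.
# The counts and thresholds below are illustrative assumptions.
from scipy.stats import beta

# Human-audited outcomes for the LLM's "benign / malicious" verdicts on past alerts.
correct, incorrect = 42, 9                      # hypothetical audit counts
posterior = beta(1 + correct, 1 + incorrect)    # Beta(1, 1) uniform prior

# Probability that the model's true accuracy exceeds 80%.
p_accurate_enough = 1 - posterior.cdf(0.80)

if p_accurate_enough > 0.95:
    print("Posterior supports limited automation; keep auditing a sample of verdicts.")
else:
    print(f"Only {p_accurate_enough:.0%} sure accuracy > 80%: keep a human in the loop.")
```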
Alas, cyber pros are people, and people are not perfect. We may be slow, exhausted after long workdays, and error-prone, but we have something AI does not: judgment and nuance. We have the ability to understand and synthesize context; machines don't.
Handing over security tasks entirely to generative AI, with no human oversight and judgment, could result in short-term convenience and long-term security gaps.
Instead, use generative AI to surgically augment your security talent. Experiment. In the end, the work you put in upfront will save your organization unnecessary headaches later.