Cisco Previews AI Defenses to Cloud Security Platform
Set for release in March, Cisco AI Defense will provide algorithmic red teaming of large language models with technology that came over as part of the Robust Intelligence acquisition last year.
Cisco is expanding its cloud security platform with new technology that will let developers detect and mitigate vulnerabilities in artificial intelligence (AI) applications and their underlying models.
The new Cisco AI Defense offering, introduced Jan. 15, is also designed to prevent data leakage by employees who use services like ChatGPT, Anthropic's Claude, and Microsoft Copilot. The networking giant already offers AI Defense to early-access customers and plans to release it for general availability in March.
AI Defense is integrated with Cisco Secure Access, the revamped secure service edge (SSE) cloud security portfolio that Cisco launched last year. The software-as-a-service offering includes zero-trust network access, VPN-as-a-service, a secure Web gateway, cloud access security broker, firewall-as-a-service, and digital experience monitoring.
Administrators can view the AI Defense dashboard in the Cisco Cloud Control interface, which hosts all of Cisco's cloud security offerings.
Gaps in AI Capabilities
AI Defense is intended to help organizations that are concerned about the security risks associated with AI but are under pressure to implement the technology into their business processes, said Jeetu Patel, Cisco's chief product officer and executive VP, at the launch event.
"You need to have the right level of speed and velocity to keep innovating in this world, but you also need to make sure that you have safety," Patel said. "These are not trade-offs that you want to have. You want to make sure that you have both."
According to Cisco's 2024 AI Readiness Survey, 71% of respondents don't believe they are fully equipped to prevent unauthorized tampering with AI within their organizations. Further, 67% said they have a limited understanding of the threats specific to machine learning. Patel said AI Defense addresses these issues.
"Cisco AI Defense is a product which is a common substrate of safety and security that can be applied across any model, that can be applied across any agent, any application, in any cloud," he said.
Model Validation at Scale
Cisco AI Defense is primarily targeted at enterprise AppSecOps organizations. It allows developers to validate AI models before applications and agents are deployed into production.
Patel noted that the challenge with AI models is that they are constantly changing with new data added to them, which changes the behavior of the applications and agents.
"If models are changing continuously, your validation process also has to be continuous," he said.
Seeking a way to offer the equivalent of red teaming, Cisco last year acquired Robust Intelligence, a startup founded in 2019 by Harvard researchers Yaron Singer and Kojin Oshiba; its technology is the core component of AI Defense. The Robust Intelligence Platform scans for vulnerabilities using algorithmic red teaming, along with a technique Robust Intelligence created called Tree of Attacks with Pruning (TAP), an AI-based method that uses automation to systematically jailbreak large language models (LLMs).
According to Patel, Cisco AI Defense uses detection models from generative AI (GenAI) platform provider Scale AI and threat intelligence telemetry from Cisco's Talos and its recently acquired Splunk to continuously validate the models and automatically recommend guardrails. Further, he noted that Cisco designed AI Defense to distribute those guardrails through the network fabric.
"This essentially allows us to deliver a purpose-built model and data for going out, allowing us to validate if a model is going to work as per expectations or if it's going to surprise us," said Patel, adding that it typically takes most organizations seven to 10 weeks to validate a model. "We can do it within 30 seconds because this is completely automated," he said.
An Industry First?
Analysts believe Cisco is the first major player to launch technology that can address automated model verification at that scale.
"I don't know anyone else who's done anything close to this," says Frank Dickson, group VP for IDC's security and trust research practice. "I've heard people doing what we might call an LLM firewall, but it's not as intricate and complex as this. The ability to do this kind of automated pen testing in 30 seconds looks pretty slick."
Scott Crawford, research director for the 451 Research Information Security channel with S&P Global Market Intelligence, agrees, noting that a variety of large vendors are approaching security for GenAI in different ways.
"But in Cisco's case, it made the first acquisition of a startup with this focus with its pickup of Robust Intelligence, which is at the heart of this initiative," Crawford says. "There are a range of other startups in this space, any of which could be an acquisition target in this emerging field, but this was the first such acquisition by a major enterprise IT vendor."
Addressing AI security will be a major concern this year, given the rise in attacks against vulnerable models, Crawford says.
"We have already seen examples of LLM exploits, and experts have considered the ways in which it can be manipulated and attacked," he says.
Such incidents, often described as LLMjacking, are waged by exploiting vulnerabilities through prompt injection, supply chain attacks, and data and model poisoning. One notable LLMjacking attack, discovered last year by the Sysdig Threat Research Team, used stolen cloud credentials to target 10 cloud-hosted LLMs. In that incident, the attackers obtained the credentials from a system running a vulnerable version of Laravel (CVE-2021-3129).
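Of the vectors above, prompt injection is the one a guardrail layer typically screens for first. The snippet below is a deliberately naive illustration of that idea using pattern matching; production guardrails, including the kind described in this article, rely on trained detection models rather than a fixed regex list, and these patterns are invented for the example.

```python
import re

# Hypothetical examples of common injection phrasings, for illustration only.
INJECTION_PATTERNS = [
    r"ignore (all|any|previous|prior) (instructions|rules)",
    r"you are now (in )?developer mode",
    r"reveal (your|the) (system|hidden) prompt",
]

def flag_prompt(prompt: str) -> bool:
    """Return True if the prompt matches a known injection phrasing."""
    lowered = prompt.lower()
    return any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)
```

A screen like this would sit in front of the model, rejecting or escalating flagged requests before they ever reach the LLM.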