Ensuring AI Safety While Balancing Innovation
Experts will explore the oft-neglected necessity of AI safety and its integration with security practices at next month's Black Hat USA in Las Vegas.
As tools and technology that use artificial intelligence (AI) continue to emerge at a rapid pace, the rush to innovate often overshadows critical conversations about safety. At Black Hat USA 2024, taking place next month in Las Vegas, a panel of experts will explore the topic of AI safety. Organized by Nathan Hamiel, who leads the Fundamental and Applied Research team at Kudelski Security, the panel aims to dispel myths and highlight the responsibilities organizations have regarding AI safety.
Hamiel says that AI safety is not just a concern for academics and governments.
"Most security professionals don't think much about AI safety," he says. "They think it's something that governments or academics need to worry about or maybe even organizations creating foundational models."
However, the rapid integration of AI into everyday systems and its use in critical decision-making processes necessitate a broader focus on safety.
"It's unfortunate that AI safety has been lumped into the existential risk bucket," Hamiel says. "AI safety is crucial for ensuring that the technology is safe to use."
Intersection of AI Safety and Security
The panel discussion will explore the intersection of AI safety and security and how the two concepts are interrelated. Security is a fundamental aspect of safety, according to Hamiel. An insecure product is not safe to use, and as AI technology becomes more ingrained in systems and applications, the responsibility of ensuring these systems' safety increasingly falls on security professionals.
"Security professionals will play a larger role in AI safety because of its proximity to their current responsibilities securing systems and applications," he says.
Addressing Technical and Human Harms
One of the panel's key topics will be the various harms that can manifest from AI deployments. Hamiel categorizes these harms using the acronym SPAR, which stands for secure, private, aligned, and reliable. This framework helps organizations assess whether AI products are safe to use.
"You can't start addressing the human harms until you address the technical harms," Hamiel says, underscoring the importance of considering the use case of AI technologies and the potential cost of failure in those specific contexts. The panel will also discuss the critical role organizations play in AI safety.
"If you're building a product and delivering it to customers, you can't say, 'Well, it's not our fault, it's the model provider's fault,'" Hamiel says.
Organizations must take responsibility for the safety of the AI applications they develop and deploy. This responsibility includes understanding and mitigating potential risks and harms associated with AI use.
Innovation and AI Safety Go Together
The panel will feature a diverse group of experts, including representatives from both the private sector and government. The goal is to provide attendees with a broad understanding of the challenges and responsibilities related to AI safety, allowing them to take informed actions based on their unique needs and perspectives.
Hamiel hopes that attendees will leave the session with a clearer understanding of AI safety and the importance of integrating safety considerations into their security strategies.
"I want to dispel some myths about AI safety and cover some of the harms," he says. "Safety is part of security, and information security professionals have a role to play."
The conversation at Black Hat aims to raise awareness and provide actionable insights to ensure that AI deployments are safe and secure. As AI continues to advance and integrate into more aspects of daily life, discussions like these are essential, Hamiel says.
"This is an insanely hot topic that will only get more attention in the coming years," he notes. "I'm glad we can have this conversation at Black Hat."