Feds: Reducing AI Risks Requires Visibility & Better Planning

While attackers have targeted AI systems, failures in AI design and implementation are far more likely to cause headaches, so companies need to prepare.

When the US Department of Energy (DoE) analyzed the use of artificial intelligence and machine learning (AI/ML) models in critical infrastructure last month, the agency came up with a top 10 list of potential beneficial applications of the technology, including simulations, predictive maintenance, and malicious-event detection. 

Predictably, the DoE also identified four broad categories of risk: unintentional failure modes, adversarial attacks against AI, hostile applications of AI, and compromise of the AI supply chain.

The DoE is not alone — the Biden administration is driving an extensive government assessment of the benefits and risks of using AI, especially in critical infrastructure networks. On May 3, for example, the Department of Transportation issued a request for information asking interested parties to describe both the benefits and dangers of AI to the transportation system. On April 29, the Department of Homeland Security (DHS) spelled out its own take, describing three broad categories of risk: attacks using AI, attacks targeting AI systems, and failures of design or implementation.

The DHS also gave broad recommendations on how organizations can mitigate the risks of AI, focusing on a four-part strategy: governing by creating policy and a culture of risk management, mapping all current assets and services that use AI, measuring by monitoring the ongoing usage of AI, and managing by implementing a risk-management strategy.

It's a good, broad overview of what organizations need to do to mitigate AI risk, but it's just a start, says Malcolm Harkins, chief security and trust officer at HiddenLayer, an AI risk management firm.

"If you look at this like a book, they're great chapters — great macro business processes," he says. "The real success or failure will become the depth of [your approach], and then the efficacy and efficiency with which you do it."

Organizations have already encountered a variety of these risks. Malicious AI/ML models hosted on Hugging Face and other repositories have demonstrated the potential for attacks through the AI supply chain, as described by the DoE. Indirect prompt-injection attacks against ChatGPT and other large language models (LLMs) have demonstrated that even the most promising AI models can be co-opted or corrupted by attackers, as highlighted by the DHS.

Attackers are also widely experimenting with AI models to make their operations more efficient and their attacks — especially phishing attacks — more effective.

(Try to) Ignore the AI Hype & Start Small

For organizations, the growing use of AI means growing exposure to the risks. Organizations won't be able to avoid adopting AI/ML models: Even if they are not rushing to adopt AI in their own operations, an increasing number of products include — or at least claim to include — AI features. 

In its report, "Safety and Security Guidelines for Critical Infrastructure Owners and Operators," the DHS describes AI risk management as a framework of ongoing processes that Map, Measure, and Manage the business's exposure to AI, with an overarching Govern function that regulates those activities.

For many companies, the Map and Measure parts of the DHS mitigation strategy will initially be the most important, HiddenLayer's Harkins says. 

"I'm a former finance procurement guy — I need an inventory; I need to discover the assets to manage," he says. "Where is AI in use? Where am I getting it from a third party because they've started incorporating into the technology they provided to me, and then how do I ask the right questions of my third-party risk management to make sure they've done it right?"

Mapping involves identifying all the uses of AI in the organization's environment, documenting the possible safety and security risks of those implementations, and reviewing third-party supply chains for AI risk. Measuring focuses on defining metrics to detect and manage AI risk, as well as the continuous monitoring of AI implementations. 
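
What that looks like in practice will vary by organization, but the core artifact is a living inventory with risk metadata attached to each entry. As a rough, purely illustrative sketch in Python (the record fields and the 90-day review threshold below are assumptions for the example, not anything the DHS guidelines prescribe):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIAsset:
    """Hypothetical record for one AI/ML use found during the Map phase."""
    name: str                # e.g., "support-chat LLM", "demand forecaster"
    owner: str               # business unit accountable for the asset
    source: str              # "in-house", "third-party", or "embedded in vendor product"
    data_shared: list[str]   # categories of data the model receives
    risks: list[str]         # e.g., "prompt injection", "supply chain", "model drift"
    last_reviewed: date = field(default_factory=date.today)

inventory = [
    AIAsset(
        name="support-chat LLM",
        owner="Customer Service",
        source="third-party",
        data_shared=["customer PII", "ticket history"],
        risks=["prompt injection", "data leakage"],
        last_reviewed=date(2024, 1, 15),
    ),
]

# A crude Measure-phase metric: which assets are overdue for a risk review?
overdue = [a.name for a in inventory
           if (date.today() - a.last_reviewed).days > 90]
print(f"{len(overdue)} of {len(inventory)} AI assets overdue for review: {overdue}")
```

Even a spreadsheet serves the same purpose; the point, as Harkins suggests, is knowing where AI is in use before attempting to manage it.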

Operational Technology Requires Stricter Controls

The DHS paper focuses specifically on critical infrastructure owners and operators, who are considering AI models and platforms as possible solutions to long-standing challenges, such as logistics and cyber defense, with the top AI use categories including operational awareness, performance optimization, and automation of operations.

Using AI in the world of operational technology means that companies have to worry about the secure transfer of data into the cloud because — while smaller ML models can run on-premises — the most advanced AI models are run in the cloud as a service, says Phil Tonkin, field CTO for Dragos, a provider of cybersecurity for critical infrastructure. 

Thus, organizations need to minimize the amount of data sent to the cloud, secure those communications, and monitor the connection for anomalous behavior that could indicate malicious activity, he says. 

"While you may establish trust between that AI service and the OT system, you still have potential risks that may come down through those now-trusted links," Tonkin says. "So monitoring all of the traffic, in and out, is the one the way to do it."

The DHS has already implemented, or is in the process of implementing, AI in four pilot programs.

The Cybersecurity and Infrastructure Security Agency has already completed a pilot using AI cybersecurity systems to detect and remediate software vulnerabilities in critical infrastructure and US government systems. The DHS also announced it would be using an AI platform to help the Homeland Security Investigations agency investigate fentanyl distribution and child sexual exploitation, and the Federal Emergency Management Agency plans to use AI to support communities in developing plans for mitigating risks and improving resilience. Finally, the United States Citizenship and Immigration Services plans to use AI to improve officer training.

About the Author

Robert Lemos, Contributing Writer

Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT's Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline Journalism (Online) in 2003 for coverage of the Blaster worm. Crunches numbers on various trends using Python and R. Recent reports include analyses of the shortage in cybersecurity workers and annual vulnerability trends.
