NIST Weighs in on AI Risk
NIST is developing the AI Risk Management Framework and a companion playbook to help organizations navigate algorithmic bias and risk.
As organizations begin to adopt AI products, systems, and services into their environments, they are looking for guidance on mitigating algorithmic bias and other risks. The big fear with AI is that it may be used in ways its designers did not intend.
This focus on being aware of human impact on technology is part of the “socio-technical” effort by the National Institute of Standards and Technology (NIST) to develop a framework that helps organizations navigate bias in AI and build trust into these systems. NIST is currently asking for comments from the public and private sectors on the second draft of the Artificial Intelligence Risk Management Framework (AI RMF) and on the companion NIST AI RMF Playbook. The playbook is intended to help organizations implement the framework, with suggested actions, references, and supplementary guidance.
The framework is split into four functions: Govern, Map, Measure, and Manage. The playbook will offer guidance on the first two functions, Govern and Map; recommendations for the latter two, Measure and Manage, will follow at a later date.
NIST says its socio-technical approach will “connect the technology to societal values,” and the agency plans to develop guidance that considers the ways humans can affect how technology is used. The framework also examines “the interplay between bias and cybersecurity and how they interact with each other,” NIST said when the first draft was introduced.
The AI RMF focuses on three types of bias associated with AI: statistical, systemic, and human. A systemic bias, for example, would be a business or operating process that contributes to consistently skewed decisions. Current recommendations include fostering a governance structure with clear individual roles and responsibilities, and a professional culture that supports transparent feedback on technologies and products.
“From my experience, what I’ve seen is the reliance on AI too much,” says Chuck Everette, director of cybersecurity advocacy at Deep Instinct. “I see it too often that organizations forget that threats are constantly evolving and changing, therefore you have to make sure your AI algorithms are properly tuned and your models are properly being adapted to the newest threats. Also, I’ve seen cases where bias data has been used and introduced, therefore leaving environments open to certain types of attack due to inaccurate data training.”
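Everette’s last point, that skewed training data can leave blind spots attackers may exploit, is the kind of statistical bias the framework flags. As a purely illustrative sketch (not part of NIST’s guidance), the short Python check below surfaces one common symptom, class imbalance in a labeled training set; the dataset, labels, and threshold are all hypothetical.

```python
from collections import Counter

def check_label_balance(labels, max_ratio=5.0):
    """Warn when training labels are heavily imbalanced.

    labels: iterable of class labels, e.g. "benign" / "malicious".
    max_ratio: hypothetical largest-to-smallest class ratio allowed
               before warning; tune per use case.
    """
    counts = Counter(labels)
    ratio = max(counts.values()) / min(counts.values())
    if ratio > max_ratio:
        print(f"Warning: class ratio {ratio:.0f}:1 exceeds {max_ratio}:1 -- "
              "consider resampling or gathering more minority-class data.")
    return counts, ratio

# Made-up telemetry labels: 950 benign vs. 50 malicious samples (19:1).
labels = ["benign"] * 950 + ["malicious"] * 50
check_label_balance(labels)
```

A check like this catches only one narrow failure mode; the framework’s point is that statistical, systemic, and human biases each need their own controls.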
The comment period for both drafts ends Sept. 29. The final version of the AI RMF is expected in early 2023, with the playbook to follow.