MITRE Launches AI Incident Sharing Initiative
The collaboration with industry partners aims to improve collective AI defenses. Trusted contributors receive protected and anonymized data on real-world AI incidents.
October 4, 2024
MITRE's Center for Threat-Informed Defense announced the launch of the AI Incident Sharing initiative this week, a collaboration with more than 15 companies to increase community knowledge of threats and defenses for systems that incorporate artificial intelligence (AI).
The initiative, which falls under the purview of the center's Secure AI project, aims to facilitate quick and secure collaboration on threats, attacks, and accidents involving AI-enabled systems. It expands the reach of the MITRE ATLAS community knowledge base, which has been collecting and characterizing data on anonymized incidents for two years. Under this initiative, a community of collaborators will receive protected and anonymized data on real-world AI incidents.
Anyone can submit incidents online. Submitting organizations will be considered for membership in the collaboration, with the goal of enabling data-driven risk intelligence and analysis at scale.
Secure AI also extended the ATLAS threat framework to incorporate information on the generative AI-enabled system threat landscape, adding several new generative AI-focused case studies and attack techniques, as well as new methods to mitigate attacks on these systems. Last November, in collaboration with Microsoft, MITRE released updates to the ATLAS knowledge base focused on generative AI.
"Standardized and rapid information sharing about incidents will allow the entire community to improve the collective defense of such systems and mitigate external harms," said Douglas Robbins, vice president, MITRE Labs, in a statement.
MITRE operates a similar public-private partnership, the Aviation Safety Information Analysis and Sharing database, which shares safety data to identify and prevent hazards in aviation.
Collaborators on Secure AI span industries, with representatives from financial services, technology, and healthcare. The list includes AttackIQ, BlueRock, Booz Allen Hamilton, CATO Networks, Citigroup, Cloud Security Alliance, CrowdStrike, FS-ISAC, Fujitsu, HCA Healthcare, HiddenLayer, Intel, JPMorgan Chase Bank, Microsoft, Standard Chartered, and Verizon Business.