MITRE Launches AI Incident Sharing Initiative

The collaboration with industry partners will improve collective AI defenses. Trusted contributors receive protected and anonymized data on real-world AI incidents.


MITRE’s Center for Threat-Informed Defense announced the launch of the AI Incident Sharing initiative this week, a collaboration with more than 15 companies to increase community knowledge of threats and defenses for AI-enabled systems.

The incident sharing initiative falls under the purview of the center’s Secure AI project and aims to enable quick, secure collaboration on threats, attacks, and accidents involving AI-enabled systems. It expands the reach of the MITRE ATLAS community knowledge base, which has been collecting and characterizing data on anonymized incidents for two years. Under this initiative, a community of collaborators will receive protected and anonymized data on real-world AI incidents.

Anyone can submit incidents via the web at https://ai-incidents.mitre.org/. Submitting organizations will be considered for membership, with the goal of enabling data-driven risk intelligence and analysis at scale.

Secure AI also extended the ATLAS threat framework to incorporate information on the generative AI-enabled system threat landscape, adding several new generative AI-focused case studies and attack techniques, as well as new methods to mitigate attacks on these systems. In November 2023, in collaboration with Microsoft, MITRE released updates to the ATLAS knowledge base focused on generative AI. 

"Standardized and rapid information sharing about incidents will allow the entire community to improve the collective defense of such systems and mitigate external harms," said Douglas Robbins, vice president, MITRE Labs, in a statement.

MITRE operates a similar public-private information-sharing partnership, the Aviation Safety Information Analysis and Sharing database, which shares data and safety information to identify and prevent hazards in aviation.

Collaborators on Secure AI span industries, with representatives from financial services, technology, and healthcare. The list includes AttackIQ, BlueRock, Booz Allen Hamilton, Cato Networks, Citigroup, Cloud Security Alliance, CrowdStrike, FS-ISAC, Fujitsu, HCA Healthcare, HiddenLayer, Intel, JPMorgan Chase Bank, Microsoft, Standard Chartered, and Verizon Business.

About the Author

Jennifer Lawinski, Contributing Writer

Jennifer Lawinski is a writer and editor with more than 20 years' experience in media, covering a wide range of topics including business, news, culture, science, technology, and cybersecurity. After earning a master's degree in journalism from Boston University, she started her career as a beat reporter for The Daily News of Newburyport. She has since written for a variety of publications including CNN, Fox News, TechTarget, CRN, CIO Insight, MSN News, and Live Science. She lives in Brooklyn with her partner and two cats.
