
Risk Strategies Drawn From the EU AI Act

The EU AI Act provides a governance, risk, and compliance (GRC) framework that helps organizations take a risk-based approach to using AI.

Kyle McLaughlin, General Counsel, Secureframe

October 9, 2024


COMMENTARY

As artificial intelligence (AI) becomes increasingly prevalent in business operations, organizations must adapt their governance, risk, and compliance (GRC) strategies to address the privacy and security risks this technology poses. The European Union's AI Act provides a valuable framework for assessing and managing AI risk, offering insights that can benefit companies worldwide.

The EU AI Act applies to providers and deployers of AI systems in the EU, and it also reaches organizations based elsewhere that place AI systems on the EU market or whose systems are used within the EU. Its primary goal is to ensure that AI systems are safe and respect fundamental rights and values, including privacy, nondiscrimination, and human dignity.

The EU AI Act categorizes AI systems into four risk levels. On one end of the spectrum, AI systems that pose clear threats to safety, livelihoods, and rights are deemed an Unacceptable Risk and are prohibited outright. On the other end, AI systems classified as Minimal Risk are largely unregulated, though still subject to general safety and privacy rules.

The classifications to study for GRC management are High Risk and Limited Risk. High Risk denotes AI systems that pose a significant risk of harm to individuals' health, safety, or fundamental rights. Limited Risk AI systems pose a lower threat to safety, privacy, or rights but remain subject to transparency obligations.

The EU AI Act allows organizations to take a risk-based approach when assessing AI. The framework establishes a logical structure for AI risk assessments, particularly for High and Limited Risk activities, and the tiers themselves are simple enough to encode directly in a GRC tool, as the sketch below shows.
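
To make that concrete, here is a minimal Python sketch of the four tiers and a lookup that maps an AI use case to a tier. The use-case names and tier assignments are illustrative assumptions, not an official mapping; real classification requires legal analysis against the Act's annexes.

```python
from enum import Enum

class RiskTier(Enum):
    """The EU AI Act's four risk tiers, from most to least regulated."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # strict compliance obligations
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # largely unregulated

# Illustrative mapping only -- real classification needs legal review,
# not a lookup table.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "credit_scoring": RiskTier.HIGH,
    "recruitment_screening": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Return the assumed tier, defaulting to HIGH so unknown use cases
    get the strictest review rather than the laxest."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

print(classify("credit_scoring"))  # RiskTier.HIGH
print(classify("new_use_case"))    # RiskTier.HIGH (fail safe)
```

Defaulting unknown systems to High Risk is a deliberately conservative design choice: a GRC program should err toward more scrutiny, not less, when a use case has not been assessed.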

Requirements for High-Risk AI Activities

High-Risk AI activities can include credit scoring, AI-driven recruitment, healthcare diagnostics, biometric identification, and safety-critical systems in transportation. For these and similar activities, the EU AI Act mandates the following stringent requirements, which a GRC team can track per system as a checklist (see the sketch after this list):

  1. Risk management system: Implement a comprehensive risk management system throughout the AI system's life cycle.

  2. Data governance: Ensure proper data governance with high-quality datasets to prevent bias.

  3. Technical documentation: Maintain detailed documentation of the AI system's operations.

  4. Transparency: Provide clear communication about the AI system's capabilities and limitations.

  5. Human oversight: Enable meaningful human oversight for monitoring and intervention.

  6. Accuracy and robustness: Ensure the AI system maintains appropriate accuracy and robustness.

  7. Cybersecurity: Implement state-of-the-art security mechanisms to protect the AI system and its data.
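
Tracking these seven obligations does not require exotic tooling to start. Below is a minimal, hypothetical per-system checklist in Python; the field names are our own shorthand for the requirements above, not terms defined by the Act.

```python
from dataclasses import dataclass, fields

@dataclass
class HighRiskCompliance:
    """One record per high-risk AI system; True means evidence is on file."""
    risk_management_system: bool = False   # lifecycle risk management
    data_governance: bool = False          # high-quality, bias-checked data
    technical_documentation: bool = False  # detailed docs of operations
    transparency: bool = False             # capabilities/limits disclosed
    human_oversight: bool = False          # monitoring and intervention
    accuracy_and_robustness: bool = False
    cybersecurity: bool = False            # state-of-the-art protections

    def gaps(self) -> list[str]:
        """List requirements that still lack evidence."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

record = HighRiskCompliance(risk_management_system=True,
                            data_governance=True)
print(record.gaps())
# ['technical_documentation', 'transparency', 'human_oversight',
#  'accuracy_and_robustness', 'cybersecurity']
```

In practice each boolean would point to actual evidence (documents, audit logs, test reports), but even a gap list like this makes the scope of high-risk obligations visible to stakeholders.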

Requirements for Limited and Minimal Risk AI Activities

While Limited and Minimal Risk activities don't require the same level of scrutiny as High-Risk systems, they still warrant careful consideration:

  1. Data assessment: Identify the types of data involved, its sensitivity, and how it will be used, stored, and secured.

  2. Data minimization: Ensure that only essential data is collected and processed; the sketch after this list shows one way to enforce this at the point where data leaves your systems.

  3. System integration: Evaluate how the AI system will interact with other internal or external systems.

  4. Privacy and security: Apply traditional data privacy and security measures.

  5. Transparency: Implement clear notices that inform users of AI interaction or AI-generated content.
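
As an example of items 2 and 5, data minimization can be enforced at the boundary where records are handed to an AI service, alongside a user-facing disclosure. This is a minimal sketch with made-up field names; the right allowlist depends on your own data inventory.

```python
# Data-minimization sketch: pass only an explicit allowlist of fields
# to the AI system, and disclose the AI interaction to the user.
# Field names are hypothetical examples, not a recommended schema.

ALLOWED_FIELDS = {"ticket_id", "product", "issue_description"}

AI_NOTICE = "This response was generated with the assistance of an AI system."

def minimize(record: dict) -> dict:
    """Drop every field not explicitly allowlisted before AI processing."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

raw = {
    "ticket_id": "T-1042",
    "product": "router",
    "issue_description": "Device reboots intermittently.",
    "customer_email": "user@example.com",  # sensitive: never sent
    "payment_card": "4111-XXXX",           # sensitive: never sent
}

print(minimize(raw))
# {'ticket_id': 'T-1042', 'product': 'router',
#  'issue_description': 'Device reboots intermittently.'}
print(AI_NOTICE)
```

An allowlist is preferable to a blocklist here: new sensitive fields added upstream stay excluded by default instead of leaking until someone remembers to block them.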

Requirements for All AI Systems: Assessing Training Data

The assessment of AI training data is crucial for risk management. Key considerations for the EU AI Act include ensuring that you have the necessary rights to use the data for AI training purposes, as well as implementing strict access controls and data segregation measures for sensitive data.

In addition, organizations must respect authors' rights and prevent unauthorized reproduction of protected IP. They also have to maintain high-quality, representative datasets and mitigate potential biases. Finally, they must keep clear records of data sources and transformations for traceability and compliance purposes, as in the sketch below.
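
One lightweight way to keep those records is a provenance entry per training dataset. The schema below reflects our own assumptions about what is worth capturing; the Act does not prescribe a record format.

```python
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class DatasetProvenance:
    """One traceability record per training dataset (illustrative schema)."""
    source: str                 # where the data came from
    rights_basis: str           # license or contract permitting training use
    contains_pii: bool          # drives access control and segregation
    transformations: list[str]  # e.g., redaction, dedup, filtering
    recorded_on: date

entry = DatasetProvenance(
    source="internal support tickets, 2022-2024",
    rights_basis="customer agreement (verify scope with counsel)",
    contains_pii=True,
    transformations=["PII redaction", "deduplication", "language filtering"],
    recorded_on=date(2024, 10, 1),
)
print(asdict(entry))
```

Even this much gives an auditor a chain from a trained model back to the data it saw and the rights under which that data was used.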

How to Integrate AI Act Guidelines Into Existing GRC Strategies

While AI presents new challenges, many aspects of the AI risk assessment process build on existing GRC practices. Organizations can start by applying traditional due-diligence processes for systems that handle confidential, sensitive, or personal data. Then, focus on these AI-specific considerations:

  1. AI capabilities assessment: Evaluate the AI system's actual capabilities, limitations, and potential impacts.

  2. Training and management: Assess how the AI system's capabilities are trained, updated, and managed over time.

  3. Explainability and interpretability: Ensure that the AI's decision-making process can be explained and interpreted, especially for High-Risk systems.

  4. Ongoing monitoring: Implement continuous monitoring to detect issues such as model drift or unexpected behaviors; a drift-detection sketch follows this list.

  5. Incident response: Develop AI-specific incident response plans to address potential failures or unintended consequences.
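
For item 4, one common drift signal is the Population Stability Index (PSI) computed over a model's score distribution. The sketch below computes PSI from a baseline sample and a current sample; the 0.2 alert threshold is a widely used rule of thumb, not anything the Act requires.

```python
import math

def psi(baseline: list[float], current: list[float], bins: int = 10) -> float:
    """Population Stability Index between baseline and current model scores.
    Larger values indicate a bigger shift in the score distribution."""
    lo, hi = min(baseline), max(baseline)
    width = (hi - lo) / bins or 1.0

    def proportions(sample: list[float]) -> list[float]:
        counts = [0] * bins
        for x in sample:
            idx = min(int((x - lo) / width), bins - 1)
            counts[max(idx, 0)] += 1  # clamp values outside baseline range
        # Small floor avoids log(0) for empty bins.
        return [max(c / len(sample), 1e-6) for c in counts]

    p, q = proportions(baseline), proportions(current)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

# Rule-of-thumb thresholds (assumption, tune per model):
# < 0.1 stable, 0.1-0.2 watch, > 0.2 investigate for model drift.
baseline_scores = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
current_scores = [0.5, 0.6, 0.6, 0.7, 0.8, 0.8, 0.9, 0.9]
print(f"PSI = {psi(baseline_scores, current_scores):.3f}")
```

A scheduled job computing PSI (or a similar statistic) against a frozen baseline, with alerts feeding the incident response plan in item 5, turns "ongoing monitoring" from a policy statement into an operational control.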

By adapting existing GRC strategies and incorporating insights from frameworks like the EU AI Act, organizations can navigate the complexities of AI risk management and compliance effectively. This approach not only helps mitigate potential risks but also positions companies to leverage AI technologies responsibly and ethically, thus building trust with customers, employees, and regulators alike.

As AI continues to evolve, so, too, will the regulatory landscape. The EU AI Act serves as a pioneering framework, but organizations should stay informed about emerging regulations and best practices in AI governance. By proactively addressing AI risks and embracing responsible AI principles, companies can harness the power of AI while maintaining ethical standards and regulatory compliance.

About the Author

Kyle McLaughlin

General Counsel, Secureframe

Kyle McLaughlin serves as General Counsel at Secureframe, an all-in-one platform for continuous security compliance. With a proven track record counseling prominent tech companies including Cisco and Duo Security, he specializes in IP issues, security, compliance, and privacy (holding CIPP/E certification). McLaughlin earned his JD from Wayne State University Law School and completed his bachelor's degree in Political Science and English at the University of Michigan.
