Building an Effective Strategy to Manage AI Risks

As AI technologies continue to advance at a rapid pace, privacy, security, and governance teams can't expect to achieve strong AI governance while working in isolation.

AI technology is proliferating at a rapid pace and becoming an essential component of many businesses' operations. While organizations are realizing genuine benefits from these systems, the rise of AI also creates new challenges for companies: data privacy risks, reputational exposure, and new attack vectors.

AI systems rely on massive amounts of data, often including sensitive personal information. Improper handling of this data can lead to cyber vulnerabilities, privacy violations, and legal and regulatory issues. AI tools can also cause reputational damage, whether through deliberate tampering (as when Microsoft's AI chatbot Tay was exploited by users who fed it offensive and racist content) or through flawed training data (as when Amazon's AI recruiting tool exhibited bias against female candidates after being trained on historical hiring data that favored men). These incidents illustrate how important it is for organizations to implement safeguards to ensure AI systems aren't learning from, and then generating outputs with, biased or malicious data.

Developing these safeguards requires a collaborative approach from privacy, security, and governance teams. While each team has its own area of expertise, approaching AI governance from within silos won't work. Here are some ways that privacy, security, and governance teams can each flex their strengths to collaborate on strong AI governance:

Security Team

  • Infrastructure hardening: Strengthen your organization's AI/ML infrastructure so that it can handle extensive training data securely. Enforce strict access controls over who can reach training data, require multifactor authentication for any access to the infrastructure, and keep systems patched. Test the strength of your AI/ML infrastructure with red teaming or penetration testing. (A minimal access-control sketch appears after this list.)

  • Alerting and monitoring: Develop robust monitoring capabilities to detect and prevent theft of your proprietary AI models, such as alerting when a user's model-artifact downloads exceed an established baseline (see the monitoring sketch after this list).

  • Data leakage prevention: Adequate governance over the data being fed into third-party AI systems is crucial; after all, this is the data those systems may be trained on. Implement data leakage prevention (DLP) solutions and strategies to monitor the sensitive data flowing into both your own and third-party AI systems, and vet every AI solution before uploading any sensitive information. (A simple outbound DLP filter is sketched after this list.)

  • Employee training: Educate employees on AI-related risks, such as deepfakes and vishing (voice phishing) attacks. Regularly test employees on their ability to identify and thwart potential attacks.
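
To make the access-control guidance concrete, here is a minimal sketch in Python of gating training-data access behind an allowlisted role and a verified second factor. The role names, the TRAINING_DATA_READERS allowlist, and the AccessRequest shape are illustrative assumptions, not a prescribed implementation.

```python
# Minimal sketch: role- and MFA-gated access to training data.
# Role names and the allowlist below are illustrative assumptions.
from dataclasses import dataclass

# Hypothetical allowlist: only these roles may read training data.
TRAINING_DATA_READERS = {"ml-engineer", "data-steward"}

@dataclass
class AccessRequest:
    user: str
    role: str
    mfa_verified: bool  # set by your identity provider after a second factor
    dataset: str

def authorize(request: AccessRequest) -> bool:
    """Allow access only for allowlisted roles with a verified second factor."""
    if request.role not in TRAINING_DATA_READERS:
        return False
    if not request.mfa_verified:
        return False
    return True

if __name__ == "__main__":
    req = AccessRequest(user="alice", role="ml-engineer",
                        mfa_verified=True, dataset="training-corpus-v2")
    print("access granted" if authorize(req) else "access denied")
```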
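
For the alerting and monitoring bullet, one simple pattern is to watch model-artifact access logs and flag users whose download volume exceeds a baseline. This is only a sketch under an assumed log shape and threshold; a real deployment would feed findings into a SIEM rather than print them.

```python
# Sketch: flag anomalous model-artifact downloads from an access log.
# The log shape and threshold are assumptions for illustration.
from collections import Counter

DOWNLOADS_PER_HOUR_THRESHOLD = 3  # assumed baseline; tune per environment

# Assumed log shape: (user, model_artifact), all within the last hour.
access_log = [
    ("bob", "fraud-model-v3.pt"),
    ("bob", "fraud-model-v2.pt"),
    ("bob", "churn-model-v1.pt"),
    ("bob", "ranker-v7.pt"),
    ("carol", "ranker-v7.pt"),
]

def flag_bulk_downloads(log):
    """Return users whose hourly download count exceeds the threshold."""
    counts = Counter(user for user, _ in log)
    return [u for u, n in counts.items() if n > DOWNLOADS_PER_HOUR_THRESHOLD]

if __name__ == "__main__":
    for user in flag_bulk_downloads(access_log):
        print(f"ALERT: {user} exceeded the model download baseline")
```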
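
And for data leakage prevention, a minimal outbound filter can scan prompts for obviously sensitive patterns before they reach a third-party AI service. The two regexes below (US SSN-style numbers and email addresses) are illustrative only; production DLP tooling is far more thorough.

```python
# Sketch: block prompts containing obvious sensitive patterns before
# they reach a third-party AI service. Patterns are illustrative only.
import re

SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # US SSN-style
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email address
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def send_to_ai_service(prompt: str) -> None:
    findings = check_prompt(prompt)
    if findings:
        raise ValueError(f"Prompt blocked by DLP filter: {findings}")
    # ... call the vetted third-party API here ...
    print("prompt forwarded")

if __name__ == "__main__":
    send_to_ai_service("Summarize our Q3 roadmap")         # forwarded
    try:
        send_to_ai_service("Customer SSN is 123-45-6789")  # blocked
    except ValueError as err:
        print(err)
```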

Governance Team

  • Evaluate, evaluate, evaluate: Continuously evaluate the ethical implications of using AI technologies within your organization, and cross-check your AI and data practices against new regulations to ensure you remain compliant.

  • And educate, educate, educate: With the rapidly changing landscape, it's crucial to continuously educate your employees on AI risks and organizational policies for AI usage, and to enable them to put those policies into practice.

Privacy Team

  • AI risk assessments: Own the process of conducting risk assessments for AI projects to identify and mitigate privacy and compliance risks. 

  • Privacy by design: Implement privacy-by-design principles in AI development to build privacy-focused AI systems. This includes ensuring that consumer consent-gathering strategies are well understood, putting privacy safeguards in place to protect personal data, and anonymizing or pseudonymizing data before training models (see the anonymization sketch after this list).

  • Maintaining transparency and user preferences: Build workflows to ensure transparency and to manage user consent, preferences, and data rights effectively, such as the consent gate sketched after this list.
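
As one illustration of anonymization before training, the sketch below replaces direct identifiers with salted pseudonyms and drops risky free-text fields. The column names and salt handling are assumptions; a real program would also address quasi-identifiers with techniques such as k-anonymity or differential privacy.

```python
# Sketch: pseudonymize direct identifiers before model training.
# Column names and salt handling are illustrative assumptions.
import hashlib

SALT = "rotate-me-and-store-me-in-a-secrets-manager"  # assumed keyed salt
DIRECT_IDENTIFIERS = {"name", "email"}  # hashed into stable pseudonyms
DROP_FIELDS = {"support_notes"}         # free text is too risky: drop it

def pseudonymize(value: str) -> str:
    """Stable, salted pseudonym so records can still be joined."""
    return hashlib.sha256((SALT + value).encode()).hexdigest()[:16]

def anonymize_record(record: dict) -> dict:
    out = {}
    for field, value in record.items():
        if field in DROP_FIELDS:
            continue
        out[field] = pseudonymize(value) if field in DIRECT_IDENTIFIERS else value
    return out

if __name__ == "__main__":
    raw = {"name": "Jane Doe", "email": "jane@example.com",
           "tenure_months": 18, "support_notes": "called about billing"}
    print(anonymize_record(raw))
```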
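
Similarly, a consent gate can sit in front of any training pipeline so that only records from users with an affirmative, current consent flag for that purpose get through. The consent-store shape here is a hypothetical stand-in for whatever preference-management system you actually run.

```python
# Sketch: filter training records through a consent check.
# The consent-store shape is a hypothetical stand-in.
consent_store = {
    "user-1": {"model_training": True},
    "user-2": {"model_training": False},
    "user-3": {},  # no recorded preference: treat as no consent
}

def has_consent(user_id: str, purpose: str) -> bool:
    """Default to no consent when no affirmative record exists."""
    return consent_store.get(user_id, {}).get(purpose, False)

def filter_for_training(records):
    return [r for r in records if has_consent(r["user_id"], "model_training")]

if __name__ == "__main__":
    records = [{"user_id": "user-1"}, {"user_id": "user-2"}, {"user_id": "user-3"}]
    print(filter_for_training(records))  # only user-1 survives
```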

Working Together

Collaboration between these teams is crucial for effective AI governance. Here's where they'll intertwine most:

  • AI task force: Develop a cross-functional AI task force within the organization with representation from key stakeholders across privacy, security, and governance. This task force will be responsible for developing AI guidelines and policies for the organization, reviewing the procurement of new AI technologies so that AI risks are carefully evaluated and mitigated, and ensuring that AI safety is prioritized when new systems are developed. Schedule regular meetings (monthly, for example) for the AI task force to discuss industry trends, emerging threats, and new use cases for AI technologies.

  • Dedicated communication channels: Establish channels where employees can easily ask questions and receive timely information about approved AI tools and their use. Utilize cross-functional forums like company all-hands or department-specific all-hands meetings to communicate organization-wide updates pertaining to AI use and considerations.

  • Leveraging technology and automation: The rise of AI systems has led to innovative technologies designed to identify and catalog AI systems within your organization and supply chain. Evaluate and implement these tools to enhance your AI governance framework. Integrating such technologies, along with solutions like SecurityPal for navigating security reviews, can help you maintain oversight, ensure compliance, and manage the growing complexity of AI applications (a toy version of this kind of discovery tooling is sketched below).
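
As a toy version of such discovery tooling, the sketch below scans Python dependency manifests for well-known AI/ML packages to seed an internal inventory. The package list and file layout are assumptions; commercial tools go much further, covering network traffic, SaaS integrations, and vendor questionnaires.

```python
# Sketch: seed an AI-system inventory by scanning requirements files
# for known AI/ML packages. Package list and paths are assumptions.
from pathlib import Path

AI_PACKAGES = {"openai", "anthropic", "transformers", "torch",
               "tensorflow", "langchain", "scikit-learn"}

def scan_manifest(path: Path) -> set[str]:
    """Return AI/ML packages referenced in one requirements file."""
    found = set()
    for line in path.read_text().splitlines():
        name = line.split("==")[0].split(">=")[0].strip().lower()
        if name in AI_PACKAGES:
            found.add(name)
    return found

def build_inventory(repo_root: str) -> dict[str, set[str]]:
    """Map each requirements file to the AI/ML packages it pulls in."""
    return {str(p): hits
            for p in Path(repo_root).rglob("requirements*.txt")
            if (hits := scan_manifest(p))}

if __name__ == "__main__":
    for manifest, packages in build_inventory(".").items():
        print(f"{manifest}: {sorted(packages)}")
```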

As AI technologies continue to advance at a rapid pace, privacy, security, and governance teams can't expect to achieve strong AI governance while working in isolation. Creating an AI task force and fostering open communication around timely challenges and risks will provide immediate benefits. Working together, rather than in silos, these teams can ensure your AI systems are developed and deployed responsibly, ethically, and securely, ultimately safeguarding your organization's reputation and integrity.

About the Authors

Sanket Kavishwar

Director of Product Management, Relyance AI

Sanket Kavishwar is director of product management at Relyance AI. With more than a decade of experience in SaaS, Sanket has a diverse background encompassing roles as a software engineer developing security solutions, a consultant implementing ERP solutions for enterprise clients, a customer success leader managing enterprise accounts, and a go-to-market leader scaling advanced privacy programs for enterprises. Currently, as a product leader, he focuses on delivering products at the intersection of privacy and security.

Kenneth Moras

Security & GRC Lead, Plaid

Kenneth Moras, security and GRC lead for Plaid, is a cybersecurity leader with extensive experience in building strategic risk management programs at Plaid and scaling cybersecurity programs at notable organizations such as Meta and Adobe. His expertise also extends to cybersecurity consulting for Fortune 500 companies during his tenure at KPMG.
