3 Tips for Becoming the Champion of Your Organization's AI Committee

CISOs are now considered part of the organizational executive leadership and have both the responsibility and the opportunity to drive not just security but business success.

Matan Getz, CEO & Co-Founder, Aim Security

May 15, 2024


COMMENTARY

We are now deep in the age of artificial intelligence (AI). Much more than a passing trend, this transformative technology is set to fundamentally alter the way we do business. As organizations work out how AI can benefit their specific offerings, and as they try to ascertain the risks inherent in AI adoption, many forward-thinking companies have already set up a dedicated AI committee of stakeholders to ensure they are well-prepared for this revolution. Chief information security officers (CISOs) are at the heart of this committee and are ultimately responsible for implementing its recommendations. Understanding its priorities, tasks, and potential challenges is therefore pivotal for CISOs who want to be business enablers rather than obstacles. 

Introducing: The AI Committee

An AI committee, sometimes referred to as the AI governance committee, is a group within an enterprise responsible for overseeing the safety, legal, and security implications of that organization's AI use. Its main purpose is to ensure that AI technologies are developed, deployed, and used to deliver business benefits like streamlined productivity, while the organization weighs the risks inherent in that use and takes active measures to safeguard its assets, customers, brand, and reputation. 

Who Sits on an AI Committee? 

The AI committee ideally represents a diverse group of internal and external organizational stakeholders, including: 

  • Executive leadership: Representatives from senior management or executive leadership, such as the CEO, CIO, or CTO, who provide strategic direction and support for AI initiatives.

  • General counsel: Legal counsel or compliance officers who advise on regulatory requirements, legal risks, and contractual obligations related to AI technologies.

  • Security leadership: Specialists in data privacy, cybersecurity, and information security who ensure that AI systems adhere to privacy regulations and security best practices. (This article will mostly focus on the CISO persona.) 

  • Data scientists and AI engineers: Professionals with expertise in data science, machine learning, and AI technologies who are responsible for developing and implementing AI systems.

  • External parties: External consultants, academics, or industry experts who provide independent perspectives and expertise on AI governance best practices. Other external parties can include stakeholder representatives, such as customers, partners, and advocacy groups who can provide input from the "outside" perspective. 

How the CISO Can Become the AI Committee Champion

Here are three fundamentals CISOs can use as a guide to becoming the pivotal asset on the AI committee and ensuring its success. 

1. Begin with a comprehensive assessment. 

The age-old saying in security applies to AI as well: You can't protect what you don't know. Before you start building a strategy to secure AI use across your organization, first understand who has adopted AI, what they are using, and how. An AI gap analysis will let you identify all of the shadow AI apps and models used across the organization without your prior knowledge or approval, including public GenAI apps, third-party large language models (LLMs) and software-as-a-service (SaaS) tools, and internally developed models. This inventory will also give you insight into usage patterns, so you can see which kinds of AI use are organically popular with employees and focus your future security efforts where they are needed most. Note that these insights are invaluable for business stakeholders as well, so use them wisely. As the CISO, remember that you hold the most valuable information on the committee: GenAI usage data from across the organization, which is the raw material for measuring ROI. Armed with that data, take the lead in setting up smart, secure, and realistic GenAI policies across the organization. 
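
To make the discovery step more concrete, below is a minimal sketch of what a first-pass shadow AI inventory could look like in Python, assuming you can export web-proxy logs to CSV. The seed domain list, the column names (department, dest_host), and the file name are illustrative assumptions, not a recommendation of specific tooling; dedicated AI discovery products cover far more ground.

    # Minimal shadow AI discovery sketch (illustrative): tally GenAI traffic
    # per department from an exported web-proxy log in CSV form.
    # Domain list, CSV columns, and file name are assumptions for this example.
    import csv
    from collections import Counter

    GENAI_DOMAINS = {  # seed list; extend with your own intelligence
        "chat.openai.com", "api.openai.com", "gemini.google.com",
        "claude.ai", "api.anthropic.com", "copilot.microsoft.com",
    }

    def inventory(proxy_log_csv: str) -> Counter:
        """Count GenAI requests per (department, domain) pair."""
        usage = Counter()
        with open(proxy_log_csv, newline="") as f:
            for row in csv.DictReader(f):  # expects 'department' and 'dest_host' columns
                host = row.get("dest_host", "").lower()
                if any(host == d or host.endswith("." + d) for d in GENAI_DOMAINS):
                    usage[(row.get("department", "unknown"), host)] += 1
        return usage

    if __name__ == "__main__":
        for (dept, domain), hits in inventory("proxy_export.csv").most_common(20):
            print(f"{dept:<20} {domain:<30} {hits}")

Even a rough tally like this shows which departments are already leaning on which tools, which is exactly the usage-pattern insight described above.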

2. Implement a phased adoption approach.

CISOs always struggle to balance productivity and security. So how can CISOs who want to enable positive business outcomes keep a foot on the gas and the brake at the same time? A phased adoption approach lets security accompany adoption and assess its security implications in real time. With gradual adoption, CISOs can roll out security controls in parallel and measure their success. For example, start with an enterprise chat option without connecting your organization's data, or trial LLMs that don't train on your data. Assuming a successful phased rollout, CISOs can keep one foot on the gas and their hands on the steering wheel, rather than reaching for the hand brake. 
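
One illustrative way to express "security escorting adoption" in practice is as a simple policy gate, where each rollout phase unlocks GenAI capabilities only after that phase's parallel controls are in place. The phase names, capabilities, and prerequisite controls below are assumptions invented for this sketch rather than a standard.

    # Illustrative phased-adoption gate: capabilities unlock per rollout phase.
    # Phase names, capabilities, and controls are assumptions for this example.
    PHASES = {
        1: {"capabilities": {"enterprise_chat_no_org_data"},
            "controls": {"usage_logging", "acceptable_use_policy"}},
        2: {"capabilities": {"enterprise_chat_no_org_data", "no_training_llm_trial"},
            "controls": {"usage_logging", "acceptable_use_policy", "dlp_monitoring"}},
        3: {"capabilities": {"enterprise_chat_no_org_data", "no_training_llm_trial",
                             "rag_over_internal_docs"},
            "controls": {"usage_logging", "acceptable_use_policy",
                         "dlp_monitoring", "prompt_guardrails"}},
    }

    def is_allowed(capability: str, phase_num: int, controls_in_place: set) -> bool:
        """Allow a capability only if the phase includes it and its controls are live."""
        phase = PHASES[phase_num]
        return capability in phase["capabilities"] and phase["controls"] <= controls_in_place

    # Example: connecting internal docs stays blocked until phase 3 controls exist.
    print(is_allowed("rag_over_internal_docs", 2,
                     {"usage_logging", "acceptable_use_policy", "dlp_monitoring"}))  # False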

3. Be the YES! guy — but with guardrails. 

Guardrails are a common security practice that lets security teams enforce controls for secure development without slowing things down. How can CISOs adapt these same principles to the new GenAI frontier? The most common use case we see today is contextual or prompt guardrails. LLMs can generate text that is harmful or illegal, or that violates internal company policies (or all three). To protect against these threats, CISOs should set up content-based guardrails that define, and then alert on, prompts that are risky or malicious or that violate compliance standards. Cutting-edge, AI-focused security solutions may also allow customers to define their own parameters for safe prompts, and to alert on and block prompts that fall outside these guardrails. 
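
As a deliberately simplified illustration of a content-based guardrail, the sketch below screens prompts against a few example patterns and returns an allow, alert, or block decision. The patterns and policy labels are assumptions made up for this example; production guardrails rely on classifiers and policy engines rather than keyword lists, and would be tuned to your own compliance standards.

    # Simplified content-based prompt guardrail (illustrative only).
    # Patterns and policy labels are assumptions; real deployments use ML classifiers.
    import re
    from dataclasses import dataclass

    RULES = [
        ("block", "possible_secret_exposure",
         re.compile(r"(api[_-]?key|password|BEGIN RSA PRIVATE KEY)", re.I)),
        ("block", "customer_pii", re.compile(r"\b\d{3}-\d{2}-\d{4}\b")),  # SSN-like pattern
        ("alert", "source_code_sharing", re.compile(r"\bdef |\bclass |SELECT .+ FROM", re.I)),
    ]

    @dataclass
    class Verdict:
        action: str  # "allow", "alert", or "block"
        reason: str | None

    def check_prompt(prompt: str) -> Verdict:
        """Return the strictest verdict triggered by the prompt."""
        verdict = Verdict("allow", None)
        for action, reason, pattern in RULES:
            if pattern.search(prompt):
                if action == "block":
                    return Verdict("block", reason)
                verdict = Verdict("alert", reason)
        return verdict

    print(check_prompt("Summarize this contract for me"))          # allow
    print(check_prompt("Debug this: our prod api_key is sk-123"))  # block: possible_secret_exposure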

Remember that while the legal department is usually responsible for crafting the organization's safety and security policies, at the end of the day, responsibility for enforcement falls on the CISO's shoulders. Make sure legal is creating policy that can actually be monitored, or expect failure. Apply this principle across the board: Do not approve policies that you don't have a realistic way to enforce and measure.

The days of putting up fences to keep attackers out are long gone. CISOs and security practitioners are now considered part of the organizational executive leadership and have both the responsibility and the opportunity to drive business success, not just security. Leveraging the AI committee to lead, not follow, is another way CISOs can change the security reality for the better and ensure a positive impact on the business. Armed with data, CISOs have a unique opportunity to guide employees, including IT, developers, and executives, toward the best strategy for gaining the benefits of GenAI securely. 

About the Author

Matan Getz

CEO & Co-Founder, Aim Security

Matan Getz is the CEO and co-founder of Aim Security. Throughout his extensive service in Israel's military intelligence Unit 8200, Matan established a proven track record of leading and managing cutting-edge technology departments. As a graduate of the elite leadership program Talpiot, and in his role as deputy CISO of the Israel Defense Forces, Matan built the military's largest data science defensive AI project. As CIO at Unit 8200, Matan guided an R&D division of 150 engineers, where he led the secure and safe adoption of AI and big data, enabling their use by the unit. His IDF service earned him the prestigious Israel Defense Prize.
