The Security Risk of Rampant Shadow AI
While employees want to take advantage of the increased efficiency of GenAI and LLMs, CISOs and IT teams must be diligent and stay on top of the most up-to-date security regulations.
COMMENTARY
The rapid rise of artificial intelligence (AI) holds immense promise, but that promise has cast a long shadow of its own: shadow AI.
Shadow AI refers to the use of AI technologies, including AI models and generative AI (GenAI) tools, outside of a company's IT-sanctioned governance. As more people use tools like ChatGPT to work more efficiently, many organizations are banning publicly available GenAI for internal use. Among the organizations looking to prevent unnecessary security risks are those in the financial services and healthcare sectors, as well as technology companies like Apple, Amazon, and Samsung.
Unfortunately, enforcing such a policy is an uphill battle. According to a recent report, non-corporate accounts make up 74% of ChatGPT use and 74% of Gemini and Bard use at work. Employees can easily skirt corporate policies to continue their AI use for work, potentially opening up security risks.
The greatest among these is the lack of protection for sensitive data. As of March 2024, 27.4% of the data entered into AI tools would be considered sensitive, up from 10.7% a year earlier. Once that information has been submitted to a public GenAI tool, protecting it is virtually impossible.
The uncontrolled risk of shadow AI underscores the need for stringent privacy and security practices wherever employees use AI.
It all boils down to data. Data is the fuel of AI, but it is also an organization's most valuable asset. Stolen, leaked, or corrupted data causes real, tangible harm to a business: regulatory fines for leaking personally identifiable information (PII), costs associated with leaked proprietary information such as source code, and the fallout from severe security incidents like hacks and malware infections.
To mitigate risk, organizations must secure their data while it's at rest, in transit, and in use. The counter to risky shadow AI use is fine-grained control over the information employees feed into large language models (LLMs).
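What might that control look like in practice? The sketch below shows one minimal form of it: an outbound prompt gate. The detection patterns and function names are hypothetical stand-ins; a production deployment would rely on a dedicated data loss prevention (DLP) engine or a PII-detection library rather than hand-rolled regular expressions.

```python
import re

# Hypothetical detection patterns; real DLP tooling covers far more cases.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{20,}\b"),
}

def find_sensitive_data(prompt: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def gate_prompt(prompt: str) -> str:
    """Refuse to forward a prompt to an external LLM if it contains sensitive data."""
    hits = find_sensitive_data(prompt)
    if hits:
        raise ValueError(f"Prompt blocked: matched {', '.join(hits)}")
    return prompt
```

A gate like this would typically sit in a proxy or browser plug-in between employees and any external GenAI endpoint, so that risky prompts are stopped before they ever leave the network.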
How Can CISOs Secure GenAI and Company Data?
Securing sensitive company data is a challenging balancing act for chief information security officers (CISOs), who must weigh their organizations' desire to capture the perceived value of GenAI against the need to protect the very asset that makes those benefits possible: their data.
So, the question becomes: How do you do this? How do you get the balance right? How do you extract positive business outcomes while protecting the enterprise's most valuable asset?
At a high level, CISOs should look at protecting data through its entire life cycle. This includes:
Protecting the data before it is even ingested into the GenAI model
Securing the data assets while they are being used in the GenAI model
Ensuring that the data output is completely secured, as this new data will drive the business outcomes and create true value
If any stage of that life cycle is left unsecured, the gap becomes a business-critical exposure. The sketch below shows how the three stages can wrap around a single GenAI call.
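As an illustration only, the following chains the three stages together. The redact, call_model, and scan_output functions here are simplified stand-ins for whatever tokenization service, sanctioned model API, and output-scanning tools an organization actually uses.

```python
import re

EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")  # illustrative pattern only

def redact(text: str) -> str:
    """Stage 1: mask sensitive values before the data is ever ingested."""
    return EMAIL.sub("[REDACTED_EMAIL]", text)

def call_model(prompt: str) -> str:
    """Stage 2: placeholder for a call to the sanctioned, access-controlled model."""
    return f"Model response to: {prompt}"

def scan_output(text: str) -> str:
    """Stage 3: verify generated output before it drives business decisions."""
    if EMAIL.search(text):
        raise ValueError("Output blocked: possible data leak")
    return text

def secure_genai_call(prompt: str) -> str:
    """Chain all three life-cycle stages around a single GenAI request."""
    return scan_output(call_model(redact(prompt)))
```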
More specifically, protecting sensitive data from being leaked requires a multifaceted approach. It starts with limiting shadow AI as much as possible, but it is just as important to preserve data security and privacy through some basic best practices:
Encryption: Encrypt data across its entire life cycle, and manage and store encryption keys securely and separately from the data itself (a minimal sketch follows this list).
Obfuscation: Use data tokenization to anonymize any sensitive or PII data before it can be fed to an LLM. This prevents data that enters the AI pipeline from being corrupted or leaked (see the second sketch below).
Access: Apply granular, role-based access controls to data so that only authorized users can see and use it in plain text (also illustrated below).
Governance: Commit to ethical business practices, embed data privacy across all operations, and remain current on data privacy regulations.
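On the encryption point, a minimal sketch using the widely available Python cryptography package might look like the following. Reading the key from an environment variable is purely a stand-in for a proper key management service (KMS) or hardware security module, which is where production keys belong; the point is simply that the key lives apart from the data it protects.

```python
import os
from cryptography.fernet import Fernet  # pip install cryptography

# Stand-in for a KMS/HSM: the key is stored apart from the data it protects.
key = os.environ.get("DATA_ENCRYPTION_KEY") or Fernet.generate_key()
cipher = Fernet(key)

record = b"customer_id=4821, email=jane.doe@example.com"
encrypted = cipher.encrypt(record)    # what actually sits at rest on disk
restored = cipher.decrypt(encrypted)  # recoverable only by key holders
assert restored == record
```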
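The obfuscation and access practices reinforce each other: tokenization keeps plain-text PII out of the LLM, and role-based controls decide who may ever reverse it. The in-memory dictionary below is for illustration only; a real deployment would use a hardened token vault and a central authorization service.

```python
import secrets

TOKEN_VAULT: dict[str, str] = {}           # illustration only; use a hardened vault
AUTHORIZED_ROLES = {"compliance_officer"}  # roles permitted to see plain text

def tokenize(value: str) -> str:
    """Swap a sensitive value for a random token before it reaches an LLM."""
    token = f"tok_{secrets.token_hex(8)}"
    TOKEN_VAULT[token] = value
    return token

def detokenize(token: str, role: str) -> str:
    """Return the plain-text value only to authorized roles."""
    if role not in AUTHORIZED_ROLES:
        raise PermissionError(f"Role '{role}' may not detokenize data")
    return TOKEN_VAULT[token]

# The LLM only ever sees the token, never the underlying PII.
prompt = f"Summarize the account history for {tokenize('jane.doe@example.com')}"
```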
As is often the case with tech advancements, GenAI's ease and convenience come with drawbacks. While employees want to take advantage of the increased efficiency of GenAI and LLMs for work, CISOs and IT teams must stay diligent, keep up with the latest security regulations, and prevent sensitive data from entering AI systems. Along with making sure workers understand the importance of data protection, it is key to mitigate potential risks by taking every measure to encrypt and secure data from the start.