Enterprise Generative AI Enters Its Citizen Development Era
Business users are building Copilots and GPTs with enterprise data. What can security teams do about it?
There are times when we get a clear before-and-after moment, one that demands a reevaluation of our most basic assumptions. This month, OpenAI announced custom GPTs, a no-code tool that lets anyone create their own version of its Generative Pre-trained Transformer (GPT) chatbot, built on their own data and wired up to their own plug-ins. What used to be a tight mandate for a team inside a large R&D group or a chatbot startup can now be accomplished by my grandfather in five minutes, using a couple of wiki links as a knowledge base. Security leaders need to recognize that artificial intelligence (AI) tools are not something coming in a nebulous future; they are here.
More importantly, these GPTs can act on the user's behalf. OpenAI's tight integration with Zapier means thousands of connectors are at your disposal, letting the AI query your CRM, update your ERP, or monitor your servers with a few clicks. How does the AI authenticate to all these services, you might ask? Great question, but more on that later.
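To make that delegation concrete, here is a minimal, hypothetical sketch in Python. None of these names are OpenAI's or Zapier's actual APIs; it simply illustrates what a connector-side endpoint might see when a GPT acts for a user: the request arrives bearing the user's own OAuth token, so the downstream service sees the user's identity, not the AI's.

```python
# Hypothetical sketch, not OpenAI's or Zapier's real API: a connector-side
# endpoint receiving a request that a GPT makes on a user's behalf. The
# request carries the user's own OAuth token, so the downstream service
# sees the user's identity, not the AI's.
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

class ActionEndpoint(BaseHTTPRequestHandler):
    def do_POST(self):
        # The token the user granted during bot setup is replayed on
        # every call the AI makes.
        token = self.headers.get("Authorization", "")  # "Bearer <user token>"
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length) or "{}")

        # Nothing in the request says "an AI generated this" -- the
        # endpoint only sees the impersonated user.
        print(f"acting as {token[:20]}... request={body}")

        self.send_response(200)
        self.end_headers()
        self.wfile.write(b'{"status": "ok"}')

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), ActionEndpoint).serve_forever()
```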
Another thought you might have is: Well, this is amazing and all, but we will never allow it in our highly regulated, security-focused enterprise. You might even have blocked ChatGPT at the network level long ago, and you now constantly monitor for more bots to add to that deny list, which is annoying, but manageable.
Enter Microsoft. Last week at its Ignite conference, Microsoft announced Copilot Studio, its own no-code GPT creator. It has everything the OpenAI tool has: file uploads to use as a knowledge base, a chat interface for configuration, and click-to-add integrations called plug-ins. Copilot Studio lets users integrate their Copilots with Microsoft 365, Azure services, and hundreds of other enterprise systems. This integration is done via user impersonation, meaning the Copilot acts on behalf of the user.
Here's the thing about these Microsoft-generated user impersonation bots: You can't block them. You have no way to distinguish between an AI-generated operation and a user-triggered operation because they look exactly alike in the logs. Copilots are hosted as applications inside your M365 environment, so forget about network-level blocks. Users log into these Copilots with their corporate credentials. The bottom line is that while GPTs live in the consumer world, Copilots live in the enterprise world.
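To see why detection rules struggle here, consider this illustrative Python sketch. It shows two synthetic audit records loosely in the spirit of an M365 unified audit log entry (the field names are invented for illustration, not Microsoft's actual schema): one from the user clicking in the UI, one from a Copilot impersonating that user. Because the records are field-for-field identical, any filter you write matches both or neither.

```python
# Illustrative only: two synthetic audit records with hypothetical field
# names. One update was clicked by the user, the other was made by a
# Copilot impersonating them. A rule keyed on any of these fields cannot
# tell them apart.
human_action = {
    "Operation": "FileModified",
    "UserId": "alice@contoso.com",
    "ClientIP": "10.20.30.40",
    "AppId": "sharepoint-web",
}
copilot_action = {
    "Operation": "FileModified",
    "UserId": "alice@contoso.com",  # impersonation: same identity
    "ClientIP": "10.20.30.40",      # same session context
    "AppId": "sharepoint-web",      # same first-party app
}

assert human_action == copilot_action
print("Indistinguishable in the log:", human_action == copilot_action)
```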
How Did This Happen So Quickly?
Well, it didn't. Microsoft and other major vendors, such as Salesforce, UiPath, and ServiceNow, have spent years building low-code/no-code platforms that lower the bar for building enterprise applications. Along the way, these companies have built out hundreds of integrations, visual builders, automated production deployments, and credential-sharing-as-a-service.
Chatbots are the killer app for low-code/no-code platforms. Who needs to code when a platform gives you, out of the box, everything you need to create, share, monitor, upgrade, and embed your bot inside the enterprise within minutes, directly on top of business data?
A crucial point here is just how easy it now is to build no-code apps. In recent years, professional developers and business users alike have used platforms such as Microsoft's Power Platform to build millions of new business applications, including some that handle sensitive data and facilitate business-critical processes. While some companies have started to centralize the GenAI apps created by their engineering teams, that won't be enough; security teams have to look at what business users are building as well. Indeed, the sheer number of business users, combined with how easy it is to create bots, suggests that security teams should focus more on what business users are building, not less.
Where Do We Even Begin?
Luckily, a growing number of organizations have already integrated citizen development (business users building apps) into their application security programs, and some of their insights have been publicly shared. Industry standards that categorize, explain, and suggest remediation for the security risks of low-code/no-code apps, such as the OWASP Top 10 Low-Code/No-Code Security Risks, have emerged.
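One practical first step those programs tend to share is simply building an inventory of what citizen developers have already deployed and triaging it for risk. Here is a minimal Python sketch of that idea, assuming a hypothetical CSV export of app metadata; the file name and column names are placeholders for whatever inventory your platform's admin tooling actually provides. It flags bots that use external connectors or are shared tenant-wide.

```python
# A minimal triage sketch for bringing citizen-built bots under review.
# Assumes a hypothetical CSV export with columns: name, owner,
# connectors (";"-separated), shared_with. Substitute your platform's
# real inventory export.
import csv

RISKY_CONNECTORS = {"http", "sql", "sharepoint", "zapier"}

def triage(path: str) -> list[dict]:
    flagged = []
    with open(path, newline="") as f:
        for app in csv.DictReader(f):
            connectors = set(app["connectors"].lower().split(";"))
            # Flag apps reaching external systems or shared with everyone.
            if connectors & RISKY_CONNECTORS or app["shared_with"] == "everyone":
                flagged.append(app)
    return flagged

if __name__ == "__main__":
    for app in triage("citizen_apps.csv"):
        print(f"review: {app['name']} (owner: {app['owner']})")
```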
Not using code doesn't mean no vulnerabilities, especially logical ones. It typically does mean, however, the lack of a software development life cycle (SDLC), visibility, and controls. Whether they are creating GPTs or Copilots, our users are doing it today, and in large quantities. For security leaders, it's either get on board now and bring these new developers under the security umbrella, or miss the train and hope for the best.