Does Desktop AI Come With a Side of Risk?

Artificial intelligence capabilities are coming to a desktop near you — with Microsoft 365 Copilot, Google Gemini with Project Jarvis, and Apple Intelligence all arriving (or having arrived). But what are the risks?

6 Min Read
A hand at a keyboard overlaid with an AI graphic
Source: 'Who is Danny' via Shutterstock

Artificial intelligence has come to the desktop.

Microsoft 365 Copilot, which debuted last year, is now widely available. Apple Intelligence just reached general beta availability for users of late-model Macs, iPhones, and iPads. And Google Gemini will reportedly soon be able to take actions through the Chrome browser under an in-development agent feature dubbed Project Jarvis.

The integration of large language models (LLMs) that sift through business information and provide automated scripting of actions — so-called "agentic" capabilities — holds massive promise for knowledge workers but also significant concerns for business leaders and chief information security officers (CISOs). Companies already suffer from significant issues with the oversharing of information and a failure to limit access permissions — 40% of firms delayed their rollout of Microsoft 365 Copilot by three months or more because of such security worries, according to a Gartner survey.

The broad range of capabilities offered by desktop AI systems, combined with the lack of rigorous information security at many businesses, poses a significant risk, says Jim Alkove, CEO of Oleria, an identity and access management platform for cloud services.

"It's the combinatorics here that actually should make everyone concerned," he says. "These categorical risks exist in the larger [native language] model-based technology, and when you combine them with the sort of runtime security risks that we've been dealing with — and information access and auditability risks — it ends up having a multiplicative effect on risk."

Related:Citizen Development Moves Too Fast for Its Own Good

Desktop AI will likely take off in 2025. Companies are already looking to rapidly adopt Microsoft 365 Copilot and other desktop AI technologies, but only 16% have pushed past initial pilot projects to roll out the technology to all workers, according to Gartner's "The State of Microsoft 365 Copilot: Survey Results." The overwhelming majority (60%) are still evaluating the technology in a pilot project, while a fifth of businesses haven't even reached that far and are still in the planning stage.

Most workers are looking forward to having a desktop AI system to assist them with daily tasks. Some 90% of respondents believe their users would fight to retain access to their AI assistant, and 89% agree that the technology has improved productivity, according to Gartner.

Bringing Security to the AI Assistant

Unfortunately, the technologies are black boxes in terms of their architecture and protections, and that means they lack trust. With a human personal assistant, companies can do background checks, limit their access to certain technologies, and audit their work — measures that have no analogous control with desktop AI systems at present, says Oleria's Alkove.

Related:Cleo MFT Zero-Day Exploits Are About to Escalate, Analysts Warn

AI assistants — whether they are on the desktop, on a mobile device, or in the cloud — will have far more access to information than they need, he says.

"If you think about how ill-equipped modern technology is to deal with the fact that my assistant should be able to do a certain set of electronic tasks on my behalf, but nothing else," Alkove says. "You can grant your assistant access to email and your calendar, but you cannot restrict your assistant from seeing certain emails and certain calendar events. They can see everything."

This ability to delegate tasks needs to become part of the security fabric of AI assistants, he says.

Cyber-Risk: Social Engineering Both Users & AI

Without such security design and controls, attacks will likely follow.

Earlier this year, a prompt injection attack scenario highlighted the risks to businesses. Security researcher Johann Rehberger found that an indirect prompt injection attack through email, a Word document, or a website could trick Microsoft 365 Copilot into taking on the role of a scammer, extracting personal information, and leaking it to an attacker. Rehberger initially notified Microsoft of the issue in January and provided the company with information throughout the year. It's unknown whether Microsoft has a comprehensive fix for the issue.

Related:Generative AI Security Tools Go Open Source

The ability to access the capabilities of an operating system or device will make desktop AI assistants another target for fraudsters who have been trying to get a user to take actions. Instead, they will now focus on getting an LLM to take actions, says Ben Kilger, CEO of Zenity, an AI agent security firm.

"An LLM gives them the ability to do things on your behalf without any specific consent or control," he says. "So many of these prompt injection attacks are trying to social engineer the system — trying to go around other controls that you have in your network without having to socially engineer a human."

Visibility Into AI's Black Box

Most companies lack visibility into and control of the security of AI technology in general. To adequately vet the technology, companies need to be able to examine what the AI system is doing, how employees are interacting with the technology, and what actions are being delegated to the AI, Kilger says.

"These are all things that the organization needs to control, not the agentic platform," he says. "You need to break it down and to actually look deeper into how those platforms actually being utilized, and how do people build and interact with those platforms."

The first step to evaluating the risk of Microsoft 365 Copilot, Google's purported Project Jarvis, Apple Intelligence, and other technologies is to gain this visibility and have the controls in place to limit an AI assistant's access on a granular level, says Oleria's Alkove.

Rather than a big bucket of data that a desktop AI system can always access, companies need to be able to control access by the eventual recipient of the data, their role, and the sensitivity of the information, he says.

"How do you grant access to portions of your information and portions of the actions that you would normally take as an individual, to that agent, and also only for a period of time?" Alkove asks. "You might only want the agent to take an action once, or you may only want them to do it for 24 hours, and so making sure that you have those kind of controls today is critical."

Microsoft, for its part, acknowledges the data-governance challenges, but argues that they are not new, just made more apparent due to AI’s arrival.

"AI is simply the latest call to action for enterprises to take proactive management of controls their unique, respective policies, industry compliance regulations, and risk tolerance should inform – such as determining which employee identities should have access to different types of files, workspaces, and other resources," a company spokesperson said in a statement.

The company pointed to its Microsoft Purview portal as a way that organizations can continuously manage identities, permission, and other controls. Using the portal, IT admins can help secure data for AI apps and proactively monitor AI use though a single management location, the company said. Google declined to comment about its forthcoming AI agent.

Read more about:

CISO Corner

About the Author

Robert Lemos, Contributing Writer

Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT's Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline Journalism (Online) in 2003 for coverage of the Blaster worm. Crunches numbers on various trends using Python and R. Recent reports include analyses of the shortage in cybersecurity workers and annual vulnerability trends.

Keep up with the latest cybersecurity threats, newly discovered vulnerabilities, data breach information, and emerging trends. Delivered daily or weekly right to your email inbox.

You May Also Like


More Insights