Apple Intelligence Could Introduce Device Security Risks

The company focused heavily on data and system security in announcing its generative AI platform, Apple Intelligence, but experts worry that companies will have little visibility into how their data is actually secured.


Apple's long-awaited announcement of its generative AI (GenAI) capabilities came with an in-depth discussion of the company's security considerations for the platform. But the tech industry's past focus on harvesting user data from nearly every product and service has left many concerned about the data security and privacy implications of Apple's move. Fortunately, there are proactive ways that companies can address the potential risks.

Apple's approach to integrating GenAI, dubbed Apple Intelligence, includes context-sensitive searches, editing emails for tone, and easy creation of graphics, with Apple promising that the features require only local processing on mobile devices to protect user and business data. The company detailed a five-step approach to strengthening privacy and security for the platform, with much of the processing done on a user's device using Apple Silicon. More complex queries, however, will be sent to the company's private cloud, and some requests can be handed off, with the user's permission, to OpenAI and its large language model (LLM).

While companies will have to wait to see how Apple's commitment to security plays out, the company has put a lot of consideration into how GenAI services will be handled on devices and how the information will be protected, says Joseph Thacker, principal AI engineer and security researcher at AppOmni, a cloud-security firm.

"Apple's focus on privacy and security in the design is definitely a good sign," he says. "Features like not allowing privileged runtime access and preventing user targeting show they are thinking about potential abuse cases."

Apple spent significant time during its announcement reinforcing the idea that the company takes security seriously, and published a paper online that describes the company's five requirements for its Private Cloud Compute service, such as no privileged runtime access and hardening the system to prevent targeting specific users.

Still, LLMs such as ChatGPT and other forms of GenAI are new enough that the threats remain poorly understood, and some will slip through Apple's efforts, says Steve Wilson, chief product officer at cloud security and compliance provider Exabeam and lead for the Open Web Application Security Project's Top 10 Security Risks for LLMs.

"I really worry that LLMs are a very, very different beast, and traditional security engineers, they just don't have experience with these AI techniques yet," he says. "There are very few people who do."

Apple Makes Security a Centerpiece

Apple seems to be aware of the security risks that concern its customers, especially businesses. The implementation of Apple Intelligence across a user's devices, dubbed the Personal Intelligence System, will connect data from applications in a way that, perhaps, has only previously been done through the company's health-data services. Conceivably, every message and email sent from a device could be reviewed by AI, with context added through on-device semantic indexes.

Yet, the company pledged that, in most cases, the data never leaves the device, and the information is anonymized as well.
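While Apple has not published implementation details, the idea behind an on-device semantic index can be illustrated with a toy sketch: each message or email is converted locally into a numeric representation that can be searched by meaning, so the raw text never has to leave the device just to answer a question about it. The sketch below uses naive bag-of-words vectors as a stand-in for a real embedding model; every name is hypothetical, and nothing here reflects Apple's actual code.

```python
# Toy illustration of an on-device semantic index (hypothetical; not Apple's code).
# Each document is embedded and queried locally, so raw messages never need to
# leave the device to answer "what did I say about X?" style questions.
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Stand-in for a real on-device embedding model: a bag-of-words vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# A local "index" over a user's messages and emails.
documents = {
    "msg-1": "Dinner with Maria moved to Thursday at 7",
    "email-2": "Quarterly security review scheduled for Friday",
    "msg-3": "Flight to Denver lands at 9:40 pm",
}
index = {doc_id: embed(text) for doc_id, text in documents.items()}

def search(query: str, top_k: int = 1) -> list:
    q = embed(query)
    ranked = sorted(index, key=lambda doc_id: cosine(q, index[doc_id]), reverse=True)
    return ranked[:top_k]

print(search("when is dinner with Maria"))  # -> ['msg-1']
```

In practice, the usefulness of such an index depends entirely on the embedding model, but the privacy property is the same: both the index and the queries stay on the device.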

"It is aware of your personal data, without collecting your personal data," Craig Federighi, senior vice president of software engineering at Apple, stated in a four-minute video on Apple Intelligence and privacy during the company's June 10 launch, adding: "You are in control of your data, where it is stored and who can access it."

When data does leave the device, it will be processed in the company's Private Cloud Compute service, allowing Apple to take advantage of more powerful server-based generative AI models while still protecting privacy. The company says that data sent to the service is never stored and is not accessible, even to Apple itself. In addition, Apple will make every production build of its Private Cloud Compute platform available to security researchers for vulnerability research, in conjunction with a bug-bounty program.

Such steps seemingly go beyond what other companies have promised and should assuage the fears of enterprise security teams, AppOmni's Thacker says.

"This type of transparency and collaboration with the security research community is important for finding and fixing vulnerabilities before they can be exploited in the wild," he says. "It allows Apple to leverage the diverse skills and perspectives of researchers to really put the system through the wringer from a security testing perspective. While it's not a guarantee of security, it will help a lot."

There's an App for (Leaking) That

However, the interactions between apps and data on mobile devices, and the behavior of LLMs, may be too complex to fully understand at this point, says Exabeam's Wilson. The attack surface of LLMs continues to surprise even the large companies behind the major AI models. Following the release of its latest Gemini model, for example, Google had to contend with inadvertent data poisoning stemming from untrusted data.

"Those search components are falling victim to these kind of indirect injection data-poisoning incidents, where they're off telling people to eat glue and rocks," Wilson says. "So it's one thing to say, 'Oh, this is a super-sophisticated organization, they'll get this right,' but Google's been proving over and over and over again that they won't."

Apple's announcement comes as companies are quickly experimenting with ways to integrate GenAI into the workplace to improve productivity and automate traditionally tough-to-automate processes. Bringing the features to mobile devices has happened slowly, but now, Samsung has released its Galaxy AI, Google has announced the Gemini mobile app, and Microsoft has announced Copilot for Windows.

While Copilot for Windows is integrated with many applications, Apple Intelligence appears to go beyond even Microsoft's approach.

Think Different (About Threats)

Overall, companies first need to gain visibility into their employees' use of LLMs and other GenAI. They do not need to go as far as Elon Musk, the billionaire and early OpenAI backer who raised concerns that Apple or OpenAI would abuse users' data or fail to secure business information, and who pledged to ban iPhones at his companies. But chief information security officers (CISOs) certainly should have a discussion with their mobile device management (MDM) providers, Exabeam's Wilson says.

Right now, controls to regulate data going into and out of Apple Intelligence do not appear to exist and, in the future, may not be accessible to MDM platforms, he says.

"Apple has not historically provided a lot of device management, because they are leaned in on personal use," Wilson says. "So it's been up to third parties for the last 10-plus years to try and build these third-party frameworks that allow you to install controls on the phone, but it's unclear whether they're going to have the hooks into [Apple Intelligence] to help control it."

Until more controls come online, enterprises need to set policy and find ways to integrate their existing security controls, authentication systems, and data loss prevention (DLP) tools with AI, says AppOmni's Thacker.

"Companies should also have clear policies around what types of data and conversations are appropriate to share with AI assistants," he says. "So while Apple's efforts help, enterprises still have work to do to fully integrate these tools securely."

About the Author

Robert Lemos, Contributing Writer

Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT's Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline Journalism (Online) in 2003 for coverage of the Blaster worm. Crunches numbers on various trends using Python and R. Recent reports include analyses of the shortage in cybersecurity workers and annual vulnerability trends.

