Apple's AI Moves Will Impact Future Chip, Cloud Security Plans
Analysts say Apple's black-box approach provides a blueprint for rival chip makers and cloud providers.
July 1, 2024
The measures Apple has implemented to prevent customer data theft and misuse by artificial intelligence (AI) will have a marked impact on hardware security, especially as AI becomes more prevalent on customer devices, analysts say.
Apple emphasized customer privacy in new AI initiatives announced during the Worldwide Developers Conference a few weeks ago. It has built an extensive private hardware and software infrastructure to support its AI portfolio.
Apple has full control over its AI infrastructure, which makes it harder for adversaries to break into systems. The company's black-box approach also provides a blueprint for rival chip makers and cloud providers for AI inferencing on devices and servers, analysts say.
"Apple can bolster the abilities of an LLM [large language model] while not having any visibility into the data being processed, which is excellent from both customer privacy and corporate liability standpoints," says James Sanders, an analyst at TechInsights.
Apple's AI Approach
The AI back end includes new foundation models, servers, and Apple Silicon server chips. AI queries originating from Apple devices are packaged in a secure lockbox, unpacked in Apple's Private Cloud Compute, and verified as coming from the authorized user and device; answers are sent back to devices and are accessible only to authorized users. Data isn't visible to Apple or other companies and is deleted once the query is complete.
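Apple has published only an outline of this flow, so it can't be reproduced exactly, but the verify-process-delete pattern it describes is a standard one. A minimal sketch using Python's standard-library `hmac` module, where the shared key, message format, and function names are illustrative assumptions rather than Apple's actual protocol (the real system uses hardware-backed keys and attestation, not a shared secret):

```python
import hashlib
import hmac
import secrets

# Hypothetical shared secret provisioned to one authorized device.
# Apple's real design relies on hardware-backed keys and attestation instead.
DEVICE_KEY = secrets.token_bytes(32)

def sign_query(query: bytes, key: bytes) -> bytes:
    """Device side: tag the query so the server can verify its origin."""
    return hmac.new(key, query, hashlib.sha256).digest()

def handle_query(query: bytes, tag: bytes, key: bytes) -> str:
    """Server side: verify the tag, process the query, then discard the data."""
    expected = hmac.new(key, query, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise PermissionError("query not from an authorized device")
    answer = f"processed {len(query)} bytes"  # stand-in for model inference
    del query  # per the stated policy, data is discarded once the query completes
    return answer

q = b"summarize my notes"
print(handle_query(q, sign_query(q, DEVICE_KEY), DEVICE_KEY))
```

A query carrying a tag that doesn't verify against the device key is rejected before any processing happens, which is the property the lockbox design is after.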
Apple has etched security features directly into its device and server chips to authorize users and protect AI queries. Data remains secure on the device and in transit, through features such as secure boot, file encryption, user authentication, and encrypted communications over the Internet via TLS (Transport Layer Security).
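Apple's client code isn't public, but the TLS piece of the list above is standard practice that any client can enforce: verified certificates, hostname checking, and a modern protocol floor. A minimal sketch with Python's standard-library `ssl` module (the commented-out hostname is a placeholder, not anything Apple uses):

```python
import ssl

# Client-side TLS context with secure defaults: certificate verification
# and hostname checking are enabled by create_default_context().
ctx = ssl.create_default_context()

# Refuse anything older than TLS 1.2.
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# To use it, wrap a TCP socket (placeholder hostname shown):
# import socket
# with socket.create_connection(("example.com", 443)) as sock:
#     with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
#         print(tls.version())
```

The point of the default context is that the safe settings are on unless explicitly disabled, which is the same "secure by default" posture the article attributes to Apple's stack.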
Apple is its own customer with a private infrastructure, which is a big advantage, while rival cloud providers and chip makers work with partners using different security, hardware, and software technologies, Sanders says.
"The implementations of that per cloud vary ... there's not a single way to do this, and not having a single way to do this adds complexity," Sanders says. "My suspicion is that the difficulty of implementing this at scale becomes a lot harder when you're dealing with millions of client devices."
Microsoft's Pluton Approach
But Apple's main rival, Microsoft, is already on its way to end-to-end AI privacy with security features in its chips and Azure cloud. Last month the company announced a class of AI PCs, called Copilot+ PCs, that require a Microsoft security chip called Pluton. The first of these PCs shipped this month with Qualcomm chips and Pluton switched on by default; Intel and AMD will also ship PCs with Pluton chips.
Pluton ensures data in secure enclaves is protected and accessible only to authorized users. The chip is now primed to protect AI customer data, says David Weston, vice president for enterprise and OS security at Microsoft.
"We have a vision for mobility of AI between Azure and client, and Pluton will be at the core of that," he says.
Google declined to comment on its chip-to-cloud strategy.
Intel, AMD, and Nvidia are also building black boxes in hardware that keep AI data safe from hackers. Intel didn't respond to requests for comment on its chip-to-cloud strategy, but in earlier interviews the company said it is prioritizing securing chips for AI.
Security Through Obscurity May Work
But the chip makers' mass-market approach could leave a larger attack surface for adversaries to intercept data or break into AI workflows, analysts say.
Intel and AMD have a documented history of vulnerabilities, including Spectre, Meltdown, and their derivatives, says Dylan Patel, founder of chip consulting firm SemiAnalysis.
"Everyone can acquire Intel chips and try to find attack vectors," he says. "That's not the case with Apple chips and servers."
In contrast, Apple is a relatively new chip designer and can take a clean-slate approach to chip design. A closed stack helps with "security through obscurity," Patel says.
Microsoft has three confidential computing technologies in preview in its Azure cloud: AMD's SEV-SNP, Intel's TDX (Trust Domain Extensions), and confidential computing on Nvidia GPUs. Nvidia's graphics processors have become a hacker target as AI's popularity grows, and the company recently issued patches for high-severity vulnerabilities.
Intel and AMD work with hardware and software partners that plug in their own technologies, which creates a longer supply chain to secure, says Alex Matrosov, CEO of hardware security firm Binarly. That gives hackers more chances to poison or steal the data used in AI, and it complicates patching security holes, since hardware and software vendors operate on their own timelines, he says.
"The technology is not really built from the perspective of seamless integration to focus on actually solving the problem," Matrosov says. "This has introduced a lot of layers of complexity."
Intel and AMD chips weren't originally designed for confidential computing, and firmware-based rootkits could potentially intercept AI processes.
"The silicon stack includes layers of legacy ... and then we want confidential computing. It's not like it's integrated," Matrosov says.