Privacy Anxiety Pushes Microsoft Recall AI Release Back Again

The Recall AI tool will be available in preview to Copilot+ PC users in December and can be used to record images of every interaction on the device for later review. Critics say that alongside its useful functionality, it introduces major privacy and security concerns.

Microsoft Copilot logo displayed on a smartphone screen. Source: GK Images via Alamy Stock Photo

Microsoft has once again delayed the release of its new artificial intelligence tool, Recall, while the company works to ensure the trove of handy data the tool collects can't be abused by adversaries.

The Recall tool will be part of the suite of AI features Microsoft delivers on its Copilot+ PCs alongside its Copilot assistant. Once it rolls out, Recall's job will be to capture "snapshots" of each action on the PC and make them accessible later through a simple search. The software will be able to "recall" the exact moment the user saw a website, used an app, or interacted with a document.

Compelling use cases aside, information security professionals have questioned Recall's ability to keep its snapshots safe from would-be threat actors. For its part, Microsoft has taken these cybersecurity concerns seriously. In June, the company announced it had added new privacy and security features to Recall just days ahead of its intended rollout date. That release was ultimately pushed back to October to allow extra steps to shore up the tool's security. Now the release date has been pushed back again.

"We are committed to delivering a secure and trusted experience with Recall," according to a statement about the delay from Brandon LeBlanc, senior product manager for Windows. "To ensure we deliver on these important updates, we’re taking additional time to refine the experience before previewing it with Windows Insiders. Originally planned for October, Recall will now be available for preview with Windows Insiders on Copilot+ PCs by December."

Microsoft Pledges to Secure Recall

In late September, David Weston, Microsoft's vice president of enterprise and OS security, detailed the company's commitment to securing Recall data, stressing that the tool is opt-in only, encrypts its snapshots, and includes malware protection, and that its data sits in a virtualization-based security (VBS) enclave that even admin- and kernel-level users cannot access without biometric authentication.

"Using VBS Enclaves with Windows Hello enhanced sign-in security allows data to be briefly decrypted while you use the Recall feature to search. Authorization will time out and require the user to authorize access for future sessions," Weston wrote. "This restricts attempts by latent malware trying to 'ride along' with a user authentication to steal data."

Weston further assured those concerned about Recall's security that in-private browsing information is never saved by Recall; that users have the option to filter specific sites or apps out of Recall's recording; that content filtering keeps data such as credit card and Social Security numbers from being stored; that users can delete stored information by date, content, app, or website; and that an icon clearly shows when snapshots are being saved, so users can easily pause the function.
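
Microsoft has not published how Recall's content filtering works under the hood, but the general idea of screening a snapshot's extracted text for sensitive patterns before storing it can be shown with a minimal sketch. The patterns and function names below are illustrative assumptions, not Recall's actual implementation; real detection of payment card or government ID numbers is considerably more involved.

```python
import re

# Illustrative patterns only -- production-grade detection (e.g., Luhn checks
# for card numbers, locale-specific ID formats) is far more involved.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def should_store_snapshot(extracted_text: str) -> bool:
    """Return False if the snapshot's extracted text appears to contain sensitive data."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(extracted_text):
            print(f"Skipping snapshot: possible {label} detected")
            return False
    return True

if __name__ == "__main__":
    print(should_store_snapshot("Order total: $42.00"))        # True: stored
    print(should_store_snapshot("Card: 4111 1111 1111 1111"))  # False: filtered out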

"Recall’s secure design and implementation provides a robust set of controls against known threats," Weston added. "Microsoft is committed to making the power of AI available to everyone, while retaining security and privacy against even the most sophisticated attacks."

Is Microsoft Eyeing Claude's 'Computer Use' Feature?

It appears Microsoft is taking the cybersecurity community's warnings about Recall's potential business risks seriously, Bugcrowd founder Casey Ellis tells Dark Reading. Redmond might also be watching the recent release of a similar tool in Anthropic's Claude AI before rolling out Recall, he adds.

"After the initial reaction to Recall — and some of the security and privacy concerns raised by how it was implemented — Microsoft appears to be hastening slowly here," Ellis says. "I wouldn’t be surprised if they’re taking the opportunity to learn from how the market responds to and uses Anthropic’s 'computer use' feature, which is very similar to Recall from a privacy, security, and functionality standpoint."

Released just days ago, the computer use feature allows the latest version of Claude to interact with a computer in much the same way a human does. Like Recall, Claude's new feature ingests screenshots from Internet-connected computers. And in its Oct. 22 announcement of the release, Anthropic acknowledged that the tool does indeed come with inherent cybersecurity risks.

"In this spirit, our Trust & Safety teams have conducted extensive analysis of our new computer-use models to identify potential vulnerabilities," the release announcement said. "One concern they've identified is prompt injection — a type of cyberattack where malicious instructions are fed to an AI model, causing it to either override its prior directions or perform unintended actions that deviate from the user's original intent."

Anthropic added that it hopes to work out this and other issues in its public beta phase, which will certainly be of keen interest to Microsoft as it works through its Recall release.

Claude, according to Anthropic, will not use this user-submitted data to train its own AI model. But when it comes to Microsoft, security consultant John Bambenek isn't so sure Recall will adhere to the same standard.

"AI systems require tons of data, which means Microsoft wants all the data on how users are interacting with their computers," Bambenek says. "I am not sure the feature is terribly useful for end users, however, it certainly is for training future models. It has enormous privacy implications, so hopefully the delay is useful in terms of minimizing the risks and potential harms to end users."

While Microsoft's security work on Recall and Anthropic's testing of the Claude feature move forward, Patrick Harr, CEO of SlashNext Email Security, warns that both tools remain vulnerable to cyberattack.

"We continually see phishing and socially engineered attacks from professional groups, mimicking support staff that target company users either through email, other messaging apps, or even bot calls to provide remote access to their desktops," Harr says. "Once accessed into Recall, the threat actors have perfect timeline and information about that user that can be exploited. Proceed with caution until this update is done."

About the Author

Becky Bracken, Senior Editor, Dark Reading

Becky Bracken is a veteran multimedia journalist covering cybersecurity for Dark Reading.
