The Apple App Store Incident: Trouble in Paradise?

The fact that Apple’s security model has worked so well in the past doesn’t mean it will work well forever. Here’s why.

Paul Drapeau, Principal Security Researcher, Confer

January 22, 2016


Apple’s App Store and development ecosystem is often described as a walled garden because of its closed developer program and stringent app review. Since 2008, Apple’s approach has been reasonably successful in preventing malicious iOS apps from making their way into the store and onto users’ iPhones and iPads. But a very public incident in September 2015 highlighted a weakness in Apple’s security model that may spell trouble in the future.

Initially, Apple confirmed that 39 malware-infected apps had been found and removed from its China App Store, including popular messaging and car-hailing services. The tainted apps, which one research firm called “very harmful and dangerous,” contained code that steals sensitive user and device information. Independent researchers soon revised the number of infected apps upwards to 344, then 4,000, saying they may have been downloaded to hundreds of millions of devices.

This incident highlights an important issue: Developers are an attractive target for propagating advanced attacks because the product of their labor is widely distributed downstream and is trusted by end users and organizations.

In this case, legitimate developers in China were enticed into downloading an illicit copy of Apple’s Xcode developer toolkit from a local server rather than from official Apple sources. The attackers were likely exploiting the human tendency to take shortcuts: because of China’s internet restrictions, downloads from the U.S. can take up to three times longer. Unbeknownst to the developers, the tampered toolkit (dubbed XcodeGhost) inserted malicious code into the apps they built. Submitted to the App Store under the developers’ own trusted credentials, those apps were assumed to be safe.

Although Apple moved quickly to repair the damage, the incident is troubling in a number of ways. It tells cybercriminals that Apple security is not invincible and that attacking higher up the value chain, by compromising developer tools and credentials, can be an effective approach.

A failure on two counts

The fact that the breach was discovered by external researchers suggests that Apple’s own security failed on two counts: Apple didn’t detect the developers’ use of the illicit tools and didn’t recognize the presence of malicious apps once they had gotten into the store. Furthermore, Apple is moving toward a similar development-and-deployment paradigm for Mac applications, potentially exposing them to the same risks as iOS apps.

This incident should serve as a warning. The fact that Apple’s security model has worked so well in the past doesn’t mean it will work forever. That’s especially true as the number of iOS devices gets larger—Apple had sold more than a billion as of last January—and the content they transport or store gets juicier. Today’s developer shops run almost exclusively on Macs. A huge number of business executives, government officials, and senior professionals conduct sensitive communications on their iPhones. Given this reality, attackers will persist in looking for new, creative ways to compromise those devices for profit, political gain, or sheer spite.

There’s no indication that Apple plans to overhaul its security regime any time soon. However, the recent insertion of malicious code could have been detected and shut down earlier, and with less damage, if Apple’s security strategy had employed large-scale behavioral analytics on the endpoints themselves, alongside other security measures.
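
To make the notion of large-scale behavioral analytics on endpoints a bit more concrete, here is a minimal, hypothetical sketch; it does not describe Confer’s product or anything Apple ships, and every name and record in it is invented for illustration. The idea is simply that endpoint agents report which network destinations each app contacts, and any destination the fleet has never seen for that app gets flagged for review.

```python
from collections import defaultdict

# Hypothetical telemetry records: (device_id, app_id, destination_host).
# In a real deployment these would stream in from endpoint agents at scale.
telemetry = [
    ("dev-1", "com.example.chat", "api.example.com"),
    ("dev-2", "com.example.chat", "api.example.com"),
    ("dev-1", "com.example.chat", "unknown-analytics.example.net"),
    ("dev-3", "com.example.chat", "unknown-analytics.example.net"),
]

# Destinations previously observed for each app across the whole fleet.
baseline = {"com.example.chat": {"api.example.com"}}

def new_destinations(records, known):
    """Group never-before-seen destinations per app, noting which devices hit them."""
    flagged = defaultdict(set)
    for device, app, host in records:
        if host not in known.get(app, set()):
            flagged[(app, host)].add(device)
    return flagged

for (app, host), devices in new_destinations(telemetry, baseline).items():
    print(f"{app} -> {host}: first seen on {len(devices)} device(s); flag for review")
```

A real system would of course need far richer signals (process behavior, file activity, reputation lookups) and a way to separate benign app updates from malicious beaconing, but even this crude fleet-wide baseline would surface an app that suddenly starts talking to a command-and-control server it never contacted before.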

On the developer side of such an attack, it’s important to compare the reputation of each developer’s toolset against known-good tools. By detecting if and when new versions show up in the environment, and where they were downloaded from, it’s possible to quickly identify illicit sources or versions of the developer toolchain. On the end-user side, Apple would benefit from opening the iOS platform to allow monitoring of a device’s network connections, backed by reputation services and behavioral detection.
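
As a rough sketch of that developer-side check (not any vendor’s actual implementation), the snippet below compares the hash of an installed toolchain binary against an allowlist of known-good digests; the path, version label, and digest value are placeholders.

```python
import hashlib
from pathlib import Path

# Hypothetical allowlist of SHA-256 digests for known-good toolchain builds.
# Real values would come from a trusted source (e.g., installers obtained
# directly from Apple), not from the placeholder shown here.
KNOWN_GOOD_DIGESTS = {
    "Xcode 7.0.1": "0000000000000000000000000000000000000000000000000000000000000000",
}

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file from disk and return its SHA-256 hex digest."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def toolchain_is_trusted(binary_path: str, version: str) -> bool:
    """True only if the binary exists and its digest matches the allowlist entry."""
    path = Path(binary_path)
    expected = KNOWN_GOOD_DIGESTS.get(version)
    if expected is None or not path.is_file():
        return False  # Unknown version or missing file: treat as untrusted.
    return sha256_of(path) == expected

if __name__ == "__main__":
    # Placeholder path; an inventory tool would check every developer machine.
    path = "/Applications/Xcode.app/Contents/MacOS/Xcode"
    print("trusted" if toolchain_is_trusted(path, "Xcode 7.0.1") else "FLAG FOR REVIEW")
```

In practice, a build-environment inventory would pair a check like this with code-signature validation; after the incident, Apple pointed developers to Gatekeeper’s spctl --assess check as a way to verify their copy of Xcode.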

Historically, Apple’s implicit message about iOS security has been this: “Trust us. We have all these controls in place around tools and credentials, so only secure stuff you can trust gets into the App Store.” But in the wake of this recent attack, enterprises, developers, and end users can be forgiven for wondering, “Is that still enough?”

About the Author

Paul Drapeau

Principal Security Researcher, Confer

Paul Drapeau is a Principal Security Researcher for Confer, which offers endpoint and server security via an open, threat-based, collaborative platform. Prior to joining Confer, he led IT security for a public, global pharmaceutical research-and-development organization. He is an active member of the security community and has spoken at many security and industry conferences, including DEF CON and Infosec World. Paul served as a volunteer organizer and ran the CTF competitions at Security BSides Boston in 2013 and 2014. He holds a bachelor's degree in computer science from URI and various security certifications.
