It's Cheap to Exploit Software — and That's a Major Security Problem
The solution? Follow in the footsteps of companies that have raised the cost of exploitation.
How much would it cost to hack your phone? The best guess for an iPhone is between $0 and $65,000 — and that price mainly depends on you. If you skipped a really important security update, the cost is closer to $0.
Say you were up to date. That $65,000 figure is an upper bound on the cost of exploiting the median individual; switch to an Android device, a Mac, or a PC and it could drop a lot lower. Apple has invested enormous resources in hardening the iPhone. The asking price for an iPhone exploit sold outright, rather than rented as a service, can go as high as $8 million. Compare that to the cost of an exploit for a PDF reader like Adobe Acrobat, notoriously riddled with security vulnerabilities, which according to this TrendMicro research report (PDF) runs $250 and up.
Switch from targeting a specific person to targeting any of the thousands of people at a large company and there are myriad ways in. An attacker only needs to find the cheapest one.
The fact that a modern iPhone exploit sells for millions, versus hundreds for an Adobe Acrobat exploit, is an extraordinary achievement for Apple, worth celebrating and trying to replicate elsewhere. It reflects the enormous resources big tech companies have quietly spent over the past 20 years raising the cost to exploit software.
How Do We Increase the Cost of Exploitation?
Outside the largest technology companies, the idea of trying to make software harder to exploit has often been seen as a lost cause. Imagine there's a worm moving across your network. It's hard to get 1,000 office workers to patch their computers, so you put a firewall at the network perimeter to block the worm's packets. That keeps the worm out, but the machines behind the firewall are still vulnerable if it ever gets inside the network.
The modern approach (zero trust, pioneered by Forrester) is to assume the "perimeter" is already breached — so now each device and application, regardless of network location, needs to be hardened. How? By raising the cost to exploit software itself.
Although this has been seen as a prohibitively expensive approach, it's gaining in popularity. Here are some techniques that have notably raised the cost of exploiting software, along with what makes them expensive or challenging to roll out:
Secure-by-design architecture: Designing out the possibility of common vulnerability patterns that lead to exploits. This is amazingly effective and a part of the iPhone's architecture that is underappreciated by the general public. Secure by design can happen at the hardware layer, or at the language layer, as with Rust, a language created at Mozilla to reduce the probability of the programming mistakes that cause security vulnerabilities in Firefox. Firefox was first released in 2004 and Rust 1.0 in 2015; after years of hard work, Rust now accounts for roughly 10% of Firefox's code. Imagine the effort it would take to port an entire operating system like Linux. Unless you're starting from scratch, secure by design is difficult and slow to implement.
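To make the language-layer idea concrete, here is a minimal Rust sketch (an illustration of mine, not code from Apple or Mozilla): the compiler rejects the use-after-free pattern behind many memory-corruption exploits at build time, before the code ever ships.

```rust
// Language-level secure by design: the borrow checker refuses to compile
// the use-after-free pattern that C and C++ happily accept.
fn main() {
    let data = vec![1, 2, 3];
    let first = &data[0]; // borrow a reference into the vector's buffer

    // drop(data);
    // ^ Uncommenting this frees the buffer while `first` still points into
    //   it. Rust rejects the program at compile time:
    //   error[E0505]: cannot move out of `data` because it is borrowed

    println!("first element: {first}"); // the borrow is still live here
}
```

The equivalent C would compile cleanly and corrupt memory at run time; designing the mistake out of the language is exactly what raises the exploit cost.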
Hardware and operating system exploit mitigation: Arguably this is another perimeter, but one that is built in, which makes it effective: an application can't run outside it, because it needs the operating system to execute at all. This approach was a big part of hardening in the early 2000s, notably write-xor-execute (W^X) memory protections on Linux and Microsoft's Data Execution Prevention. More recent approaches, such as control flow integrity, are theoretically sound but often carry performance costs developers generally aren't willing to pay.
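For a feel of what DEP/W^X buys, here is a deliberately crashing Rust sketch of my own (it assumes a Unix-like OS and the libc crate; the crash is the point): it plants a byte of machine code in a data page and tries to jump to it, and on a system enforcing non-executable memory the CPU faults instead of running the injected code.

```rust
// Non-executable memory (DEP / W^X) in action. Assumes a Unix-like OS and
// the `libc` crate; the expected outcome is a segmentation fault.
use std::mem;
use std::ptr;

fn main() {
    // One byte of x86-64 machine code: `ret`.
    let code: [u8; 1] = [0xC3];

    unsafe {
        // Map a page that is readable and writable but NOT executable,
        // the way heaps and stacks are mapped on hardened systems.
        let page = libc::mmap(
            ptr::null_mut(),
            4096,
            libc::PROT_READ | libc::PROT_WRITE, // note: no PROT_EXEC
            libc::MAP_PRIVATE | libc::MAP_ANONYMOUS,
            -1,
            0,
        );
        assert_ne!(page, libc::MAP_FAILED, "mmap failed");

        // Plant the "shellcode" in the data page, as an attacker would.
        ptr::copy_nonoverlapping(code.as_ptr(), page.cast::<u8>(), code.len());

        // Jump to it. With DEP/W^X enforced, the CPU faults (SIGSEGV)
        // instead of executing the injected bytes: the step that used to
        // be free for an attacker now fails outright.
        let injected: extern "C" fn() = mem::transmute(page);
        injected();
    }
}
```

Turning "attacker runs code" into "attacker crashes the process" is precisely the kind of cost increase this technique delivers.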
Pay for vulnerabilities (also known as a bug bounty): Perhaps ironically, one of the cheapest techniques is simply to pay hackers to share what they find. In theory, hackers could monetize exploits for far more than a vendor will pay, but in practice extracting that value takes a lot of work and may confront a hacker with ethical quandaries. Bug bounties are especially well suited to companies with many Internet-facing services, as they require little work to set up.
Automated testing tools: Starting in the early 2000s, several startups formed around automated testing for security issues. The idea of code finding bugs in code seems intuitive, but it is prone to noise, since human reviewers have context that is difficult to encode in a static analysis. It remains popular because it's relatively low friction to implement: set up a job that scans code as it moves through the development life cycle. There is a large market of tools that scan code at build time (SAST) and at run time (DAST); the most common complaint about them is their high volume of false positives.
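For a flavor of what these scanners look for, here is a hypothetical Rust sketch (the function names are mine, not tied to any particular tool) of the classic tainted-data pattern a SAST tool flags, next to the parameterized form it suggests instead.

```rust
// The pattern SAST tools flag: untrusted input concatenated into SQL text.
fn build_query_unsafe(user_input: &str) -> String {
    // Flagged: user input flows directly into the query string, so input
    // like `' OR '1'='1` rewrites the query's meaning (SQL injection).
    format!("SELECT * FROM users WHERE name = '{}'", user_input)
}

// The fix scanners suggest: fixed SQL with a placeholder; the database
// driver later binds the input as data, never as query text.
fn build_query_parameterized(_user_input: &str) -> &'static str {
    "SELECT * FROM users WHERE name = ?1"
}

fn main() {
    println!("{}", build_query_unsafe("' OR '1'='1"));
    println!("{}", build_query_parameterized("' OR '1'='1"));
}
```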
Manual or automated code reviews: Transferring expertise from more senior to more junior developers, or using tools that lint for simple anti-patterns automatically. This can be disproportionately effective. Code review automation can implement a less ambitious version of secure by design, called "secure guardrails": instead of ground-up re-architecting, automated comments steer developers toward APIs that avoid a whole class of vulnerabilities.
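Here is one hypothetical shape a secure guardrail can take in Rust (SafeQuery is an invented name, not a real library): the API accepts only compile-time constant SQL, so the injectable string-concatenation pattern from the previous sketch simply doesn't type-check, and a review bot only has to say "use SafeQuery."

```rust
// A guardrail API: the vulnerable pattern is unrepresentable by design.
struct SafeQuery {
    sql: &'static str,   // only compile-time constant SQL is accepted
    params: Vec<String>, // runtime values travel separately, as data
}

impl SafeQuery {
    // Requiring `&'static str` means callers cannot pass a String built at
    // runtime from user input; SQL concatenation is designed out.
    fn new(sql: &'static str) -> Self {
        SafeQuery { sql, params: Vec::new() }
    }

    // Runtime values are bound as parameters, never spliced into the SQL.
    fn bind(mut self, value: impl Into<String>) -> Self {
        self.params.push(value.into());
        self
    }
}

fn main() {
    let q = SafeQuery::new("SELECT * FROM users WHERE name = ?1")
        .bind("alice");
    // SafeQuery::new(format!("... '{}'", input)) would not compile:
    // a runtime String is not a &'static str.
    println!("{} with params {:?}", q.sql, q.params);
}
```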
What Are Potential Solutions?
I believe the future requires three things. First, more security engineers and engineering: hiring security engineers with development backgrounds and getting engineering leadership to buy in on raising the cost to exploit software. Second, shifting our focus from detection and response tools that clean up after exploitation to tools that raise the cost to exploit in the first place. Third, building new tools not in an isolated, security-centric world, but in conjunction with developer stakeholders and with the business's need to ship fast in mind.
Software is eating the world, and software is cheap to exploit. We're definitely not going to slow down the former, so let's change the latter.