The Critical Difference Between Vulnerabilities Equities & Threat Equities

Why the government has an obligation to share its knowledge of flaws in software and hardware to strengthen digital infrastructure in the face of growing cyberthreats.

Adam Shostack, Leading expert in threat modeling

November 30, 2017

5 Min Read

In mid-November, Rob Joyce, the White House cybersecurity coordinator, released a set of documents about the "vulnerabilities equities process," which he described in a recent White House blog post:

At the same time, governments must promote resilience in the digital systems architecture to reduce the possibility that rogue actors will succeed in future cyber attacks. This dual charge to governments requires them to sustain the means to hold even the most capable actor at risk by being able to discover, attribute, and disrupt their actions on the one hand, while contributing to the creation of a more resilient and robust digital infrastructure on the other. Obtaining and maintaining the necessary cyber capabilities to protect the nation creates a tension between the government’s need to sustain the means to pursue rogue actors in cyberspace through the use of cyber exploits, and its obligation to share its knowledge of flaws in software and hardware with responsible parties who can ensure digital infrastructure is upgraded and made stronger in the face of growing cyber threats. 

This is a valuable step in the right direction, and the people behind it worked hard to make it happen. However, the effort doesn't go far enough, and those of us in the security industry urgently need to go further to achieve the important goals that Joyce lays out: improving our defenses with knowledge garnered from government offensive and defensive operations.

This is intended as a nuanced critique: I appreciate what's been done. I appreciate that it was hard work, and that the further work will be even harder. And it needs to be done.

The heart of the issue is our tendency in security to want to call everything a "vulnerability." The simple truth is that attackers use a mix of vulnerabilities, design flaws, and deliberate design choices to gain control of computers and to trick people into disclosing things like passwords. For example, in versions of PowerPoint up to and including 2013, there was a feature where you could run a program when someone "moused over" a picture. I understand that feature is gone in the latest Windows versions of PowerPoint but still present in the Mac version. I use this and other examples just to make the issues concrete, not to critique the companies. 
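To make the example still more concrete from a defender's perspective, here is a minimal sketch of a script that hunts for that kind of "run a program on mouse-over" action inside a .pptx file. It is illustrative only, and it rests on my assumptions about how the action is encoded in the Office Open XML markup (an action attribute of roughly the form ppaction://program, plus a slide relationship naming the target, which could be a UNC path); the exact element and attribute names may differ.

    import sys
    import zipfile
    import xml.etree.ElementTree as ET

    # Namespace used for relationship-id attributes in slide markup.
    R_NS = "{http://schemas.openxmlformats.org/officeDocument/2006/relationships}"

    def scan_pptx(path):
        """Report slide elements whose action URI appears to launch a program.

        The ppaction://program pattern is an assumption about the markup, so
        we match loosely rather than depending on exact element names.
        """
        findings = []
        with zipfile.ZipFile(path) as pkg:
            slides = [n for n in pkg.namelist()
                      if n.startswith("ppt/slides/") and n.endswith(".xml")]
            for slide in slides:
                # Relationship targets for this slide; external targets may be
                # UNC paths like \\example.org\very\evil.exe.
                rels = slide.replace("ppt/slides/", "ppt/slides/_rels/") + ".rels"
                targets = {}
                if rels in pkg.namelist():
                    for rel in ET.fromstring(pkg.read(rels)):
                        targets[rel.get("Id")] = rel.get("Target")
                # Flag any element carrying a program-launch action URI.
                for elem in ET.fromstring(pkg.read(slide)).iter():
                    action = elem.get("action", "")
                    if "ppaction://program" in action:
                        rid = elem.get(R_NS + "id")
                        findings.append((slide, action, targets.get(rid)))
        return findings

    if __name__ == "__main__":
        for slide, action, target in scan_pptx(sys.argv[1]):
            print(f"{slide}: {action} -> {target}")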

This is not a vulnerability or a flaw. It's a feature that was designed in. People designed it, coded it, tested it, documented it, and shipped it. Now, can an attacker reliably "weaponize" it, for example by shipping a deck in a zip file alongside a script, or by pointing the action at a UNC path such as \\example.org\very\evil.exe? I don't know. What I do know is that the process as published and described by Joyce explicitly excludes such issues. As stated in the blog post:

The following will not be considered to be part of the vulnerability evaluation process:

  • Misconfiguration or poor configuration of a device that sacrifices security in lieu of availability, ease of use or operational resiliency.

  • Misuse of available device features that enables non-standard operation.

  • Misuse of engineering and configuration tools, techniques and scripts that increase/decrease functionality of the device for possible nefarious operations.

  • Stating/discovering that a device/system has no inherent security features by design.

These issues are different from vulnerabilities. None of them is a bug to fix. I do not envy the poor liaison who gets to argue with Microsoft were this feature to be abused, nor the poor messenger who had to try to convince Facebook that their systems were being abused during the elections. However senior that messenger, it's a hard battle to get a company to change its software, especially when customers or revenue are tied to it. I fought that battle to get Autorun patched in shipped versions of Windows, and it was not easy.

However, the goal, as stated by Joyce, does not lead to a natural line between vulnerabilities, flaws, and features. If our goal is to build more resilient systems, then we need to start by looking at all of the issues that occur and understanding every one of them. We can't exclude the ones that are thought, a priori, to be hard to fix, nor should we let a third party decide what's hard to fix.

The equities process should be focused on the government's obligation to share its knowledge of flaws in software and hardware with responsible parties who can ensure digital infrastructure is upgraded and made stronger in the face of growing cyberthreats. Oh, wait, those are their words, not mine. And along the way, "flaws" gets defined down to vulnerabilities.

At the same time, our security engineering work needs to move beyond vulnerability scanning and pen tests to something comprehensive, systematic, and structured. We need to think about security design, the use of safer languages, better sandboxes, and better dependency management. We need to threat model the systems we're building so that they have fewer surprises.
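To give a flavor of what "comprehensive, systematic, and structured" can look like, here is a toy sketch of one structured habit: walking every element of a system through the STRIDE threat categories so that no combination gets skipped by accident. The system elements named here are hypothetical, and a real threat model analyzes and triages each pair rather than merely listing it.

    # Toy illustration of structured threat enumeration: consider each STRIDE
    # category against each element of a (hypothetical) system so that nothing
    # is skipped by accident.
    STRIDE = [
        "Spoofing",
        "Tampering",
        "Repudiation",
        "Information disclosure",
        "Denial of service",
        "Elevation of privilege",
    ]

    # Hypothetical system elements, for illustration only.
    ELEMENTS = ["browser client", "web API", "job queue", "customer database"]

    def enumerate_threats(elements, categories=STRIDE):
        """Yield (element, category) pairs for a team to review and triage."""
        for element in elements:
            for category in categories:
                yield element, category

    if __name__ == "__main__":
        for element, category in enumerate_threats(ELEMENTS):
            print(f"Consider {category} against the {element}")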

That security engineering work will reduce the number of flaws and exploitable design choices. But we'll still have clever attackers, and we need the knowledge gained from attack and defense to flow to software engineers in a systematic way. A future threats equities process will be a part of that, and industry needs to ask for it sooner rather than later.


About the Author

Adam Shostack

Leading expert in threat modeling

Adam Shostack is a leading expert on threat modeling. He's a member of the BlackHat Review Board, and helped create the CVE and many other things. He currently helps many organizations improve their security via Shostack & Associates, and helps startups become great businesses as an advisor and mentor. While at Microsoft, he drove the Autorun fix into Windows Update, was the lead designer of the SDL Threat Modeling Tool v3, and created the "Elevation of Privilege" game. Adam is the author of Threat Modeling: Designing for Security, and the co-author of The New School of Information Security. His personal home page can be found here.
