NIST Misses Opportunity With New 'Minimum Standard' for Software Security Testing
The agency's response to President Biden's executive order creates serious, unresolved questions across the required techniques.
In response to ever-increasing cyberattacks, Executive Order (EO) 14028 on Improving the Nation's Cybersecurity, issued on May 12, 2021, makes cybersecurity a top priority and demands "bold changes":
"The United States faces persistent and increasingly sophisticated malicious cyber campaigns that threaten the public sector, the private sector, and ultimately the American people’s security and privacy … the trust we place in our digital infrastructure should be proportional to how trustworthy and transparent that infrastructure is, and to the consequences we will incur if that trust is misplaced.… Incremental improvements will not give us the security we need; instead, the Federal Government needs to make bold changes and significant investments in order to defend the vital institutions that underpin the American way of life." – Cybersecurity EO §1 (emphasis added).
The EO directed the National Institute of Standards and Technology (NIST) to publish guidelines recommending minimum standards for software security testing within 60 days. Accordingly, NIST released its Guidelines on Minimum Standards for Developer Verification of Software in July. NIST held a public workshop to gather input on the order but did not ask for or accept any public comment on the new guideline itself.
Basic Problems With NIST Software Security Testing Guidelines
Unfortunately, NIST failed to capitalize on this generational opportunity. Having spent the past 25 years of my career doing software security testing on critical applications across a variety of sectors, I can confidently say that the NIST document isn't "bold," and it doesn't qualify as a "minimum standard for security testing" because serious, unresolved questions remain across the required techniques.
What Qualifies as a Threat Model?
The guideline offers no specifics here. Could a vendor simply adopt the OWASP Top 10? Does security testing have to tie back to the threat model? Does the threat model have to be published?
What Qualifies as Static Analysis?
Is "grep" code scanning sufficient? How often do we have to scan? Do the rules matter? What level of code path coverage is required? Are static rules a different threat model? How will we know exactly what was thoroughly tested?
What Qualifies as Dynamic Analysis?
Will any scanner suffice? Does a bug bounty count? Is 80% code coverage realistic for dynamic security tools? Should security testing coverage focus on security-critical code and defenses? Should it include libraries? Are dynamic rules another threat model?
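The coverage question matters because a scanner can run for hours and still never touch the code that defends against the threats in the model. As a toy illustration (assuming the third-party coverage.py package is installed; the request handler is hypothetical), measuring coverage during a dynamic run shows what was actually exercised:

```python
import coverage  # third-party coverage.py package; assumed installed

# Hypothetical system under test. In practice this would be the running
# application, driven by a scanner, fuzzer, or test harness.
def handle_request(path: str) -> str:
    if path.startswith("/admin"):
        return "forbidden"  # security-critical branch
    return "ok"

cov = coverage.Coverage()
cov.start()

# A "dynamic analysis" run that only ever exercises the happy path.
handle_request("/index")

cov.stop()
cov.report(show_missing=True)  # reveals the security-critical branch was never reached
```

Whether 80% of an entire codebase is a realistic, or even meaningful, target is exactly the kind of question the guideline leaves open.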
Which Bugs Are 'Must Fix'?
Does this replace the threat model with yet another list of priorities? Can I say only remote code execution (RCE) bugs must be fixed? Must vendors investigate all false positives? What level of investigation is required? Do test results have to be published?
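Without specifics, "must fix" collapses into whatever triage policy a vendor writes for itself. A minimal sketch (the findings and the policy below are invented for illustration) shows how a narrow reading quietly defers everything else:

```python
# Illustrative triage only: the findings and the "must fix" policy below are
# invented. If "must fix" is read narrowly, everything else quietly falls through.
findings = [
    {"id": "F-101", "class": "remote code execution", "severity": "critical"},
    {"id": "F-102", "class": "authentication bypass", "severity": "high"},
    {"id": "F-103", "class": "information disclosure", "severity": "medium"},
]

MUST_FIX_CLASSES = {"remote code execution"}  # one vendor's narrow reading

must_fix = [f for f in findings if f["class"] in MUST_FIX_CLASSES]
deferred = [f for f in findings if f["class"] not in MUST_FIX_CLASSES]

print("must fix:", [f["id"] for f in must_fix])
print("deferred:", [f["id"] for f in deferred])  # the high-severity bypass is deferred
```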
How to Verify Libraries?
The guideline requires testing libraries for both known and unknown vulnerabilities. Practically, how can organizations handle a 10x increase in the amount of code to analyze?
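Even the first step, knowing which libraries are actually shipped, is nontrivial at scale. Here is a small sketch of that inventory step in Python, using the standard library's importlib.metadata; cross-checking each entry against an advisory database, let alone testing for unknown flaws, is the part the guideline leaves unscoped:

```python
from importlib import metadata

# First step toward "verifying included libraries": enumerate what is actually
# present in this environment. Checking each entry against an advisory database,
# and testing for unknown flaws, is the far larger job.
deps = sorted(
    (dist.metadata["Name"] or "unknown", dist.version)
    for dist in metadata.distributions()
)

print(f"{len(deps)} third-party distributions found")
for name, version in deps:
    print(f"  {name}=={version}")
```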
Lack of Focus
The biggest problem is that the guideline doesn't drive vendors to focus on what matters. Instead, it simply requires vendors to use a list of techniques. It's like requiring automobile manufacturers to "use these different kinds of robots" for safety checks without saying anything about what the robots are supposed to check, whether they cover the important areas, whether they overlap, or which are best at what.
Even if a vendor applies every technique, we simply won't know what was tested or whether serious security issues remain. Without a clear line of sight from the threat model through test results and remediation, vendors will inevitably leave critical aspects of their defenses untested while wasting scarce resources on unimportant tests. This isn't what the EO envisioned:
"The development of commercial software often lacks transparency, sufficient focus on the ability of the software to resist attack, and adequate controls to prevent tampering by malicious actors. There is a pressing need to implement more rigorous and predictable mechanisms for ensuring that products function securely, and as intended." – Cybersecurity EO §4(a) (emphasis added).
'Bold Change' for Application Security Testing
We have two paths before us. On the current path, this becomes yet another ignored security testing guideline while we wait for the inevitable catastrophic cybersecurity disaster. On the other, we make a "bold change" that pushes us toward a future where security testing is meaningful and visible.
Imagine a standard that truly helps buyers understand the security of the software they're considering. NIST could require vendors to produce a simple assurance case that details the assumed threats, the defense strategy for each, and compelling evidence that the associated defenses are complete, correct, and effective.
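As a purely illustrative sketch (the structure, field names, and entries below are mine, not NIST's), such an assurance case could be as lightweight as a machine-readable record linking each assumed threat to its defense strategy and the evidence behind it:

```python
# Illustrative only: one possible machine-readable shape for an assurance case.
# The product, threats, defenses, and evidence entries below are hypothetical.
assurance_case = {
    "product": "ExampleApp 4.2",
    "claims": [
        {
            "threat": "credential stuffing against the login endpoint",
            "defense_strategy": "rate limiting plus MFA enforcement",
            "evidence": [
                "dynamic test run 2021-08-01: 50,000 credential-stuffing attempts blocked",
                "static rule set v7: no authentication bypass paths found",
            ],
        },
        {
            "threat": "deserialization of untrusted data in the import API",
            "defense_strategy": "strict schema validation; unsafe deserializers banned",
            "evidence": [],  # an empty list makes the gap visible to any buyer
        },
    ],
}

# A buyer or auditor can check the one property that matters most:
gaps = [c["threat"] for c in assurance_case["claims"] if not c["evidence"]]
print("claims with no supporting evidence:", gaps)
```

Any buyer or auditor can then verify the property that matters most: every assumed threat has a defense, and every defense has evidence behind it.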
It's truly this simple. This approach gives vendors choices and sparks competition in the market. Focusing on what matters clarifies the expected outcome and eliminates wasted work, possibly even reducing security testing costs over time. Making assurance cases available could even ignite a "race to the top" for security: exactly the type of trust and transparency envisioned in the EO.
I encourage you to share this article and my earlier remarks widely in the hopes that, by doing so, we can join forces and help NIST change the trajectory of software security forever.