The Fall of the National Vulnerability Database

Since its inception, three key factors have affected the NVD's ability to classify security concerns — and what we're experiencing now is the result.

Brian Fox, CTO & Co-Founder, Sonatype

May 16, 2024


COMMENTARY

In the realm of cybersecurity, understanding your biggest vulnerabilities is essential. The National Institute of Standards and Technology (NIST) initially established the National Vulnerability Database (NVD) to provide a centralized hub for cybersecurity vulnerability intelligence — but did so under the assumption of rational actors making rational decisions and coming to rational conclusions. 

While it was never meant to be the be-all and end-all solution, the NVD is currently the most widely used software vulnerability database in the world, with many scanners, analysts, and vendors depending on it daily to determine what software has been affected by a vulnerability. Yet it was recently revealed that NIST has not enriched vulnerabilities listed in the NVD since Feb. 12 — meaning anyone relying on these reports has potentially been at risk for months.

While it seems abrupt on the surface, this disruption is actually a systemic issue that has evolved over time. Since the NVD's inception nearly 25 years ago, three key factors have impacted its ability to classify security concerns well enough to help the industry prioritize vulnerabilities — and what we're experiencing now is the result.

Three Factors Affecting the NVD

1. Credit-Seeking Contributors

Originally, vulnerabilities listed in the NVD hailed from seasoned researchers or well-established practitioners, and the assignment of a CVE (common vulnerabilities and exposures) served as acknowledgment for a job well done. However, as software security gained importance over time, an influx of aspiring researchers, often with scant experience, sought to leverage the NVD and CVE as springboards into the industry.

They wanted the credit for new findings as an accolade of their contributions to the industry — similar to how a budding developer contributes to prominent open source projects. In its initial stages, this trend served as a viable résumé-building strategy. But, as more inexperienced researchers flooded the world with vulnerabilities, the quality of reports started to decline.  

2. Widespread Accessibility 

At the same time, the globalization of the Internet enabled researchers worldwide to partake in, and potentially impact, the industry in a meaningful way. It was no longer just a handful of seasoned researchers from select regions being credited with CVEs, and this second wave of people seeking recognition further increased the number of low-quality reports. 

Along with the rise of inexperienced researchers, widespread accessibility opened the doors for security vulnerabilities to be monetized on the Dark Web. While the payout might not be worth the risk for someone in an industrialized economy, it could be life-altering for someone in another part of the world. Rather than being credited for findings, some contributors opted to use vulnerabilities to commit a crime or sell the information to actors who would.  

3. Monetary Incentives

In response to the above, bug bounties emerged as an incentive for researchers to disclose vulnerabilities to vendors rather than use them to do harm. The theory was that this would balance out the market and stop people from going over to the "dark side" of vulnerability detection. 

Reporting vulnerabilities quickly became a numbers game. Rather than focusing on doing good work and gaining credit for it, this third cohort focused on pushing out as many reports as possible with as little effort as possible, hoping a few would hit a bounty payout so they could cash the check and move on.

Impact to Vendors 

Now, vendors face an onslaught of security disclosures stemming from basic usage of free security tools that produce false positives and inaccurate or irrelevant findings. All of this noise has significantly increased the number of reports vendors must sift through daily, and the vast majority fail to provide any meaningful insight into exploitability. When everyone spends so much time dealing with junk, there is less time to focus on quality research.

This surge mirrors the proliferation of email scams in the late 1990s and early 2000s, which evolved from sophisticated schemes into boilerplate tactics as opportunists worldwide sought to capitalize on the financial gains. Crucially, though, this isn't an indictment of individuals with limited access to education or technology. Everyone deserves an opportunity to carve out their niche and be duly compensated for their contributions — but the current state of affairs is a predictable outcome of the structured "rules of the game" we established.

The Aftermath

As the number of CVEs being reported dramatically increased, the CVE program worked toward a federated model by introducing CVE Numbering Authorities (CNAs): organizations that go through a process to become certified and trusted to issue CVEs directly. This federation allowed the program to scale to handle the new load.

In contrast, the NVD remained an essentially single-threaded system, in which hired researchers performed additional analysis on each CVE to assign it a Common Vulnerability Scoring System (CVSS) score and identify the affected software (Common Platform Enumeration, or CPE).
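To make the enrichment concrete: a raw CVE entry merely names a flaw, while NVD enrichment attaches a CVSS score (for prioritization) and CPE identifiers (so scanners can match the flaw against installed software). A minimal sketch in Python, parsing a hypothetical record shaped like the NVD CVE API 2.0 JSON — the CVE ID, score, and vendor/product in the CPE string are invented for illustration:

```python
import json

# Hypothetical excerpt of an enriched NVD record (CVE API 2.0-style schema).
# The CVE ID, score, and CPE criteria below are illustrative, not real data.
record = json.loads("""
{
  "id": "CVE-2024-0001",
  "metrics": {
    "cvssMetricV31": [
      {"cvssData": {"baseScore": 9.8, "baseSeverity": "CRITICAL"}}
    ]
  },
  "configurations": [
    {"nodes": [
      {"cpeMatch": [
        {"criteria": "cpe:2.3:a:examplevendor:examplelib:*:*:*:*:*:*:*:*",
         "versionEndExcluding": "1.4.2",
         "vulnerable": true}
      ]}
    ]}
  ]
}
""")

# Enrichment piece 1: the CVSS base score, used to rank remediation urgency.
score = record["metrics"]["cvssMetricV31"][0]["cvssData"]["baseScore"]

# Enrichment piece 2: the CPE criteria, which tell scanners *which* software
# (vendor, product, version range) the CVE actually affects.
cpes = [
    match["criteria"]
    for config in record["configurations"]
    for node in config["nodes"]
    for match in node["cpeMatch"]
    if match["vulnerable"]
]

print(score)    # 9.8
print(cpes[0])  # cpe:2.3:a:examplevendor:examplelib:*:*:*:*:*:*:*:*
```

Without the `metrics` and `configurations` blocks filled in, a scanner has only a CVE number and a text description — which is exactly the state of CVEs the NVD has published but not enriched since February.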

The convergence of these factors created a flood of low-quality reports that has exacerbated researcher scaling challenges within the NVD program. The recent halt on enriched vulnerabilities underscores the imperative to refine existing frameworks and foster an environment where genuine contributions are recognized and noise is minimized. It is also an opportunity to rethink the structure of these systems: a federated model such as the CNA program is designed to scale, and adding scoring and software identification to the CVEs that CNAs assign shouldn't be a heavy lift.

If we want to ensure the integrity and efficacy of our collective security efforts, the cybersecurity community must reassess its reliance on the NVD and adapt its processes to meet the evolving dynamics of vulnerability management.

About the Author

Brian Fox

CTO & Co-Founder, Sonatype

Brian Fox, co-founder and chief technology officer at Sonatype, brings over 20 years of hands-on experience driving software development for organizations of all sizes. A recognized figure in the Apache Maven ecosystem and a longstanding member of the Apache Software Foundation, Brian has played a crucial role in creating popular plug-ins like the Maven dependency plug-in and Maven enforcer plug-in. As a governing board member for the Open Source Security Foundation, Brian actively contributes to advancing cybersecurity.
