The Evolution of Patch Management: How and When It Got So Complicated

In the wake of WannaCry and its ilk, the National Vulnerability Database arose to help security organizations track and prioritize vulnerabilities to patch. Part 1 of 3.

Srinivas Mukkamala, Chief Product Officer, Ivanti

January 10, 2022

4 Min Read

Looking back, patch management was not originally a cybersecurity issue; it was an IT issue. It wasn't until the emergence of Code Red in 2001 that Microsoft started issuing patches to plug security vulnerabilities in its software. Until then, there had been only sporadic security incidents, but nothing large in scale where you would see viruses and malware spreading across geographies. Patch management as a security practice came to prominence again with the massive Internet worms of 2009, 2011, and 2012, and later WannaCry in 2017, which crippled entire enterprise networks. These incidents set the stage for widespread adoption of regular patch management cycles among enterprises.

As these large-scale attacks that infected entire networks across geographies became more prevalent, the industry moved toward developing a system to catalog and track vulnerabilities. The first, the Common Vulnerabilities and Exposures (CVE) List, was created back in 1999 and was initially used by US federal agencies on the recommendation of the National Institute of Standards and Technology, which published "Use of the Common Vulnerabilities and Exposures (CVE) Vulnerability Naming Scheme" in 2002 and updated it in 2011. However, widescale use didn't come until 2011, with the development of the National Vulnerability Database (NVD).

The NVD serves as a comprehensive cybersecurity vulnerability database that integrates all publicly available US government vulnerability resources and provides references to industry resources. It is synchronized with, and based on, the CVE List, and each entry is rated for severity using the Common Vulnerability Scoring System (CVSS). The NVD became an effective tool for security organizations to track vulnerabilities and determine which ones to prioritize based on their risk scores.
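The prioritization described above can be sketched in a few lines: sort known vulnerabilities by CVSS base score and bucket them into the standard CVSS v3 severity bands. The CVE IDs and scores below are hypothetical, used purely for illustration; in practice the data would come from an NVD feed.

```python
# Each entry: (CVE ID, CVSS base score). These values are made up.
vulns = [
    ("CVE-2021-0001", 5.3),
    ("CVE-2021-0002", 9.8),
    ("CVE-2021-0003", 7.5),
]

def severity(score: float) -> str:
    """Map a CVSS v3 base score to its qualitative severity band."""
    if score >= 9.0:
        return "Critical"
    if score >= 7.0:
        return "High"
    if score >= 4.0:
        return "Medium"
    if score > 0.0:
        return "Low"
    return "None"

# Patch the highest-scoring vulnerabilities first.
for cve, score in sorted(vulns, key=lambda v: v[1], reverse=True):
    print(f"{cve}: {score} ({severity(score)})")
```

Real programs mix the CVSS score with asset criticality and exploit intelligence, but score-ordered triage is the baseline the NVD made possible.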

From 2011 on, patch management started to evolve into a security best practice throughout the industry. However, as the volume of vulnerabilities in the database continued to grow and the complexity of IT infrastructure increased, patch management became far from easy. It's not always as simple as updating a piece of software. Some systems are mission-critical and cannot afford disruption. And some organizations don't have the dedicated resources, in either budget or talent, to test, deploy, and install patches on a regular basis.

The creation of the NVD was a huge first step in vulnerability and patch management for the industry. Yet two emerging issues would lead to the complications the industry is experiencing today with patch management. The first issue is time: there will always be latency. Once an attacker, researcher, or vendor identifies a vulnerability, the clock starts ticking. It's a race against time, from the moment a vulnerability is disclosed to when a patch is issued and then applied, to ensure that the vulnerability is not exploited by a bad actor. That latency used to run 15 to 60 days; today, we're down to a couple of weeks.

But not every vulnerability has a solution. There is a common misconception that every vulnerability can be fixed by a patch, but this is not the case. Data shows that only 10% of known vulnerabilities can be covered by patch management. That leaves the other 90% unpatchable, giving organizations two choices: either apply a compensating control or fix the code.

The second issue is that the NVD essentially became weaponized by bad actors. While it was designed to help organizations defend against threat actors, within a short time frame that very same tool was being used to launch offensive attacks. Just in the last five years, threat actors have enhanced their offensive skills with automation and machine learning. Today they can quickly and easily scan for unpatched systems based on the vulnerability data in the NVD: automated tooling determines which software versions an organization is running, then cross-checks them against the NVD to find what has yet to be patched.
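The cross-check described above is simple to mechanize, which is exactly why it scales so well for attackers. The sketch below matches a scanned inventory of installed software against a vulnerability feed of the kind derived from NVD data; every product name, version, and CVE ID here is hypothetical.

```python
# What a scan of the target found installed (hypothetical inventory).
installed = {
    "examplehttpd": "2.4.1",
    "examplessl": "1.0.2",
}

# Vulnerability records: product -> list of (affected version, CVE ID).
# In practice a mapping like this is built from NVD data; these
# entries are invented for illustration.
vuln_feed = {
    "examplehttpd": [("2.4.1", "CVE-2021-1111")],
    "examplessl": [("1.0.1", "CVE-2021-2222")],
}

def unpatched(installed, vuln_feed):
    """Return (product, version, CVE) for every installed version
    that matches an entry in the vulnerability feed."""
    hits = []
    for product, version in installed.items():
        for affected, cve in vuln_feed.get(product, []):
            if version == affected:
                hits.append((product, version, cve))
    return hits

print(unpatched(installed, vuln_feed))
```

Defenders run the same query to find their own exposure; attackers run it to find yours. The data is identical, which is the asymmetry the next paragraph describes.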

Now we have an asymmetric war: organizations trying to stay on top of patch management to ensure that every single vulnerability is fixed, and bad actors looking for the one vulnerability that has not yet been patched. It all boils down to one missing patch; that is all it takes for a security incident. This is why patch management is now a mandatory part of an organization's security program, not just a responsibility of the IT department.

Today, patch management is a mandatory practice for demonstrating compliance with security regulations. It's also a requirement for cyber insurance. With ransomware on the rise, including attacks on mission-critical hospital systems where downtime can mean life or death, patch management is under tight scrutiny, and rightfully so. Yet IT and security teams are stretched thin and cannot keep up with the task; it's not humanly possible. The industry needs a new approach: automating patch management, which will be discussed in Part 2 of this series.


About the Author

Srinivas Mukkamala

Chief Product Officer, Ivanti

Dr. Srinivas Mukkamala is Chief Product Officer at Ivanti. Prior to Ivanti, he was a co-founder and CEO of RiskSense, a risk-based vulnerability management company. Srinivas is a recognized expert on cybersecurity and artificial intelligence (AI), one of the early researchers to introduce support vector machines for intrusion detection and exploit labeling.

He was part of a think tank that collaborated with the US Department of Defense and US Intelligence Community on applying these concepts to cybersecurity problems. Dr. Mukkamala was also a lead researcher for CACTUS (Computational Analysis of Cyber Terrorism against the US) and holds a patent on Intelligent Agents for Distributed Intrusion Detection System and Method of Practicing.

