The Case Against Abandoning CrowdStrike Post-Outage

Knee-jerk reactions to major vendor outages could do more harm than good.

Vishaal "V8" Hariprasad, CEO & Co-Founder, Resilience

October 31, 2024


COMMENTARY

The now-infamous July CrowdStrike outage sparked global chaos and countless conversations about vendor security. Despite the industry noise and the myriad headlines about the outage, which may have cost Fortune 500 companies alone more than $5 billion, there is a bigger picture we must consider: how, as an industry, we understand the risks involved and respond to major outages and other cybersecurity crises.

While the CrowdStrike outage was a freak accident, it's likely not the last time we'll witness this kind of meltdown from a major technology provider. As our digital ecosystems become further interconnected and enterprises increasingly put their trust in singular vendors, business leaders will have to grapple with the inevitability of similar events, whether caused by a simple outage or a malicious hack. In every case along that spectrum, however, these leaders must avoid knee-jerk reactions that could ultimately do more harm than good. Blindly switching to a new vendor or making drastic changes to IT and security processes could introduce new gaps that weaken an organization's overall security posture rather than improve it.

Instead of immediately jumping ship, companies should do three things in the wake of a CrowdStrike-like event to ensure business continuity and high operational standards.

1. Assess Vendors' Overall Reliability and Risk

Switching vendors after an incident isn't as simple, or as beneficial, as it may seem. Before switching, businesses must evaluate both the existing and the prospective vendor, and assess each one's overall reliability and risk. For instance, Resilience customer data shows that despite the July incident, CrowdStrike Falcon is highly effective: it has the lowest percentage of material claims among endpoint detection and response (EDR) vendors in Resilience's portfolio, with fewer than 3% of clients running Falcon experiencing a cyber-insurance claim with losses. Of course, no company can (or should) claim the ability to avoid 100% of incidents, but in these kinds of deliberations, it's critical to put a vendor's long-term track record into perspective. On the flip side, if a vendor shows a consistent history of outages or vulnerabilities (as VPN provider Ivanti has of late), poor performance or communication, and long delays in remediation, an incident can be a useful forcing function for considering the benefits of trying something new.

It's also important to account for the costs of switching vendors that go beyond the sticker price, such as implementation time, staff training, and adjusting workflows to incorporate the new system and meet business needs. These third-party risks must be factored into the decision, with leadership weighing the business-interruption costs of an outage with the existing vendor against the total costs of making a switch.
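To make that weighing exercise concrete, a rough back-of-the-envelope comparison might look like the sketch below. Every figure, probability, and function name here is a hypothetical placeholder for illustration, not Resilience data or a recommended model; plug in your own estimates and planning horizon.

```python
# Illustrative sketch: expected annual cost of staying with the current vendor
# versus switching. All numbers below are hypothetical placeholders.

def annualized_stay_cost(outage_probability, interruption_cost, subscription_cost):
    """Expected yearly cost of keeping the current vendor."""
    return subscription_cost + outage_probability * interruption_cost

def annualized_switch_cost(new_subscription_cost, migration_cost, training_cost,
                           new_outage_probability, interruption_cost, years=3):
    """Expected yearly cost of switching, spreading one-time costs over a planning horizon."""
    one_time_per_year = (migration_cost + training_cost) / years
    return new_subscription_cost + one_time_per_year + new_outage_probability * interruption_cost

# Hypothetical inputs for a midsize firm (placeholders, not real figures).
stay = annualized_stay_cost(outage_probability=0.03,
                            interruption_cost=2_000_000,
                            subscription_cost=250_000)
switch = annualized_switch_cost(new_subscription_cost=230_000,
                                migration_cost=400_000,
                                training_cost=150_000,
                                new_outage_probability=0.05,
                                interruption_cost=2_000_000)

print(f"Expected annual cost if we stay:   ${stay:,.0f}")
print(f"Expected annual cost if we switch: ${switch:,.0f}")
```

Even a simple model like this makes the trade-off explicit: migration and training costs, plus a new vendor's unproven track record, can easily outweigh the pain of a single bad day with the incumbent.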

2. Avoid Radical Changes to the Update Process

This particular outage raised the issue of update cadence and testing frequency, with many arguing that the affected organizations should have taken a more cautious, thorough approach to testing the update before rolling it out. But delaying updates is a calculated risk, and not necessarily one that most companies should be taking. Antivirus and EDR signatures are designed to counter quick-breaking, emerging threats to existing systems, so while you can choose to wait the days or weeks it may take to fully test an update, you may be leaving your systems vulnerable to new exploits in the meantime. Threat actors only grow faster and more sophisticated in their approaches, so quick security updates are more crucial than ever. The added exposure that comes with extra testing precautions may not be worth it for every company, especially since most updates roll out seamlessly. After all, part of the reason CrowdStrike made headlines was that the outage was so out of the ordinary.

In a perfect world, where bad actors weren't a constant threat and update processes took no extra time, following the typical best practice of testing each and every update from each and every vendor might be feasible. But in the real world, adding steps to the update process is likely to slow down your defenses. Ultimately, there is no one-size-fits-all answer, and the best approach will depend on the specific organization and its risk tolerance.
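As a rough way to reason about that risk tolerance, consider a simple expected-loss comparison like the sketch below. The probabilities and dollar amounts are hypothetical assumptions, not measured rates; the point is only that holding an update for extended testing trades one small risk for another.

```python
# Illustrative sketch: expected loss of deploying a security update immediately
# versus delaying it for extended testing. All inputs are hypothetical assumptions.

def expected_loss_deploy_now(p_faulty_update, outage_cost):
    """Expected loss if the update ships immediately: a small chance it is faulty."""
    return p_faulty_update * outage_cost

def expected_loss_delay(p_exploit_per_day, breach_cost, delay_days):
    """Expected loss if the update is held for testing: exposure to the threat it patches."""
    p_breached_during_delay = 1 - (1 - p_exploit_per_day) ** delay_days
    return p_breached_during_delay * breach_cost

# Hypothetical inputs (placeholders; tune to your own environment and risk tolerance).
deploy_now = expected_loss_deploy_now(p_faulty_update=0.001, outage_cost=1_500_000)
delay_two_weeks = expected_loss_delay(p_exploit_per_day=0.002,
                                      breach_cost=4_000_000,
                                      delay_days=14)

print(f"Expected loss, deploy immediately: ${deploy_now:,.0f}")
print(f"Expected loss, delay 14 days:      ${delay_two_weeks:,.0f}")
```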

3. Don't Panic

It's tempting to liken incidents like the CrowdStrike outage to natural disasters (such as catastrophic hurricanes), but that oversimplification distracts from the heart of the issue. While many insurers use this analogy to describe cyber events, it's neither accurate nor useful, as it becomes an apples-to-oranges comparison that frames cyber incidents or outages as one-sided and world-ending. The reality is far different. There's no tangible way for individuals to prevent or mitigate the intensity of a major hurricane or tornado, but there are simple, actionable steps we can take to mitigate the financial impact of an outage or counteract the negative effects of a potential attack. These include implementing proper cyber hygiene, transferring financial risk through cyber insurance, and having a detailed cybersecurity action plan ready in the event of an attack or outage. It's not only feasible but essential that companies take these steps to remain operable and functioning in the midst of an incident.

In short, decision-makers across the board can't allow themselves to make reactive or fear-based decisions about their security posture immediately following a cyber incident. These knee-jerk reactions could lead to greater complications and introduce a host of new vulnerabilities just waiting to be exploited. Instead, leaders must focus on understanding the root cause of the incident, learning from it, and making risk-driven decisions that minimize financial loss and improve their organization's overall cyber resilience. This includes taking a proactive approach and incorporating third-party risk management into business continuity planning. That way, you'll be able to avoid catastrophic interruptions to your day-to-day business and maintain continuity and resilience in the face of a cyber incident.

About the Author

Vishaal "V8" Hariprasad

CEO & Co-Founder, Resilience

Vishaal Hariprasad, best known as "V8," co-founded what is now known as Resilience in 2016 to bridge the divide between cyber insurance and cybersecurity. As a licensed insurance broker and producer, as well as a veteran of both the US Air Force and the cybersecurity industry, Vishaal brings the leadership skills he honed during his years in the military to his position as CEO of Resilience. After graduating from the United States Air Force Academy, V8 was commissioned into military service as a Cyber Operations Officer for the Air Force. Hariprasad is an Iraq War veteran and a recipient of a Bronze Star Medal. In 2012, he co-founded Morta Security, which was acquired by Palo Alto Networks, where he then served as a threat intelligence architect. In 2015, V8 was tapped to serve as a founding partner at the Pentagon's newly established Defense Innovation Unit Experimental (DIUx) in Mountain View, California, an office under the Secretary of Defense charged with leveraging commercial technology to solve defense challenges. V8 holds a B.A. in Mathematics from the US Air Force Academy and an M.S. in Information Technology from Virginia Polytechnic Institute and State University (Virginia Tech).
