In Security, Programmers Aren't Perfect
Software developers and their managers must change their perception of secure coding from being an optional feature to being a requirement that is factored into design from the beginning.
April 3, 2019
Fifth in a continuing series about the human element in cybersecurity.
Programmers are responsible for developing and releasing new systems and applications, and then for announcing vulnerabilities and developing updates and patches as bugs are discovered after release. It can take organizations months to apply those patches, which creates a window of opportunity for hackers. What steps can programmers take to minimize security flaws, reduce impediments to the patching process, and shrink this window?
Programmers — sometimes called software engineers, software developers, or coders — are the individuals who write code to build operating systems, applications, and software. They are also responsible for debugging programs and releasing patches to address code vulnerabilities after initial release. In this column, we consider programmers at commercial manufacturers and application/software providers, such as Microsoft or Adobe, and programmers responsible for custom internal applications.
Common Mistakes
Programmers frequently operate under tight deadlines. This pressure to perform on schedule can lead to the neglect of security issues. While they may try to follow best practices to avoid functional bugs and prevent exploitation, programmers may not have time to test all the possible attack scenarios before their deadline, thinking that a patch or security update can be released to address the problem at a later date. But this leaves organizations vulnerable until patch deployment.
The reality is that all code has bugs, but management decisions made during development can significantly influence the severity of these programmer errors. Too often, secure coding is not a foundational element incorporated from the start. Instead, it is bolted on after the fact or, even worse, neglected completely. Additionally, the process for using open source libraries may not be well defined or followed, so open source dependencies and their vulnerabilities may not be tracked or documented, resulting in vulnerable code that is not readily identifiable. Moreover, the priority and speed with which known vulnerabilities in commercial software are addressed may not match the severity of the risk to the customer.
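To make dependency tracking concrete, here is a minimal sketch in Python of what an automated inventory check could look like. It assumes a pinned requirements.txt and queries the public OSV.dev vulnerability database; the file names and the choice of database are illustrative assumptions, not a prescription for any particular tool.

```python
"""Minimal sketch: build a dependency inventory from a pinned requirements
file and check each entry against a public vulnerability database.
Illustrative only; file names and the OSV.dev endpoint are assumptions."""
import json
import urllib.request

OSV_QUERY_URL = "https://api.osv.dev/v1/query"  # public OSV vulnerability API


def parse_requirements(path="requirements.txt"):
    """Yield (name, version) pairs from lines pinned as name==version."""
    with open(path) as fh:
        for line in fh:
            line = line.split("#", 1)[0].strip()  # drop comments and blanks
            if "==" in line:
                name, version = line.split("==", 1)
                yield name.strip(), version.strip()


def known_vulnerabilities(name, version):
    """Return OSV advisory IDs affecting this exact package version."""
    query = {"package": {"name": name, "ecosystem": "PyPI"}, "version": version}
    req = urllib.request.Request(
        OSV_QUERY_URL,
        data=json.dumps(query).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        result = json.load(resp)
    return [v["id"] for v in result.get("vulns", [])]


if __name__ == "__main__":
    inventory = []
    for name, version in parse_requirements():
        advisories = known_vulnerabilities(name, version)
        inventory.append({"package": name, "version": version, "advisories": advisories})
        if advisories:
            print(f"{name}=={version}: {', '.join(advisories)}")
    # Persist the inventory so dependencies and their advisories are documented.
    with open("dependency-inventory.json", "w") as fh:
        json.dump(inventory, fh, indent=2)
```

Even a simple record like this gives a security team something concrete to search when the next disclosure lands, instead of guessing which applications include the affected library.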
Repercussions
Software is ripe for exploitation, and attackers can capitalize on that by exploiting zero-day vulnerabilities for which no patch exists, or by taking advantage of the inefficiencies of the vulnerability discovery and patching process. The issue is exacerbated because programmers often give little thought to how their patches will actually be deployed. Many organizations will not apply patches without proper testing and approval, or hesitate to apply patches that require a reboot that can take critical servers offline.
Potential disruptions, added complexity, and the significant time needed to download resources, secure approvals, and implement patches all discourage organizations from patching promptly, leaving systems vulnerable for longer. To avoid some of this work, organizations may choose to stick with older, more stable versions of the programmer's software.
Although commercial software vendors inform their customers of existing vulnerabilities (as they should), cybercriminals need only wait for patching announcements or vulnerability disclosures to identify their next easy target. A vulnerability disclosure for a widely used commercial application serves as a how-to for hackers, describing how the flaw can be exploited. Hackers often have a golden window of anywhere from two to 90 days, the time it typically takes companies to complete a patch, to take advantage of these vulnerabilities: the classic "Patch Tuesday, Exploit Wednesday" scenario. Two painful examples of the drastic consequences of delayed patching are the proliferation of WannaCry and the Equifax breach.
Minimize Mistakes
Vulnerabilities can be minimized during development by training programmers and teams on security, incorporating application security capabilities from the beginning, and breaking through silos to increase open communication between programmers and the security team. By detecting vulnerabilities as early as possible in the application's development stages, the need for patching later, as well as the length of downtime and the window of vulnerability, can be reduced. Additionally, bug bounty programs, which give outside researchers a legal way to make money from finding vulnerabilities instead of exploiting them, support programmers by surfacing bugs and vulnerabilities proactively.
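To make "detecting vulnerabilities early" concrete, consider one of the most common classes of flaw that secure-coding training and code review are meant to catch: SQL injection. The sketch below is a minimal Python illustration using the standard library's sqlite3 module; the table, field, and function names are hypothetical.

```python
import sqlite3


def find_user_unsafe(conn, username):
    # Vulnerable pattern: untrusted input is concatenated into the SQL text,
    # so an attacker-supplied username such as "x' OR '1'='1" rewrites the query.
    return conn.execute(
        "SELECT id, username FROM users WHERE username = '" + username + "'"
    ).fetchall()


def find_user_safe(conn, username):
    # Parameterized query: the driver treats the input strictly as data,
    # never as SQL, which is the behavior secure-coding reviews look for.
    return conn.execute(
        "SELECT id, username FROM users WHERE username = ?", (username,)
    ).fetchall()


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
    conn.execute("INSERT INTO users (username) VALUES ('alice')")
    attacker_input = "x' OR '1'='1"
    print("unsafe:", find_user_unsafe(conn, attacker_input))  # returns every row
    print("safe:  ", find_user_safe(conn, attacker_input))    # returns nothing
```

The unsafe version hands back every row when fed a crafted username; the parameterized version treats the same input as plain data. Catching a defect like this during development costs minutes; catching it after release costs a patch cycle.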
When it comes to creating patches for known vulnerabilities, time is of the essence: the longer a patch is unavailable, the more opportunity cybercriminals have. When researchers identify vulnerabilities, vendors need to address them, not wait until the researchers present their findings at Black Hat before taking action. Additionally, programmers can design patches to be user-friendly, easy to deploy, and backward compatible so that they cause no disruption, allowing organizations to apply them quickly and confidently rather than worrying that the patch will do more harm than good.
Change the Paradigm
Software developers and their managers need to change their perception of secure coding from being an optional feature that can be pushed to the back burner and added after release, or ignored completely (as is the case for many Internet of Things products), to being a requirement that is factored into the design from the beginning. Programmers should focus on releasing applications with security baked in rather than on pushing out the latest developments as fast as possible and relying on post-release patching.
Security is often seen as a roadblock and an expense in the development process, when in fact it is what enables properly functioning software. Organizations must hold vendors accountable for addressing security issues by demanding that security be treated as required functionality and that programmers be diligent about fixing their inevitable mistakes.
And that same accountability must carry over to the organizations responsible for patching. We know that programmers will make mistakes and leave vulnerabilities in their code. As cybersecurity practitioners, we need to accept that and do our part when they correct those mistakes by promptly applying the patches they release. Organizations that do not have efficient vulnerability and patch management programs should start by automating patching of end-user systems and prioritizing patching of the "notorious five" (Windows, Office, browsers, Adobe, and Java).
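For organizations beginning that automation on Linux end-user systems, even a simple, non-interactive wrapper around the platform's own package manager is a step forward. The sketch below is illustrative only: it assumes common package managers (apt-get, dnf, or yum) and omits the scheduling, reporting, and reboot handling that a real patch management program would need.

```python
"""Minimal sketch of non-interactive patch automation for Linux end-user
systems; it detects an available package manager and runs its standard
upgrade commands. Illustrative only; typically run as root from a
scheduled job."""
import shutil
import subprocess
import sys


def run(cmd):
    print("running:", " ".join(cmd))
    subprocess.run(cmd, check=True)


def apply_updates():
    if shutil.which("apt-get"):
        # Debian/Ubuntu: refresh package lists, then apply available upgrades.
        run(["apt-get", "update"])
        run(["apt-get", "-y", "upgrade"])
    elif shutil.which("dnf"):
        run(["dnf", "-y", "upgrade"])
    elif shutil.which("yum"):
        run(["yum", "-y", "update"])
    else:
        sys.exit("no supported package manager found")


if __name__ == "__main__":
    apply_updates()
```

The point is not this particular script but the habit it represents: routine, unattended patching of the systems attackers reach first, so that the window between disclosure and deployment stops being measured in months.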
Previously in our series, we covered end users, security leaders, security analysts, and IT security administrators. Coming up next: attackers.