A Look Inside Responsible Vulnerability Disclosure
It's time for security researchers and vendors to agree on a standard responsible disclosure timeline.
Animal Man, Dolphin, Rip Hunter, Dane Dorrance, the Ray. Ring any bells? Probably not, but these characters fought fictitious battles on the pages of DC Comics in the 1940s, '50s, and '60s. As part of the Forgotten Heroes series, they were opposed by the likes of Atom-Master, Enchantress, Ultivac, and other Forgotten Villains.
Cool names aside, the idea of forgotten heroes seems apropos at a time when high-profile cybersecurity incidents continue to rock the headlines and black hats bask in veiled glory. But what about the good guys? What about the white hats, these forgotten heroes? For every cybercriminal looking to make a quick buck exploiting or selling a zero-day vulnerability, there's a white hat reporting the same vulnerabilities directly to the manufacturers. Their goal is to expose dangerous exploits, keep users protected, and perhaps receive a little well-earned glory for themselves along the way. This process is called "responsible disclosure."
Although responsible disclosure has been going on for years, there's no formal industry standard for reporting vulnerabilities. However, most responsible disclosures follow the same basic steps. First, the researcher identifies a security vulnerability and its potential impact. During this step, the researcher documents the location of the vulnerability using screenshots or pieces of code. They may also create a repeatable proof-of-concept attack to help the vendor find and test a resolution.
Next, the researcher creates a vulnerability advisory report that includes a detailed description of the vulnerability, supporting evidence, and a full disclosure timeline. The researcher submits this report to the vendor through the most secure channel available, usually an email encrypted with the vendor's public PGP key. Most vendors reserve a dedicated security@ email alias for advisory submissions, though the exact address differs by organization.
After submitting the advisory, the researcher typically allows the vendor a reasonable amount of time to investigate and fix the vulnerability, per the full disclosure timeline in the advisory. Finally, once a patch is available or the disclosure timeline (including any extensions) has elapsed, the researcher publishes a full disclosure analysis of the vulnerability: a detailed explanation of the flaw, its impact, and the resolution or mitigation steps. For an example, see researcher Jouko Pynnönen's full disclosure analysis of a cross-site scripting vulnerability in Yahoo Mail.
How Much Time?
Security researchers haven't reached a consensus on exactly what "a reasonable amount of time" means when allowing a vendor to fix a vulnerability before full public disclosure. Google recommends 60 days for a fix or public disclosure of critical security vulnerabilities, and an even shorter seven days for critical vulnerabilities under active exploitation. HackerOne, a platform for vulnerability and bug bounty programs, defaults to a 30-day disclosure period, which can be extended to 180 days as a last resort. Other security researchers, myself included, opt for 60 days with the possibility of extensions if a good-faith effort is being made to patch the issue.
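In practice, these policies boil down to simple date arithmetic. The sketch below illustrates how a researcher might track a disclosure deadline; the 60-day default mirrors the policy described above, and the `extension` parameter is an illustrative assumption, not part of any formal standard.

```python
from datetime import date, timedelta

def disclosure_deadline(reported: date, days: int = 60, extension: int = 0) -> date:
    """Return the planned full-disclosure date for an advisory.

    days      -- the baseline disclosure window (60 days here)
    extension -- extra days granted for a good-faith patching effort
    """
    return reported + timedelta(days=days + extension)

# A report filed on 1 March 2024 under a 60-day policy:
print(disclosure_deadline(date(2024, 3, 1)))                 # 2024-04-30
# The same report with a 30-day good-faith extension:
print(disclosure_deadline(date(2024, 3, 1), extension=30))   # 2024-05-30
```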
I believe that full disclosure of security vulnerabilities benefits the industry as a whole and ultimately serves to protect consumers. In the early 2000s, before full disclosure and responsible disclosure were the norm, vendors had incentives to hide and downplay security issues to avoid PR problems instead of working to fix the issues immediately. While vendors attempted to hide the issues, bad guys were exploiting these same vulnerabilities against unprotected consumers and businesses. With full disclosure, even if a patch for the issue is unavailable, consumers have the same knowledge as the attackers and can defend themselves with workarounds and other mitigation techniques. As security expert Bruce Schneier puts it, full disclosure of security vulnerabilities is "a damned good idea."
I've been on both ends of the responsible disclosure process, as a security researcher reporting issues to third-party vendors and as an employee receiving vulnerability reports for my employer's own products. I can comfortably say responsible disclosure is mutually beneficial to all parties involved. Vendors get a chance to resolve security issues they may otherwise have been unaware of, and security researchers can increase public awareness of different attack methods and make a name for themselves by publishing their findings.
My one frustration as a security researcher is that the industry lacks a standard responsible disclosure timeline. We already have a widely accepted system for ranking the severity of vulnerabilities in the form of the Common Vulnerability Scoring System (CVSS). Perhaps it's time to agree on responsible disclosure periods tied to CVSS scores.
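One hypothetical shape such a standard could take is sketched below: map CVSS v3 severity bands to fixed disclosure windows. The severity bands come from the published CVSS v3 ratings; the day counts are illustrative assumptions of mine, not an existing standard.

```python
def disclosure_window_days(cvss_score: float) -> int:
    """Map a CVSS v3 base score to a hypothetical disclosure window.

    The severity bands (Low/Medium/High/Critical) follow the CVSS v3
    specification; the day counts are illustrative, not a real standard.
    """
    if not 0.0 <= cvss_score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if cvss_score >= 9.0:   # Critical (9.0-10.0)
        return 30
    if cvss_score >= 7.0:   # High (7.0-8.9)
        return 60
    if cvss_score >= 4.0:   # Medium (4.0-6.9)
        return 90
    return 120              # Low (0.1-3.9) and None (0.0)

print(disclosure_window_days(9.8))  # 30
```

Under a scheme like this, a researcher and vendor could derive the deadline directly from the advisory's CVSS score instead of negotiating it case by case.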
Even without an industry standard for responsible disclosure timelines, I would call for all technology vendors to fully cooperate with security researchers. Vendors should be allowed a reasonable amount of time to resolve security issues, and white-hat hackers should be supported and recognized for their continued efforts to improve security for consumers. If you're a comic book fan, then you'll know even a vigilante can be a forgotten hero.