Mad World: The Truth About Bug Bounties
What Oracle CSO Mary Ann Davidson doesn’t get about modern security vulnerability disclosure.
The security industry, still recovering from a week at “hacker summer camp” in Vegas that included BSidesLV, Black Hat, and DEF CON, has a new cause to rally behind. Practitioners are alarmed and outraged by a bombastic blog post (since removed, but still retrievable) from Oracle CSO Mary Ann Davidson, who blasts bug bounties, among other common security practices.
[Editor’s Note: In a statement released today, Edward Screven, Oracle executive VP and chief corporate architect, said the Davidson blog post was removed because it “does not reflect our beliefs or our relationship with our customers.” He added, “The security of our products and services has always been critically important to Oracle. Oracle has a robust program of product security assurance and works with third-party researchers and customers to jointly ensure that applications built with Oracle technology are secure.”]
Among Davidson’s points were some truths, like the fact that companies often waste a lot of time responding to false positives from unverified tool reports, and that proactive security assurance processes, otherwise known as “Security Development Lifecycle” programs, deliver an outsized return on investment. Internal testing should find the majority of vulnerabilities in your code, and if you’re not investing in securing your products early in the development lifecycle, you are just playing whack-a-bug. I don’t think anyone can disagree with those points.
Beyond that, the Oracle blog post is less factually grounded, arguing that Oracle doesn’t need anyone’s help with security and threatening customers and researchers who report bugs, even zero-day vulnerabilities, to the company. Here is where we will agree to disagree.
“I do not need you to analyze the code since we already do that, it’s our job to do that, we are pretty good at it...”
“I want to explain what Oracle’s purpose is in enforcing our license agreement (as it pertains to reverse engineering) and, in a reasonably precise yet hand-wavy way, explain 'where the line is you can’t cross or you will get a strongly-worded letter from us.'”
“We find 87% of security vulnerabilities ourselves, security researchers find about 3% and the rest are found by customers.”
“We will also not provide credit in any advisories we might issue. You can’t really expect us to say ‘thank you for breaking the license agreement.’”
I’m intimately familiar with all these points, as I heard them all, and more, during my run as a security strategist for Microsoft before I joined HackerOne. For years, Microsoft publicly vowed never to pay for vulnerability information, until I showed them how to do so in a manner that was unique to their customer base and their threats, and fully aligned with their goals. I demonstrated that it was possible to bounty smarter, not harder, and to improve the security of their products while also strengthening relations with the hacker community.
No one can handle security alone; defenders need all hands on deck, and hackers are among them. Creating incentives for security research should augment and direct the proactive security assurance efforts that companies invest most heavily in to protect their users. It’s not an either/or investment, but rather an intelligent harmony between the orchestra of a vendor’s internal security efforts and the rebel music of the hacker community. Like the legendary Aerosmith and Run-DMC collaboration that bridged two different musical styles, vendors and hackers can learn to Walk This Way to better security.
Same argument used to apply to pen testing
I remember the early days of being an application penetration tester, first independently, and then at @Stake. We spent a lot of time trying to convince potential clients that they would benefit from having consultants (AKA hackers for hire) like us trying to break their defenses. Some forward-thinking companies got it, and embraced us as valuable members of their extended security team, helping them improve their security and protect their customers. Others wanted nothing to do with us, mistook us for extortionists, and were convinced they could do, and were already doing, a better job themselves. Sound familiar? More than 15 years later, external penetration testing is an accepted, everyday security practice, even incorporated into many industry regulatory compliance requirements.
Both varieties of external testing — bug bounties and professional penetration testing — are needed, and pen testing should not by any means go extinct just because there are potentially thousands of independent researchers at a company’s disposal. Instead, penetration testing and its newly evolved variety known as bug bounties should cross-breed and make stronger security offspring.
People who tell you that bug bounties are a cost-effective replacement for professional penetration testing are missing the point, which is why the company I work for partners with penetration testing firms to handle incoming issue triage for customers who need the help. Not only is the result higher quality, but the penetration testing partner also works with the customer to hone the next deep-dive penetration test. Taking the best of the bug bounty data and feeding it into ongoing internal and external testing, and ultimately into improving future products, is the best of all worlds.
Signal-to-noise can be improved & automated
Some vendors, like Oracle, worry that sorting the top reports from the spam and non-issues will distract them from the real security work at hand. This was one reason we built the HackerOne platform with a researcher reputation system: so security response teams can prioritize the reports most likely to be real, the ones that come from the highest-quality security researchers.
The signal rate of valid, unique vulnerability reports for public bug bounty programs on our platform is 23%, more than triple that of any other major independent bounty program (Google’s signal rate is 7%). For invitation-only programs on our platform, where only top-tier hackers are invited to participate, the signal rate jumps to 44%. We have a whole blog post on increasing signal-to-noise, but the point is that incoming vulnerability reports can be managed with platform tools, as well as expert partners, to reduce noise and distractions.
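To make the idea concrete, here is a minimal sketch of reputation-weighted triage. It is illustrative only: the report fields and the simple reputation-times-severity weighting are assumptions invented for this example, not how our platform actually ranks reports.

    # Illustrative sketch only: field names and the weighting formula
    # are assumptions for this example, not a real platform's algorithm.
    def triage(reports):
        """Sort incoming reports so submissions from high-reputation
        researchers with severe findings surface first."""
        return sorted(
            reports,
            key=lambda r: r["reputation"] * r["severity"],
            reverse=True,
        )

    incoming = [
        {"title": "Clickjacking on docs page", "researcher": "bob",
         "reputation": 12, "severity": 3.4},
        {"title": "SQLi in login form", "researcher": "alice",
         "reputation": 95, "severity": 9.1},
        {"title": "Non-issue: missing banner", "researcher": "mallory",
         "reputation": 1, "severity": 0.5},
    ]

    for report in triage(incoming):
        print(report["researcher"], "-", report["title"])

Even a crude weighting like this pushes probable noise to the bottom of the queue, which is the essence of how a reputation system protects a response team’s time.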
About those legal threats
I’ve made my position on protecting security research clear in a few different media, some serious and some humorous. Prosecuting security researchers who try to report vulnerabilities to you is akin to calling the cops on someone for being a peeping Tom when they were just trying to tell you that your fly is down. You’re not doing yourself or your customers any favors, and in the future you’ll likely wander around with your fly down and nobody daring to tell you. Meanwhile, the real criminals are seeing the same weaknesses, and are likely exploiting them.
Saying that Oracle won’t thank security researchers who break the EULA and reverse engineer its code in order to find and report vulnerabilities seems counterproductive for security, an extreme position of hostility that even other conservative industry giants wouldn’t take. It’s certainly Oracle’s right to thank whomever it chooses under whatever circumstances it chooses, as I’ve written about in the past when working on the ISO standards for vulnerability disclosure and vulnerability handling. However, a hostile legal stance focused solely on violation of the EULA seems openly designed to discourage even privately coordinated vulnerability disclosure if the method of finding the bug was not to Oracle’s liking. Gratitude is up to the vendor, of course. But customers will be left exposed to vulnerabilities that would have otherwise been reported privately and fixed.
Customers should definitely expect security assurance programs from their vendors; Oracle is spot on about that. But customers should also expect their vendors to encourage reporting of security issues from those who prove they can find them, instead of threatening people who find valid vulnerabilities with legal action.
All software contains bugs
From the infamous blog: “No tool finds everything. No two tools find everything. We don’t claim to find everything.”
I couldn’t agree more. This is why modern security needs to combine the best approaches into a holistic, comprehensive, and mature security plan. There is no one-size-fits-all bounty, and no security program should start with bug bounties as its primary investment in security. There is always room to strengthen proactive internal security assurance by opening the doors to penetration testing by firms and independent security researchers alike.
Probably the most humorous quote from the retracted blog was this: “Bug bounties are the new boy band.” Early media coverage of The Beatles (a band named after a bug, fittingly enough) dismissed them as a passing fad, too. But their contribution to the musical lexicon and their genius-level influence on music are felt throughout the world over half a century later. Given time to develop, integrate with, and enhance existing security practices, bug bounties will be remembered as the way modern vendors learned to get by with a little help from their friends.