Like more than a few others, I experienced the infosec outrage against Mary Ann Davidson, Oracle's Chief Security Officer, before I actually read the now-redacted blog post. After taking the time to read what she actually wrote (still available through Google's web cache), I think there's more discussion to be had than I've seen so far.

First, it seems clear to me that the reaction is as much to the condescending tone of the blog post as to its content. Oracle's CSO manages to talk down to customers and the security research community alike, from the title "No, You Really Can't" to the implication that security researchers are like 'boy bands.' That language and tone generate page views and emotional reactions, but I'm more interested in the substance of what's being said. After reading the post, I found myself wanting to rewrite it in a more professional tone to see what it looked like. I haven't done that, however, because it's a lot of work to produce content that's not really mine. Instead, I'll try to distill the actual content for discussion.
Salient Statements (no judgment yet)
- Customers are worried about breaches.
- Reverse engineering vendor code to find vulnerabilities is not the most effective way to protect your organization.
- The Oracle EULA prohibits reverse engineering.
- Customers violate this provision of the EULA often enough for the CSO to be aware of it.
- Oracle has a “robust” software assurance program that obviates the need for third parties to find vulnerabilities.
- A high percentage of reported issues are false positives or were already discovered internally.
- The reverse engineering clause in the EULA isn’t about keeping security researchers out; it’s about intellectual property.
- Bug bounty programs are not an economically advantageous vendor investment.
There are a few things in here that we can take at face value. Yes, customers (and vendors, I’ll add) are worried about breaches, and it’s a fact that Oracle’s EULA, along with many others, prohibits reverse engineering of code. On to more debatable points...
Good Risk Management
Jennifer Granick (@granick) recently pointed out in her Black Hat keynote that human beings are pretty bad at risk management, noting that we're afraid of sharks but not cows, even though cows kill more people than sharks do. Davidson's point that spending time finding vulnerabilities in Oracle's code isn't the most effective means of securing your organization seems accurate on its face, but it implies a real-world tradeoff: that researchers are reverse engineering code instead of applying other, more effective security measures. I won't declare that the opposite is categorically true, but a one-for-one tradeoff seems dubious. More likely, this reverse engineering is being done by dedicated security researchers paid to do exactly this kind of work, or as an extracurricular activity. That, frankly, raises the question of why the market funds this time at all. We might legitimately ask ourselves as an industry why so much security defect discovery happens outside the vendors themselves.
Vendor Software Assurance and Transparency
There’s a common claim that this research is required to ensure secure software in the market. The counter-argument put forth by Davidson is that Oracle does this work already, and that only Oracle can do it effectively because of its inside knowledge of the code. I actually really like this point. It’s true that the original vendor can perform more effective software assurance than an outsider. The problem is that most simply don’t, and the steady stream of newly published vulnerabilities suggests Oracle is no exception. Customers can actually make a difference here by following Davidson’s advice and asking about software assurance programs when they make a purchase. Including a question about how the vendor ensures the security of its developed code in every RFP you issue will make a material impact. Still, if we assume that Oracle does an exceptional job on software assurance, then the impact of security defects found by reverse engineering should be minimal, right? It’s not, and that is due, in part, to...
Low Quality Defect Reporting
Steve Christey Coley (@SushiDude) gets the credit for sparking this paragraph:
> A hidden story behind recent Oracle news is the avalanche of low-quality vuln reports. Growing pains of an entire discipline?
>
> — Steve Christey Coley (@SushiDude) August 11, 2015
If you’ve ever worked in a tech support or QA role, you are intimately familiar with the impact of low quality defect reporting. “It’s broken” simply doesn’t cut it, and neither does pasting in an error message with no context or reproduction steps. With a customer base the size of Oracle’s, triaging and responding to security defect reports might very well be a significant time sink. In fact, it must be, if the CSO is spending time penning individual letters to customers. Is it appropriate for a vendor to reject a low quality defect report and ask for more details? Doesn’t that apply to security defects as well, or is there a requirement for a higher standard of care?
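For illustration, here’s a minimal sketch of the difference a structured report makes. Everything in it (the product, versions, and endpoint) is hypothetical; the point is the shape: affected version, environment, reproduction steps, observed versus expected behavior, and impact.

```
Product:      ExampleDB Server 12.1.0.2         (hypothetical product)
Environment:  RHEL 7.1 x86_64, default installation
Summary:      SQL injection in the session lookup endpoint
Steps:        1. Authenticate as any low-privilege user
              2. Request GET /session?id=1'--   (note the unescaped quote)
Observed:     Raw database error returned, exposing query structure
Expected:     Input rejected or safely parameterized
Impact:       Arbitrary rows readable under the application's DB account
```

A report like this can be triaged in minutes; “your database is broken” cannot.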
The Value of Bug Bounties
And how does someone learn what a good defect report is anyway? This is, perhaps, a side benefit of the economic model of bug bounties. When someone is looking to get paid, they’re more likely to read and follow the guidelines for submission. It’s worth drawing a distinction between community bug bounty efforts, like Google’s Project Zero and TippingPoint’s Zero Day Initiative, which aren’t specific to one vendor, and a vendor-driven program. Davidson’s argument on economics seems targeted at vendor-driven programs. Her point is that her money is more effectively spent hiring additional internal staff, which is consistent with her claim that only internal staff can be effective at security testing. Still, I can’t help wondering about the economics of off-loading all those low-quality security defect reports weighing down customer support into a ‘paid-to-play’ bug bounty program where you can more effectively enforce standards. Even if that works out to be more economically sound and drives higher customer satisfaction with support, there’s still the pesky problem of prohibited reverse engineering in the EULA.
Protection of Intellectual Property
And this is really the point. The clause in the EULA that inhibits security research is there for an entirely different purpose: to protect intellectual property. Security research isn’t even present in the threat model that drove the inclusion of that clause. Oracle, and many, many other vendors, are more worried about competitors stealing their capabilities than about researchers finding vulnerabilities. That is also where there’s room for improvement. It won’t be an overnight change, but there’s no reason security-conscious vendors can’t move in a direction that supports security research while maintaining the protections for intellectual property.