Cloudbleed Lessons: What If There's No Lesson?

'There's nothing to be done' isn't an encouraging lesson from a security disaster, but that may be the biggest takeaway from Cloudbleed.


Whenever a serious computer or network security issue becomes public, one of the first questions IT professionals ask is, "What lessons can we learn?" It's a polite way of phrasing the real question: "How do I keep my company out of the news for something like this?" But what's a professional to do when the best answer might well be that absolutely nothing any reasonable company could have done would have stopped the problem? That's a very different question.

2017 has seen a spate of news generated by errant keystrokes, from the "Cloudbleed" vulnerability that exposed millions of pieces of personally identifiable information to the AWS outage that brought large portions of the Internet to its knees. Finding the single keystroke that went awry makes the classic "needle in a haystack" analogy insufficient. Finding a needle in a thousand-acre field of haystacks might be more like it -- and that may simply be beyond any reasonable effort.


Bill Curtis is someone who seems well suited to answer questions involving code and software quality -- especially software quality. A founder of the Consortium for IT Software Quality (CISQ), Curtis led the project that created the Capability Maturity Model (CMM) for both software and people. A long-time university professor, Curtis is now senior vice president and chief scientist at CAST and remains a member of the CISQ board of directors.

In a telephone interview with Light Reading, Curtis was reluctant to criticize the software developers at Cloudflare for the incident that became known as Cloudbleed. "There are things that are humanly possible in terms of testing and detection, and then there are things that are just so far out there, they can happen and it's a tragedy when they do, but it's hard to say that they were negligent in their work because it really would have taken some bizarre thinking into the conditions that could occur," he said.

Curtis said that part of what made the vulnerability so hard to find is that it did not, in all likelihood, involve a programming mistake. Instead, it was the result of using a parser generated with Ragel (a tool not developed in-house at Cloudflare) under a very specific set of circumstances. Within those circumstances, a buffer overflow could occur, and personal information could be released.
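Cloudflare's public post-mortem traced the overflow to an end-of-buffer check in the generated parser code: the check used an equality comparison, so a pointer that advanced more than one byte at a time could step past the end of the buffer without the check ever firing. The short C sketch below illustrates that failure mode in miniature; it is a hand-written illustration of the pattern, not Cloudflare's or Ragel's actual code, and the '<' skip rule is invented for the example.

    #include <stdio.h>

    /* Minimal sketch of the Cloudbleed failure mode -- not Cloudflare's
     * actual parser. Per Cloudflare's post-mortem, the generated code
     * tested for the end of the buffer with an equality check; a state
     * transition that advances the pointer by more than one byte can
     * step past 'pe', and the check then never fires. */

    /* Buggy: 'p != pe' misses the end if p jumps from pe - 1 to pe + 1. */
    static const char *scan_buggy(const char *p, const char *pe) {
        while (p != pe)
            p += (*p == '<') ? 2 : 1;  /* multi-byte skip can overshoot pe */
        return p;
    }

    /* Fixed: an ordering comparison stops even when p overshoots. */
    static const char *scan_fixed(const char *p, const char *pe) {
        while (p < pe)
            p += (*p == '<') ? 2 : 1;
        return p;
    }

    int main(void) {
        char buf[] = "hello<";             /* '<' is the final input byte */
        const char *pe = buf + sizeof buf - 1;

        (void)scan_buggy;  /* deliberately not called: it would read past
                            * the end of 'buf' and keep going -- leaked
                            * adjacent memory, in the Cloudbleed case */
        printf("fixed scan stopped %td byte(s) past pe\n",
               scan_fixed(buf, pe) - pe);
        return 0;
    }

Cloudflare's write-up also notes that the faulty check had sat in production for years and only began leaking when surrounding buffering behavior changed -- exactly the kind of context dependence Curtis describes next.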

The buffer overflow was, according to Curtis, part of what made early detection of a problem so difficult. "Here's the thing about buffer overflows: we don't really do a lot of analysis on buffer overflows, and the reason is that there's a zillion false positives -- it just creates havoc," Curtis said. "Some of our competitors go after buffer overflows and they get flooded with false positives."

"For most of these buffer overflows it's really the context that makes that code cause an overflow. And you've got to understand the context, which is not easy. That's a whole 'nother level of analysis and if you read the piece that Cloudflare wrote they listed all the conditions that had to occur," Curtis explained.

"That's a nightmare to go find through static analysis, or even if you're a smart guy," Curtis said, pointing out that there is no reasonable testing regimen that can be expected to find all the issues in complex, modern software systems. "That's the problem we have in software; the incredible complexity that we've gotten into now and the difficulty of detecting these [issues]," Curtis said.

He pointed to a software quality regimen that found an extraordinary number of issues but went beyond the effort most organizations can afford -- the detection and testing regimen for the avionics systems on the Space Shuttle. "These guys were at a point where the defects they were detecting were all over ten years old in the code. They weren't generating new defects," Curtis said. "And their analysis, detection, and testing were so thorough that, in fact, two-thirds of all their effort was in testing."

The professionals in the software development group on Space Shuttle avionics spent much of their time dreaming up bizarre scenarios involving anomalies that no one had ever seen but that were not impossible under the laws of physics. Commercial developers would have to undertake the same sort of imagination exercise to find interactions like the one that led to Cloudbleed. "You'd really have to be thinking, 'What really isn't probably going to happen but possibly could?' If all these different conditions occurred, you'd say that there were all these bizarre little things that had to happen in order for a buffer overflow to occur," Curtis explained.

Curtis thinks that the best prospect for avoiding future Cloudbleed-like problems may lie with the computers themselves. "For these things that are context dependent and very tricky, I'm hoping that we can apply machine learning techniques, that maybe the machine learning can go out and begin to understand some of these bizarre contexts and find some of the things that might have been innocent but go on to create some serious problems," he said.

Until machine learning becomes the norm, Curtis believes that rapid response to revealed issues is the practical model for the future, especially since there's no blame to be placed on the development program at Cloudflare. "If this could have occurred frequently then, yeah, they screwed up. But if they couldn't have anticipated the complex set of circumstances required for it to occur then they weren't negligent," he said. "You know, we get a lot of this in complex systems, where people just couldn't have imagined all the interactions that led to the problem. And that's something we're going to live with more and more as these systems get more complex and we have different pieces coming from different vendors."

— Curtis Franklin, Security Editor, Light Reading


About the Author

Curtis Franklin, Principal Analyst, Omdia

Curtis Franklin Jr. is Principal Analyst at Omdia, focusing on enterprise security management. Previously, he was senior editor of Dark Reading, editor of Light Reading's Security Now, and executive editor, technology, at InformationWeek, where he was also executive producer of InformationWeek's online radio and podcast episodes.

Curtis has been writing about technologies and products in computing and networking since the early 1980s. He has been on staff at, and contributed to, technology-industry publications including BYTE, ComputerWorld, CEO, Enterprise Efficiency, ChannelWeb, Network Computing, InfoWorld, PCWorld, Dark Reading, and ITWorld.com on subjects ranging from mobile enterprise computing to enterprise security and wireless networking.

Curtis is the author of thousands of articles, the co-author of five books, and has been a frequent speaker at computer and networking industry conferences across North America and Europe. His most recent books, Cloud Computing: Technologies and Strategies of the Ubiquitous Data Center, and Securing the Cloud: Security Strategies for the Ubiquitous Data Center, with co-author Brian Chee, are published by Taylor and Francis.

When he's not writing, Curtis is a painter, photographer, cook, and multi-instrumentalist musician. He is active in running, amateur radio (KG4GWA), the MakerFX maker space in Orlando, FL, and is a certified Florida Master Naturalist.

