Busting the Open Source Security Myth
Cloud training expert says too many developers assume that open source software is inherently more secure, but that's not always the case, and the assumption can lead to security issues.
An IT training expert is sounding the alarm on attitudes toward the security of open source software, saying too many developers assume open source is inherently secure, an assumption that could produce major security problems.
Stuart Scott is AWS content lead and trainer at Cloud Academy, which offers IT professionals training, testing and certifications on a wide range of cloud services, including Amazon Web Services. He's not interested in disparaging open source software or even limiting its use, but he does think developers and other IT folks need to be aware of potential security issues.
"There is an assumption that it is okay to grab open source software because it will save me a lot of time developing and it must be okay because it is open source, but it's not," Scott says in an interview. "And a lot of the dependencies of that open source software might have a lot of libraries being pulled in from other sources and when you start having these dependencies on other libraries of code, then you need to think, where did that source code come from? It's kind of a daisy chain effect of vulnerabilities when you look at the source code."
At issue is an ongoing debate about whether open source software is inherently safer because it is constantly being scrutinized by many different people and software developers around the globe, he adds. "There is often the assumption that it is more secure because there are so many more people who can look at the code, look at the software and see if there are any vulnerabilities or bugs, but that assumption is only valid if those people that are looking at that source code have the training and skills needed to be able to do so."
Otherwise, just having more sets of eyes on a piece of code won't make it inherently safer, Scott says.
He argues that an enterprise needs to have a strong security policy in place that pervades the organization -- including developers -- to prevent inherent assumptions from producing inherent security vulnerabilities and threats. That is even more true today, when so many different departments within an organization are involved with applications and software, including their own development.
"This is where it comes into a new culture within an organization where you need to have a security-focused culture," Scott says. "Applications and software have changed significantly over the past few years. A few years ago, maybe a couple of departments within an organization were involved. Now, there needs to be a change, to where a security culture is everyone's responsibility. Every department needs to have an understanding of security and the part that they play, and that includes developers."
That's a point that gets no argument from Linux Foundation Executive Director Jim Zemlin, as he noted in an email response.
"Thousands of professional developers and volunteers contribute improvements and fixes to open source software constantly, allowing release updates as often as every few days and enabling the community to fix vulnerabilities and release fixes faster," Zemlin writes. "At the same time, no software will ever be bug free and all systems need to be updated and maintained regularly, so it is essential to have strong security policies and training in place."
So what do developers need to do to make sure they can take advantage of open source software benefits without incurring security risks?
Scott's advice is largely twofold. First, he says, a developer should look carefully at the origins of any open source code.
"How reliable is that code, who has created that code, where have they come from, what is their background, did they build it with security in mind?" he says "Are there people out there who are trying to use that software as a means to install malicious software on your own systems such as malware, etc.? Without having a full understanding of where that source code came from and who created that, it is difficult to 100% rely on security of code without performing your own tests on it and without going through the source code itself looking for known bugs and vulnerabilities."
Second, even in today's DevOps world, testing code before it goes into production remains a priority, Scott says.
"There is a huge culture around dev-ops at the minute," he says. "That includes assuring that security is built in at the development stage. What developers can do is test this software before putting it into production. They need to test, test, test and perform their own security testing during the development process and they can also use tools and software as well," including software composition analysis, other tools and public lists of vulnerabilities.
— Carol Wilson, Editor-at-Large, Light Reading