Developers Need More Usable Static Code Scanners to Head Off Security Bugs

As companies "shift left," pushing more responsibility for security onto developers, the available tools are falling short, usability researchers say.


While programmers are increasingly being asked to fix security issues during development, common software security scanners — known as static application security testing (SAST) tools — have a variety of usability issues that make them less accessible to developers, according to research presented at the USENIX Symposium on Usable Privacy and Security (SOUPS) on August 11.

The study, conducted by researchers at Lafayette College and Google, found that the tools failed to offer obvious ways to act on scan results or to fix vulnerabilities, provided no recommendations for prioritizing remediation, and struggled to present large numbers of findings. Other issues included inaccurate results and difficulty pinpointing where in the code a defect occurred.

The research is not meant to call out the specific tools studied but to identify common problems in static code scanners that make them less accessible to developers, Justin Smith, assistant professor of computer science at Lafayette College, said during his virtual session at the symposium.

"We are trying to understand the specific usability issues that detract from static analysis tools so we can eventually build easier-to-use tools that will lower the barriers to entry and enable more developers to contribute to security," he said. 

The research comes as developers are increasingly being tasked with taking responsibility for the security of their code, often by getting the results of security analyses earlier, as they write their code. The simplest such tools are linters, named after "lint," an early Unix code-checking utility, which use pattern matching and simple analyses to highlight potential code defects. More extensive SAST tools perform deeper analyses of source code to identify potential security vulnerabilities.
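
As a rough illustration, and not an example drawn from the paper, the following Java snippet contains two patterns that both a simple linter and a SAST tool would typically flag: a hardcoded credential and a SQL query assembled by string concatenation. The class and variable names are invented for this sketch.

    // Illustrative only: invented names, typical flaggable patterns.
    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class UserLookup {
        // A pattern-matching scanner would typically flag a string literal
        // assigned to a field named "password" as a hardcoded credential.
        private static final String DB_PASSWORD = "changeme";

        static ResultSet findUser(Connection conn, String userName)
                throws SQLException {
            Statement stmt = conn.createStatement();
            // Concatenating untrusted input into SQL is the classic pattern
            // SAST tools report as potential SQL injection.
            return stmt.executeQuery(
                    "SELECT * FROM users WHERE name = '" + userName + "'");
        }
    }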

However, irrespective of how the tools perform, their usability is often an afterthought, said Smith, who conducted the research while a PhD student at North Carolina State University. The paper, titled "Why Can't Johnny Fix Vulnerabilities: A Usability Evaluation of Static Analysis Tools for Security," describes a heuristic walkthrough approach to evaluating the tools, as well as a survey of users.

"A lot of the times when people are designing these static analysis tools, their priority is to help them find an issue," he said. "But that mindset leads them to build the scanners that find hundreds of issues and don't make it very easy to actually fix those issues. So, by encouraging people to conduct these inexpensive heuristic walkthroughs or usability evaluations, they can look at their tools from a different perspective."

The researchers focused on four tools: three open source SAST tools and one commercial tool. They selected tools that had significant user bases but differed enough from one another to present a variety of potential issues to study. From an initial pool of 61 tools, they settled on these four because each had a user interface complex enough to surface a range of usability problems.

The open source tools included Find Security Bugs (FSB), a Java code scanner that can find 125 different types of vulnerabilities; RIPS, a PHP application scanner that can detect more than 80; and Flawfinder, a command-line tool for discovering dangerous code in C and C++. The commercial tool was not named in the research because of license agreement requirements.
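
Find Security Bugs, for example, reports JDBC queries built by string concatenation, like the one in the earlier snippet, as potential SQL injection. The usual remediation, sketched here with the same invented names rather than taken from the paper, is a parameterized query:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class SafeUserLookup {
        static ResultSet findUser(Connection conn, String userName)
                throws SQLException {
            // Binding the input as a parameter keeps it out of the SQL
            // text entirely, resolving the pattern the scanners match on.
            PreparedStatement stmt =
                    conn.prepareStatement("SELECT * FROM users WHERE name = ?");
            stmt.setString(1, userName);
            return stmt.executeQuery();
        }
    }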

"We were trying to select different types of user interfaces," Smith said. "We could have chosen four linter tools that all had the same interfaces, but then once we got to the fourth [linter], we would likely not find any new usability issues."

The two most common usability issues in the code scanners were a lack of information about results and next steps, and a failure to provide intuitive interface elements, or "affordances," that convey information simply and efficiently. Affordances are a significant concept in usability design: interface elements that show users what actions they can take without requiring advanced knowledge of the tool or application. A common example is a checkbox that makes clear a feature can be turned on.

Several of the tools lacked good interface elements for managing the list of vulnerability findings, and most offered no simple point-and-click action to apply a recommended fix to the code.

To better serve developers, security tools should better communicate the discovered issues and how to fix the underlying vulnerabilities, and should place such alerts within the code editor so developers do not have to switch between contexts, the researchers stated in their paper. In addition, the tools should add more context, such as using actual variable names, to flag problems for developers more clearly.
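
To make that last recommendation concrete, here is a hypothetical pair of warnings for the findUser example above, invented for illustration and not taken from any of the four tools studied. The first is the kind of generic message the researchers criticize; the second names the actual variable and points toward a fix:

    Generic:    Warning: possible SQL injection in UserLookup.java
    Contextual: Warning: userName flows unsanitized into executeQuery()
                in UserLookup.findUser(); consider a PreparedStatement
                with a bound parameter.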

About the Author

Robert Lemos, Contributing Writer

Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT's Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline Journalism (Online) in 2003 for coverage of the Blaster worm. Crunches numbers on various trends using Python and R. Recent reports include analyses of the shortage in cybersecurity workers and annual vulnerability trends.

