Tech Insight: The Buzz Around Fuzzing
Fuzzing tools can help identify vulnerabilities before the bad guys do
Security researchers have long sworn by it, and now many enterprises, developers, and service providers are turning to an increasingly popular method of identifying security vulnerabilities: fuzzing.
Fuzzing -- also known as fuzz testing or fault injection -- is a vulnerability testing process in which an application's inputs, such as a username field, a file, or an HTML form field, are fed random (or semi-random) data to identify bugs. Fuzzing is catching on with the corporate crowd as an automated, faster way to find holes in software and plug them before attackers do. The target can be anything from Microsoft Word or Mozilla Firefox to a VoIP phone or network router.
In its most basic form, fuzzing can be as simple as running a command line program over and over with varying amounts of random characters, in the hope that the program will crash. A crash indicates a bug, and that bug could turn out to be exploitable -- allowing privilege escalation or arbitrary code execution by a remote attacker.
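In practical terms, that most basic form boils down to a loop like the sketch below -- a minimal illustration rather than any vendor's tool, with the target binary name and the crash-via-signal check as assumptions:

```python
import random
import string
import subprocess

def naive_fuzz(target, iterations=1000, max_len=4096):
    """Repeatedly run a command line program with random input, watching
    for a crash (on Unix-like systems, a negative return code means the
    process was killed by a signal such as SIGSEGV)."""
    for i in range(iterations):
        length = random.randint(1, max_len)
        payload = "".join(random.choice(string.printable) for _ in range(length))
        try:
            result = subprocess.run([target, payload], capture_output=True, timeout=5)
        except subprocess.TimeoutExpired:
            print(f"[{i}] hang with input of length {length}")
            continue
        if result.returncode < 0:
            print(f"[{i}] crash (signal {-result.returncode}), saving test case")
            with open(f"crash_{i}.txt", "w") as f:
                f.write(payload)

# naive_fuzz("./parse_config")  # hypothetical target binary
```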
Today, fuzzing's usefulness is well established: companies such as Microsoft and Juniper use fuzzing tools, many commercial and open source fuzzers are available, and a couple of books have been written on the subject.
Developers now deploy fuzzing as part of their software development lifecycle in order to find bugs before their software gets shipped. Companies burned by exploitable vulnerabilities in commercial software now include fuzzing in the procurement process to ensure they’re buying a robust product. And service providers use it to test network equipment and embedded devices before deployment.
For fuzzing tools to be effective in these environments, they need several key characteristics. First, they should understand the inner workings of specific applications and support a large number of protocols and file formats. Second, every test they perform should be recorded in a way that lets it be easily reproduced when a problem is found. Finally, they need to detect when an error occurs in the application or device and correlate that error with the particular test that caused it.
Early on, fuzzing involved sending random data to an application's inputs with no regard for what the application would normally expect -- having MS Paint open a file of random data with a .jpg extension, for example, without even the standard JPEG header and footer. As fuzzing evolved, model-based fuzzing tools were developed that understand how different protocols and file formats are structured, so they can find flaws deeper within the application.
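The JPEG example illustrates the difference. A purely random fuzzer writes noise to a .jpg file, while even a crude format-aware mutator keeps the structural markers intact and perturbs only the bytes in between. The sketch below shows just that kind of illustration (the sample and output file names are placeholders), not a full model-based fuzzer:

```python
import random

SOI = b"\xff\xd8"  # JPEG start-of-image marker
EOI = b"\xff\xd9"  # JPEG end-of-image marker

def mutate_jpeg(sample_path, out_path, flips=32):
    """Take a valid JPEG and flip a handful of bytes in its body, leaving
    the start- and end-of-image markers untouched so the parser gets past
    its first sanity checks."""
    with open(sample_path, "rb") as f:
        data = bytearray(f.read())
    assert data[:2] == SOI and data[-2:] == EOI, "not a JPEG sample"
    for _ in range(flips):
        pos = random.randrange(2, len(data) - 2)  # skip the markers
        data[pos] = random.randrange(256)
    with open(out_path, "wb") as f:
        f.write(data)

# mutate_jpeg("sample.jpg", "fuzzed.jpg")  # then open fuzzed.jpg in the target viewer
```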
Commercial fuzzing solutions from Codenomicon and Mu Security are model-based, with support for a wide range of network protocols and file formats. Companies need to check with each vendor to make sure the support is there for what they want to test: commercial solutions tend to focus on standards-based protocols and highly popular file formats. So developers using proprietary protocols are unlikely to find the commercial vendors rushing to create custom fuzzing tests for them, and are better off using a framework like SPIKE, Peach, or Sulley, which lets them build fuzzing tools specific to their environment.
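Frameworks of that sort generally let a developer describe a message as a mix of fixed framing bytes and fuzzable fields. The sketch below is not the SPIKE, Peach, or Sulley API -- just an illustration of the block-based idea against a made-up login message, with the target address left as a placeholder:

```python
import socket

# A toy "model" of a proprietary login message: static framing bytes plus
# a set of values to substitute into the single fuzzable field.
STATIC_PREFIX = b"LOGIN "
STATIC_SUFFIX = b"\r\n"
FUZZ_STRINGS = [b"A" * n for n in (64, 256, 1024, 65535)] + [b"%s%n%x", b"\x00" * 128]

def send_cases(host, port):
    """Send each modeled test case to the target service, one connection per case."""
    for i, value in enumerate(FUZZ_STRINGS):
        msg = STATIC_PREFIX + value + STATIC_SUFFIX
        with socket.create_connection((host, port), timeout=5) as s:
            s.sendall(msg)
        print(f"test {i}: sent {len(msg)} bytes")

# send_cases("192.0.2.10", 9000)  # placeholder address for the device under test
```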
Repeatability is another key element of a fuzzing solution: it must be able to document and replay all previously run test cases in order to determine which test caused an application to crash or a network device to stop responding.
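In a home-grown fuzzer, one simple way to get that repeatability is to derive every test case from a logged seed, so any individual case can be regenerated and replayed on demand -- a minimal, hypothetical sketch:

```python
import json
import random

LOG_FILE = "fuzz_log.jsonl"

def generate_case(seed, max_len=2048):
    """Deterministically regenerate a test payload from its seed."""
    rng = random.Random(seed)
    return bytes(rng.randrange(256) for _ in range(rng.randint(1, max_len)))

def log_case(case_id, seed):
    """Append enough information to replay this exact test later."""
    with open(LOG_FILE, "a") as f:
        f.write(json.dumps({"id": case_id, "seed": seed}) + "\n")

def replay(case_id):
    """Rebuild the payload for a previously logged test case."""
    with open(LOG_FILE) as f:
        for line in f:
            entry = json.loads(line)
            if entry["id"] == case_id:
                return generate_case(entry["seed"])
    raise KeyError(f"no logged test case {case_id}")
```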
Mu Security’s Mu-4000 Security Analyzer, for instance, records all network traffic in addition to keeping track of each test run, says Thomas Maufer, Mu Security’s director of technical marketing. When a flaw is found, analysts can examine the network capture using standard PCAP tools like Wireshark. The Mu-4000 can also create a Linux executable that reproduces the test cases that found the vulnerability. That helps the vendor or in-house developer with any necessary remediation.
Automated error detection is another critical feature, and depending on what is being tested, it works in different ways. When a vulnerability is encountered, it may crash an application, make a network device unreachable, or cause the target to respond to new requests with noticeable latency. Fuzzing tools must be capable of detecting when the target behaves abnormally and of correlating that behavior with the test case that triggered it. Some solutions, such as the Mu-4000, can even restart the application or reset the device and then run the test again to see whether the error was a fluke or is reproducible.
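For a network target, the simplest form of that detection is a liveness probe between test cases, followed by a retry to confirm the failure is reproducible. The sketch below assumes a plain TCP probe and a fixed recovery wait, purely for illustration:

```python
import socket
import time

def is_alive(host, port, timeout=3):
    """Probe the target with a plain TCP connect; failure suggests the
    previous test case knocked it over or left it unresponsive."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def check_after_test(host, port, test_id, rerun, retries=1):
    """Correlate an outage with the test that preceded it, then rerun the
    same case after a recovery wait to see whether the failure is
    reproducible or a fluke."""
    if is_alive(host, port):
        return
    print(f"target down after test {test_id}")
    for _ in range(retries):
        time.sleep(30)                 # allow time for a watchdog reboot or reset
        if not is_alive(host, port):
            continue                   # still down; wait again
        rerun(test_id)                 # caller-supplied replay of the same case
        verdict = "reproducible" if not is_alive(host, port) else "not reproduced"
        print(f"test {test_id}: {verdict}")
        return
```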
“If you don't have a good way of detecting when a fault occurs, you're wasting your time,” says Jared DeMott, security researcher and creator of the GPF fuzzing tool. “Debuggers, time stamped logs, and network sniffing are all good sources of information for error detection.”
Companies interested in fuzzing should take a look at a blog post from Scott Lambert, a member of Microsoft's Security Engineering Tools Team, for Microsoft's fuzzing process and the different tasks it associates with each stage of testing.
Additionally, a new book from Jared DeMott, Ari Takanen (founder and CTO of Codenomicon), and Charlie Miller (researcher at Independent Security Evaluators) is due out in May under the title “Fuzzing for Software Security Testing and Quality Assurance.”