Hacked Robots Present a New Insider Threat
Robots and their control software are rife with critical and painfully obvious security flaws that make them easily hackable, new research shows.
March 1, 2017
Popular robotics products contain glaring and serious security vulnerabilities that could easily be exploited to take control of a robot's movements and operations for spying or causing physical damage, and even pose a danger to humans.
Call it the new insider threat: IOActive researchers Cesar Cerrudo and Lucas Apa have discovered some 50 flaws in popular robots and robot-control software used in businesses, industrial sites, and homes. The flaws could allow a hacker to remotely manipulate a robot moving about the office, plant floor, or home; infiltrate other networks there; spy on and steal information; and even wreak physical destruction.
Robots are getting "smarter," in some cases gaining human-like capabilities such as facial recognition, which is helping propel their popularity and usability. IDC estimates that worldwide spending on robotics will reach $188 billion in 2020. Robots today are found mostly in manufacturing, but the consumer and healthcare sectors are up-and-coming in their robotics adoption, according to IDC.
"A robot being inside [an organization] is actually a reality" today, notes IOActive's Apa, pointing to the rise of use in smart robotics technology. "And it's very difficult to distinguish between a robot that's been hacked" and one that's not, he says.
A hacked robot could silently go rogue and be used to attack other networks within the office, or even other robots, according to the researchers, who say robots could well be the next-generation insider threat.
Apa, a senior security consultant with IOActive, and Cerrudo, IOActive's CTO, studied robots and robot-control software from SoftBank Robotics, UBTECH Robotics, ROBOTIS, Universal Robots, Rethink Robotics, and Asratec Corp. in their new research. The researchers say they wanted to drill down on the security issues now, before robots become mainstream.
The robots and their control software were rife with some of the same security flaws common in notoriously insecure Internet of Things devices: insecure communications, such as cleartext or weakly encrypted links between the robot and the components that deliver its commands and software updates; a lack of authentication (no credentials required to access a robot's services, for example); and a lack of authorization measures, any of which could leave a robot at the mercy of an attacker.
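To make that class of flaw concrete, here is a minimal, hypothetical sketch in Python of what an unauthenticated, cleartext command channel looks like in practice; the address, port, and command format are invented for illustration and do not correspond to any vendor's actual protocol.

```python
# Hypothetical example only: a robot that accepts movement commands over a
# plaintext TCP port with no credentials. Anyone on the same network segment
# could send commands, and a passive observer could read every byte in transit.
import socket

ROBOT_ADDR = ("192.168.1.50", 9090)  # invented address/port for illustration

def send_command(command: str) -> str:
    """Send a cleartext command and return the robot's reply."""
    with socket.create_connection(ROBOT_ADDR, timeout=5) as sock:
        sock.sendall((command + "\n").encode("utf-8"))  # no TLS, no auth
        return sock.recv(1024).decode("utf-8", errors="replace")

if __name__ == "__main__":
    # An attacker needs nothing more than network reachability.
    print(send_command("MOVE arm 10 0 0"))
```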
In addition, they found weak cryptography in the devices and their software that leaves sensitive information stored in the robots at risk, such as passwords, crypto keys, and vendor service credentials. Some of the devices also ship with weak default configurations that don't properly lock down the robots and their operations, and Cerrudo and Apa found that some of them couldn't even be retrofitted with new passwords, or restored to a clean state once they had been hacked.
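Hardcoded secrets of the kind the researchers describe are typically spotted by scanning an extracted firmware image for credential-like strings. The following Python sketch is illustrative only; the filename and patterns are assumptions, not details from the IOActive report.

```python
# Illustrative sketch: scan an extracted firmware image for strings that look
# like passwords, API keys, or embedded private keys.
import re

PATTERNS = [
    rb"password\s*=\s*\S+",
    rb"api[_-]?key\s*=\s*\S+",
    rb"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----",
]

def scan_firmware(path: str) -> None:
    with open(path, "rb") as fh:
        blob = fh.read()
    for pattern in PATTERNS:
        for match in re.finditer(pattern, blob):
            # Print the offset and a short excerpt of each suspicious string.
            print(f"offset {match.start():#x}: {match.group(0)[:60]!r}")

if __name__ == "__main__":
    scan_firmware("robot_firmware.bin")  # hypothetical extracted image
```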
"It can be hard to restore a robot to its original [uncompromised] state," Apa says. "With some vendors' products we analyzed, it was impossible," so the customer is stuck with a hacked robotic system, he says.
It turns out robots also suffer from some of the same open-source framework and library vulnerabilities as other software systems. Many robots run on the Robot Operating System (ROS), which uses cleartext communication and has weak authentication and authorization, according to IOActive. "In the robotics community, it seems common to share software frameworks, libraries, operating systems, etc., for robot development and programming. This isn't bad if the software is secure; unfortunately, this isn't the case here," the researchers wrote in their report published today.
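As a rough illustration of why that matters, the following sketch (assuming a ROS 1 installation with the rospy and geometry_msgs packages) publishes velocity commands to a topic; by default ROS 1 requires no credentials, so any process that can reach the ROS master over the network can do the same. The /cmd_vel topic name is a common convention, not a detail taken from the audited products.

```python
# Sketch of unauthenticated ROS 1 topic publishing. By default there is no
# authentication or encryption between nodes and the ROS master, so any host
# that can reach the master can publish commands like these.
import rospy
from geometry_msgs.msg import Twist

def drive_forward():
    rospy.init_node("unauthenticated_commander", anonymous=True)
    pub = rospy.Publisher("/cmd_vel", Twist, queue_size=10)
    rate = rospy.Rate(10)  # publish at 10 Hz
    msg = Twist()
    msg.linear.x = 0.5  # drive forward at 0.5 m/s
    while not rospy.is_shutdown():
        pub.publish(msg)  # no credentials required anywhere in this exchange
        rate.sleep()

if __name__ == "__main__":
    drive_forward()
```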
Don Bailey, founder and CEO of Lab Mouse Security, says robot vulnerabilities are another example of the flaws found in embedded IoT devices. "They're all embedded systems. You're going to keep seeing the same threats, over and over," says Bailey, an IoT security expert.
The bigger risk with today's robotics-type devices, he says, is data leakage and privacy breaches. Smart devices in the vein of Amazon Alexa and Apple Siri are more likely to be used for espionage, he says. "As they [robots] grow into more substantial technologies, we'll see more [physical] danger to humans," Bailey says.
A serious concern today is the provisioning and sunsetting of robotics products, he says. "How a robot associates itself with its owner" and what happens when that owner hands it over to another owner or user, pose security and privacy risks, he says. It's unclear how a new "owner" could be protected from the previous one still having access to the robot, for example.
IOActive's Apa and Cerrudo aren't releasing vulnerability details at this time, as they await responses from the vendors. So far, they've only heard back from four of them. "Only two said they are going to fix" the flaws, Cerrudo says. The other two indicated they understood they should "do something about it," he says.
They weren't able to test all of the robots physically, due to the expense of some of the devices as well as global shipping restrictions, so they mainly analyzed robot software, including mobile apps, operating systems, and firmware images. Those are core elements of robotic systems, they say, so they could get a good read on overall security from that analysis as well as from the physical robots they did have in hand.
Interestingly, the researchers say they easily found the flaws without drilling down too deeply in their security audit of the products, since their aim was to get a high-level sense of robot security today. They aren't finished, though, and plan to do some deeper dives, they say.
"We consider many of the vulnerabilities we found simple to exploit," Apa says. "Anyone with a phone and app can remotely control the robot [via these bugs]. They don't need to develop an exploit."
Among the products with flaws were SoftBank Robotics' NAO and Pepper robots; UBTECH Robotics' Alpha 1S and Alpha 2 robots; ROBOTIS' OP2 and THORMANG3 robots; Universal Robots' UR3, UR5, and UR10 robots; Rethink Robotics' Baxter and Sawyer robots; and Asratec Corp.'s robots using V-Sido.
In one especially creepy scenario, the researchers say robots that use facial recognition to work alongside humans could be hacked and used to manipulate their human co-workers. Robots often come with microphones and cameras, so an attacker could employ the robot as a spy to gather information, for example. "If an attacker can control this, they can use the built-in features to get information about the faces the robot recognizes," Apa says.
IOActive isn't the first to explore robot security: Researchers at the University of Washington in 2015 hacked a surgical robot to demonstrate how a bad guy could hijack and take control of a robot during surgery.
For now, business and home robotics users are basically at the mercy of their insecure robots, the researchers say. What can they do to protect themselves? "Pray," Cerrudo quips. "If I was a robot user, I would unplug it when I'm away at night," for example, he says.