Scores of Biometrics Bugs Emerge, Highlighting Authentication Risks
Face scans stored like passwords will inevitably be compromised, just as passwords are. But there's a crucial difference between the two that organizations can fall back on when their device manufacturers fail them.
June 12, 2024
Two dozen vulnerabilities in a biometric terminal used in critical facilities worldwide could allow hackers to gain unauthorized access, manipulate the device, deploy malware, and steal biometric data. And yet, exactly how damaging this could be for organizations is up for debate.
Biometric security is more popular today than ever, with widespread adoption in the public sector — law enforcement, national ID systems, etc. — as well as in commercial industries like travel and personal computing. In Japan, subway riders can "pay by face," and Singapore's immigration system relies on face scans and thumbprints to admit travelers into the country. The fact that even burger joints are experimenting with face scans suggests just how mainstream the technology has become.
In short order, though, hackers have found their way around, and sometimes inside, these purportedly secure systems.
Recently, a Kaspersky researcher tore open terminals sold by the Chinese manufacturer ZKTeco. These white-label devices are used to guard corporate and critical premises worldwide, using face scans and QR codes. The research yielded two dozen garden-variety bugs across a number of categories: SQL injections, improper verification of user input, and the like.
The risks to physical security are significant, but experts point out that a biometric data leak isn't necessarily as severe as a leak of other forms of personal data. Anyone worried about their face being stolen need not panic just yet.
Vulnerabilities in Biometric Terminals
Exploiting a ZKTeco terminal could look like any other cyberattack, or it could involve rather inventive physical compromises.
On that first end of the spectrum are bugs like CVE-2023-3940 and CVE-2023-3942 — a path traversal and SQL injection flaw, respectively — which allow for viewing and extracting files, including users' biometric data and password hashes. Then there are CVE-2023-3939 and CVE-2023-3943, which allow for privileged command execution.
"It was quite astonishing to find a substantial number of SQL-injection vulnerabilities in the binary protocol used for transmitting control commands to the device," Georgy Kiguradze, senior application security specialist at Kaspersky, says. "Also, similar vulnerabilities were discovered in the QR code reader embedded within the device’s camera — a location where one would not typically expect to find such a vulnerability, as it is generally associated with remote attacks."
He's referring to CVE-2023-3938, in which an attacker injects malicious data into a QR code to perform a SQL injection. When the terminal reads the code, it misinterprets it as belonging to the most recently authorized legitimate user. In practice, then, an on-site attacker could trick a terminal into granting them access to an otherwise restricted area. A modified version of this exploit, packed with extra malicious data, could also cause an overflow and trigger a restart of the device.
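To illustrate the underlying class of flaw — without reference to ZKTeco's actual firmware — here is a minimal Python sketch of how a value pulled from a scanned QR code can subvert an authorization lookup when it's spliced into SQL as text, and how a parameterized query closes the hole. The table, payload, and function names are invented for the example.

```python
import sqlite3

# Illustrative sketch only -- not ZKTeco's code. Assumes a terminal that looks
# up the user ID embedded in a scanned QR code against its local database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id TEXT, name TEXT, authorized INTEGER)")
conn.execute("INSERT INTO users VALUES ('1001', 'Alice', 1)")

def lookup_vulnerable(qr_payload: str):
    # VULNERABLE: the QR payload is trusted and concatenated into the query text.
    query = f"SELECT name FROM users WHERE id = '{qr_payload}' AND authorized = 1"
    return conn.execute(query).fetchall()

def lookup_safe(qr_payload: str):
    # SAFER: the payload is passed as a bound parameter, never as SQL text.
    query = "SELECT name FROM users WHERE id = ? AND authorized = 1"
    return conn.execute(query, (qr_payload,)).fetchall()

# A QR code carrying a crafted payload matches a legitimate user even though
# the attacker's ID is not in the database.
malicious = "x' OR '1'='1"
print(lookup_vulnerable(malicious))  # [('Alice',)] -- access granted
print(lookup_safe(malicious))        # [] -- no match, access denied
```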
Kiguradze also found a way to subvert the facial recognition itself. With a bug like CVE-2023-3941 — an issue with verification of user input — an intruder can remotely access and alter the machine's biometric database. At that point, they could upload their own face to the system alongside legitimate entries, effectively granting themselves physical access.
It's unclear yet whether ZKTeco has patched any of these vulnerabilities. Dark Reading has reached out to the manufacturer for more information.
Securing Biometric Systems
Biometrics are generally regarded as a step above typical authentication mechanisms — that extra, James Bond level of security reserved for the most sensitive devices and the most serious environments.
ZKTeco terminals, for example, are deployed around the globe at nuclear and chemical plants, hospitals, and the like. They guard server rooms, executive suites, and sensitive equipment. Vulnerabilities such as those described above might be ill-suited to financially motivated cybercriminals, but devilishly useful to an insider or advanced nation-state threat actor intent on stealing data or even manipulating safety-critical processes.
The critical nature of the environments in which these systems are so often deployed necessitates that organizations go above and beyond to ensure their integrity. And that job takes much more than just patching newly discovered vulnerabilities.
"First, isolate a biometric reader on a separate network segment to limit potential attack vectors," Kiguradze recommends. Then, "implement robust administrator passwords and replace any default credentials. In general, it is advisable to conduct thorough audits of the device’s security settings and change any default configurations, as they are usually easier to exploit in a cyberattack."
"There have been recent security breaches — you've probably read about them," acknowledges Rohan Ramesh, director of product marketing at Entrust. But in general, he says, there are ways to protect databases with hardware security modules and other advanced encryption technologies.
Alternatively, organizations unsure about biometrics could focus on scaling them back where possible, or on ensuring that they aren't the only protection in place. The trick is making sure those additional safeguards are invisible to the user, given that part of the appeal of biometrics is their frictionlessness.
"If I want to reset my multifactor authentication (MFA), or add a user to the system — if I want to change a server that hosts personally identifiable information (PII) or other critical data, or if I'm doing a banking transaction — I should need to go through extra verification through biometrics. You want biometrics to be a seamless option for certain situations," Ramesh says.
The Big, Fat Silver Lining
The bottom-line question for security teams is: Are biometrics materially safer than other forms of authentication if, in the end, the data is stored and protected the same way?
Well, yes, mostly, experts say.
"I want to address a common misconception," says iProov founder and CEO Andrew Bud, "which is that somehow a biometric is like a password and, therefore, like a password, if it were stolen or compromised, then it would become worthless. That is a fundamental conceptual error, because a biometric — like a face — is not a secret."
He explains, "A password is good because it's secret. But a face is not a secret in the modern world. It's enough to look on LinkedIn or Facebook to grab people's faces. What makes a face or any other kind of biometric so very valuable is not that it's confidential, but that the genuine article is unique. You can steal my password, but you cannot steal my face."
In practice, then, leaked photographs, fingerprints, or iris scans from a biometric scanner aren't the end of the world. One might instinctively cringe at the notion of a hacker holding their photograph, but facsimiles of the real thing shouldn't fool cutting-edge recognition technologies today. ZKTeco terminals, for instance, have a temperature-detection mechanism that verifies a live person is present, preventing intruders from fooling the facial recognition with, say, a printed photograph.
Alternatively, "You can do a bio-to-bio permutation," Ramesh suggests. "If I take a picture of you in May 2024, in May 2025, based on AI-based calculations and predictions, we could predict how you could [physically] change."
Or, Bud says, "When you check a person's face, you can introduce something unpredictable into the scene which causes the face to react in ways that are unique compared to a deepfake or copy."
He adds, "What we do is use the screen of the user's device to flash an unpredictable and unique sequence of colors that illuminates the user's face, and we stream video of the face back to our servers. The way that light reflects off a human's face, and the way that reflection interacts with the ambient light … that's a very, very, very peculiar, unusual, and unpredictable challenge, which is extremely difficult to forge."
If the facial recognition mechanism is robust to copies, he explains, "In principle, you don't have to rely upon the security of the device that it's collected on. In fact, we start out with the assumption that the device cannot be trusted at all."
Unfortunately, there is one caveat. Unlike with physical features like faces, eyes, and fingerprints, "It's extremely hard, if not impossible, to detect a deepfake voice," Bud says. "There is so little information in voiceprints that there aren't many signals of fakeness to find" — so more advanced, information-rich biometric modalities are the way to go.