Ambient.ai Expands Computer Vision Capabilities for Better Building Security
The AI startup releases new threat signatures to expand the computer vision platform’s ability to identify potential physical security incidents from camera feeds.
A comprehensive cybersecurity strategy should include physical security. Adversaries don't need to compromise a corporate device or breach the network remotely if they can simply walk into the office and plug directly into it.
CISOs are increasingly including physical security as part of their strategic investments, says Stephanie McReynolds, head of marketing at Ambient.ai. Organizations are spending a lot of money and effort to lock down cybersecurity, but all of those security controls are useless if the adversary can just enter a restricted space and leave with equipment.
"The last mile of cybersecurity is physical location," McReynolds says.
Ambient.ai uses computer vision technology to solve physical security problems, such as monitoring who is entering the building or a restricted area and watching all the video feeds coming from the camera network. Computer vision is a subcategory of artificial intelligence dealing with how computers can process images and videos and derive an understanding of what they are seeing. The idea behind computer vision is to give computers eyes that see the same things humans see, and to train algorithms to reason about what those eyes observe.
In the case of Ambient.ai, the company's computer vision intelligence platform serves as "the brain" behind physical access control systems, such as security cameras and physical sensors (such as door locks and entry pads). This week, the company expanded the catalog of behaviors the computer vision platform can recognize by adding 25 new threat signatures.
Computers Help Humans See
Traditionally, physical security involves staff in the security center monitoring alerts from the sensors and watching video feeds to try to detect when something untoward is happening. They may receive alerts that a door is open, or that a person swiped the access card to get into the building after-hours. There might be camera footage of someone loitering for quite some time in the building lobby, or a person entering a restricted area carrying an unauthorized laptop. Humans are expected to detect and respond to security incidents, but between fatigue and too much information to process, things can get missed.
"One individual is trying to watch 50 camera feeds at once. This doesn't work," McReynolds notes.
There have been three waves in computer vision, McReynolds says. The first wave was basic detection — that there was an object there, but no insight into what it was. The second wave added recognition, so it knew what it was looking at, such as whether it was a person or a dog. But it was a limited form of recognition, and there was a lot that was still unknown about the object it was looking at. The third wave, the current one, takes in context clues from the broader scene to understand what is happening. Just as a human would look at details around the object to understand what is happening, such as whether the person is sitting or if the person is outside, computer vision technology is now capable of collecting those details.
Ambient.ai breaks down the image or video into "primitives" — which refers to components such as interactions, locations, and objects seen — and constructs a signature to understand what is happening. A signature may be something like a person standing in the lobby for a long time not interacting with anyone, for example.
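The primitives-to-signature idea can be sketched in code. This is a minimal illustration only: the field names, the loitering rule, and the time threshold are all assumptions for the sake of example, not Ambient.ai's actual model.

```python
from dataclasses import dataclass

@dataclass
class Primitive:
    """One decomposed element of a scene: an object, where it is,
    what it is doing, and for how long."""
    obj: str          # e.g. "person"
    location: str     # e.g. "lobby"
    interaction: str  # e.g. "standing", "talking"
    duration_s: int   # seconds this state has persisted

def matches_loitering_signature(p: Primitive) -> bool:
    """Hypothetical signature: a person standing in the lobby for a
    long time, not interacting with anyone. The 10-minute threshold
    is an arbitrary assumption."""
    return (
        p.obj == "person"
        and p.location == "lobby"
        and p.interaction == "standing"
        and p.duration_s > 600
    )

print(matches_loitering_signature(Primitive("person", "lobby", "standing", 900)))
```

In practice a platform would evaluate many such signatures against a continuous stream of primitives, but the shape of the check — scene components in, behavior match out — is the same.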
The new threat signatures bring the platform's catalog to more than 100 recognizable behaviors, McReynolds says.
Recognizing What Is an Incident
The Ambient.ai Context Graph assesses three risk factors to determine next steps: the context of the location, the movements that create behavior signatures, and the type of objects interacting in a scene. Based on these factors, the platform can dispatch security personnel to handle the incident, validate risks, or trigger proactive alerts. With the Context Graph, analysts can also tell which alerts are not security incidents, such as a door that didn't latch properly, and close the ones that don't require any action.
"A person holding a knife running in the kitchen isn't a security incident," McReynolds says. "A person holding a knife running in the lobby, on the other hand, is a security incident."
VMware, an Ambient.ai customer, found that 93% of its alerts each year were false positives. By integrating Ambient.ai's platform with its physical access control systems, VMware's security teams no longer had to deal with those alerts and could focus their attention on the remaining 7% to stop security incidents on its campus.
McReynolds described a potential workplace violence scenario, in which a former employee tried to use their badge to enter the building. The invalid badge in and of itself is not a security threat, but paired with security footage of the former employee sitting in the lobby and not interacting with anyone, there is enough reason for concern. The alert would then be prioritized and a guard sent to approach the individual.
"Sometimes it takes just a conversation and the person will stand down," McReynolds says.
All that is accomplished without resorting to facial recognition, which brings a host of privacy implications. Ambient.ai uses machine learning, pattern-matching, and computer vision to make decisions about what is important.
Computer Vision in Security
Computer vision technology is useful in several security contexts because it can be used to detect manipulations that are less visible to the human eye, says Fernando Montenegro, senior principal analyst at Omdia. For example, the technology can be used to identify spoofed logos and websites used in account takeovers and ecommerce fraud. Another interesting use case is to represent binary samples as images, and then using imaging classification techniques to classify them as malicious or not, he says.
One aspect of computer vision is the capability to analyze "datasets that are not originally 'images' themselves, but can be encoded as such," Montenegro says.
Humans have the capacity to say something doesn't look right, even if they can't specifically point to something that is wrong, says Gunter Ollmann, CSO of Devo. An intriguing application of computer vision research is to train the algorithm to be able to detect something is wrong because of the way it looks, he says. By turning source code into an image, the machine can analyze the structure and other patterns to detect potential issues without having to analyze the code line by line. This kind of analysis can be used for malware analysis, by color-coding different categories of function and analyzing the image to get an understanding of what the application is doing.
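The binary-as-image technique Montenegro and Ollmann describe starts with a simple transformation: raw bytes become grayscale pixel values that an image classifier can consume. A minimal sketch of that encoding step, with the row width chosen arbitrarily for illustration:

```python
import math

def bytes_to_grayscale(data: bytes, width: int = 16) -> list[list[int]]:
    """Render raw bytes (e.g. a binary sample or source file) as rows
    of grayscale pixel values (0-255), zero-padding the last row.
    The resulting grid can be fed to standard image classifiers."""
    rows = math.ceil(len(data) / width)
    padded = data + bytes(rows * width - len(data))
    return [list(padded[r * width:(r + 1) * width]) for r in range(rows)]

# A repeated DOS-header-like byte pattern becomes a small "image".
img = bytes_to_grayscale(b"MZ\x90\x00" * 10, width=8)
print(len(img), len(img[0]))  # 5 8
```

Structurally similar binaries produce visually similar textures in such images, which is what lets imaging classification techniques separate families of samples without line-by-line analysis.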
There are several computer vision startups tackling cybersecurity issues. Hummingbirds AI uses facial biometrics to authenticate users and grant access to the device; when the computer "sees" an unauthorized person close to the screen, the tool blocks access. Pixm relies on computer vision to identify and stop spear-phishing attacks. The platform runs in the browser and works from the moment the user clicks on a link until the campaign is disrupted.
"We are now in an exciting era where [the machine] can collaborate with the human," Ambient.ai's McReynolds says, regarding advancements in computer vision.