DeepPhish: Simulating Malicious AI to Act Like an Adversary
How researchers developed an algorithm to simulate cybercriminals' use of artificial intelligence and explore the future of phishing.
One idea holds true in both physical space and cyberspace: When you're plotting against an adversary, you want all the intelligence you can get on which weapons they're using and how they're using them.
The same idea drove researchers at Cyxtera Technologies to explore the weaponization of artificial intelligence (AI) in phishing attacks, which continue to evolve as cybercriminals employ more sophisticated techniques. Encryption and Web certificates, for example, have become go-to phishing tactics as attackers alter their threats to evade security defenses.
Web certificates provide a low-cost means for attackers to convince victims their malicious sites are legitimate, explains Alejandro Correa, vice president of research at Cyxtera. It doesn't take much to get a browser to display a "secure" icon – and that little green lock can make a big difference in whether a phishing scam is successful, he says. People trust it.
By the end of 2016, less than 1% of phishing attacks leveraged Web certificates, he continues. By the end of 2017, that number had spiked to 30%. It's a telling sign for the future: If attackers can find a means to easily increase their success, they're going to take it.
"We expect by the end of this year more than half of attacks are [going to be] done using Web certificates," Correa says. "There is no challenge at all for the attacker to just include a Web certificate in their websites … but it does carry a lot of effectiveness improvements."
So far, there is no standard approach for detecting malicious TLS certificates in the wild. As attackers become more advanced, defenders must learn how they operate. Correa points to the emergence of AI and machine learning in security tools and explains how this inspired Cyxtera's researchers to explore how attackers might turn the same technology to cybercrime.
"Nowadays, in order for us to analyze the hundreds of thousands of alerts we receive every day, we have to rely on machine-learning models in order to be more productive," he says. "There is simply not enough manpower to monitor all the possible threats."
At this year's Black Hat Europe event, taking place in London in December, Correa will present the team's findings in a session entitled "DeepPhish: Simulating Malicious AI."
As part of his presentation, Correa will demo DeepPhish, an algorithm the team developed to simulate how cybercriminals could weaponize AI.
The goal was to figure out how attackers could improve their effectiveness using open source AI and machine-learning tools available to them online. "We wanted to figure out what is the best way, from an attacker's perspective, to bypass these detection algorithms," Correa says.
Researchers collected sets of URLs manually created by attackers and built algorithms to learn which patterns made them effective, meaning the URLs weren't blocked by a blacklist or a defensive machine-learning model. Using these URLs as a foundation, the team created a neural network designed to learn those patterns and generate new URLs with a higher chance of slipping past detection.
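As a rough illustration of that idea, the sketch below shows a character-level URL generator in Python with Keras. The seed URLs, sequence length, and model size are placeholders, not the researchers' actual data or architecture; the point is only to show how a recurrent model can learn character patterns from attacker-authored URLs and then sample new candidates.

```python
import numpy as np
import tensorflow as tf

# Toy stand-ins for URLs previously used by a single threat actor (not real data).
seed_urls = [
    "http://secure-login.example-bank.com/verify/account",
    "http://example-bank.account-update.net/signin",
    "http://login-example-bank.com/secure/update",
]
text = "\n".join(seed_urls)

# Map each character to an integer index and back.
chars = sorted(set(text))
char_to_idx = {c: i for i, c in enumerate(chars)}
idx_to_char = {i: c for c, i in char_to_idx.items()}

# Build (window of characters -> next character) training pairs.
seq_len = 20
X, y = [], []
for i in range(len(text) - seq_len):
    X.append([char_to_idx[c] for c in text[i:i + seq_len]])
    y.append(char_to_idx[text[i + seq_len]])
X, y = np.array(X), np.array(y)

# Small recurrent model that predicts the next character of a URL.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(len(chars), 32),
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(len(chars), activation="softmax"),
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
model.fit(X, y, epochs=50, verbose=0)

def generate(seed: str, n_chars: int = 40, temperature: float = 0.8) -> str:
    """Sample a new candidate URL one character at a time."""
    out = seed
    for _ in range(n_chars):
        window = [char_to_idx.get(c, 0) for c in out[-seq_len:]]
        window = [0] * (seq_len - len(window)) + window  # left-pad short seeds
        probs = model.predict(np.array([window]), verbose=0)[0]
        probs = np.log(probs.astype("float64") + 1e-9) / temperature
        probs = np.exp(probs) / np.sum(np.exp(probs))
        out += idx_to_char[int(np.random.choice(len(chars), p=probs))]
    return out

print(generate("http://secure-"))
```

With a realistic training corpus, the generated strings inherit the surface patterns of the actor's past URLs, which is exactly why they stand a better chance against defenses trained on the same patterns.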
To test their work, they modeled the behavior of specific threat actors. In one scenario, an actor with a 0.7% effectiveness rate jumped to 20.9% effectiveness with DeepPhish applied.
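For context, the effectiveness rate in that comparison can be read as the share of an actor's URLs that a given defense fails to block. A minimal sketch of that bookkeeping, using a made-up keyword blacklist as the stand-in defense:

```python
from typing import Callable, Iterable

def effectiveness_rate(urls: Iterable[str],
                       is_blocked: Callable[[str], bool]) -> float:
    """Fraction of candidate phishing URLs that a given defense does NOT block."""
    urls = list(urls)
    if not urls:
        return 0.0
    return sum(1 for u in urls if not is_blocked(u)) / len(urls)

# Toy stand-in defense: a keyword blacklist rather than a trained classifier.
def blacklist_defense(url: str) -> bool:
    return any(word in url for word in ("login", "verify", "account"))

candidates = [
    "http://secure-login.example.com/",
    "http://example-bank.signin-update.net/",
]
print(f"effectiveness: {effectiveness_rate(candidates, blacklist_defense):.1%}")
```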
"If we're going to effectively differentiate ourselves, we need to understand how that is going to be done," Correa says. He calls the results a motivation: "[It will] enhance how we may start combatting and figuring out how to defend ourselves against attackers using AI."