Microsoft Uses Machine Learning to Predict Attackers' Next Steps
Researchers build a model to attribute attacks to specific groups based on tactics, techniques, and procedures, and then figure out their next move.
April 12, 2021
Microsoft is developing ways to use machine learning to turn attackers' specific approaches to compromising targeted systems into behavioral models that can automate the attribution of attacks to specific actors and predict their most likely next steps.
In a research blog published earlier this month, the software giant stated it has used data collected on threat actors through its endpoint and cloud security products to train a large, probabilistic machine-learning model that can associate a series of tactics, techniques, and procedures (TTPs), the signals defenders can glean from an ongoing cyberattack, with a specific group. The model can also reverse the association: Once an attack is attributed to a specific group, the machine-learning system can use its knowledge to predict the most likely next attack step that defenders will observe.
The machine-learning approach could lead to quicker response times to active threats, better attribution of attacks, and more context on ongoing attacks, says Tanmay Ganacharya, partner director for security research at Microsoft.
"It's critical to detect an attack as early as possible, determine the scope of the compromise, and predict how it will progress," he says. "How an attack proceeds depends on the attacker's goals and the set of tactics, techniques, and procedures that they utilize, [and we focus] on quickly associating observed behaviors and characteristics to threat actors and providing important insights to respond to attacks."
In the early April blog post, Microsoft described research into machine learning and threat intelligence that uses TTPs from the MITRE ATT&CK framework, the attack chain, and the massive data set of trillions of daily security signals from its 400,000 customers to model threat actors. Just as defenders use playbooks to respond to attacks so they don't forget important steps in the heat of the moment, attackers typically have a standard way of conducting attacks. The machine-learning approach attempts to model their behavior.
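To make the idea concrete, the sketch below (an illustration only, not drawn from Microsoft's blog post; the actor "playbooks" and scoring are invented) shows how an observed attack could be encoded as a set of MITRE ATT&CK technique IDs and compared against known actor profiles:

```python
# Hypothetical sketch: encode an observed attack as a set of MITRE ATT&CK
# technique IDs and compare it against known actor "playbooks".
# The technique IDs are real ATT&CK identifiers; the actor profiles are invented.

OBSERVED_ATTACK = {"T1566", "T1059", "T1105", "T1562"}  # phishing, scripting, tool transfer, impair defenses

ACTOR_PLAYBOOKS = {
    "ACTOR_A": {"T1566", "T1059", "T1105", "T1562", "T1486"},
    "ACTOR_B": {"T1190", "T1505", "T1003", "T1021"},
}

def overlap_score(observed: set[str], playbook: set[str]) -> float:
    """Fraction of the observed techniques that appear in an actor's playbook."""
    return len(observed & playbook) / len(observed)

for actor, playbook in ACTOR_PLAYBOOKS.items():
    print(actor, round(overlap_score(OBSERVED_ATTACK, playbook), 2))
```

A simple overlap score like this only hints at the problem; the probabilistic model Microsoft describes goes further by weighting how distinctive each TTP is for each actor.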
Companies are early in the process of adopting machine learning for threat intelligence processing and enrichment. While about 70% of companies are using machine learning with threat intelligence in some way, 54% of those companies are currently dissatisfied with the technology, according to the SANS Institute's "2021 SANS Cyber Threat Intelligence Survey."
Using machine learning to provide more useful information could help, the Microsoft 365 Defender Research team stated in its blog.
"We are still in the early stages of realizing the value of this approach, yet we already have had much success, especially in detecting and informing customers about human-operated attacks, which are some of the most prevalent and impactful threats today," the company wrote.
To enable its research, the company consumes data from its Microsoft Defender anti-malware software and services to create collections of TTPs. Using those signals, the company's researchers implemented a Bayesian network model — which in cybersecurity is most commonly associated with anti-spam engines — because it is "well suited for handling the challenges of our specific problem, including high dimensionality, interdependencies between TTPs, and missing or uncertain data," they said.
Bayes' theorem can calculate the probability, given certain TTPs and historical patterns, of a certain group being behind the attacks.
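As a simplified illustration, and not Microsoft's actual model, the sketch below applies Bayes' theorem with a naive independence assumption to rank two hypothetical actors given a handful of observed TTPs. All probabilities are invented, and a full Bayesian network would also capture the interdependencies between TTPs that this toy version ignores:

```python
# Toy illustration (not Microsoft's model): naive-Bayes-style attribution.
# Given per-actor probabilities of using each TTP and a prior over actors,
# Bayes' theorem scores P(actor | observed TTPs) up to a normalizing constant.
# All numbers below are invented for demonstration.

PRIOR = {"ACTOR_A": 0.5, "ACTOR_B": 0.5}

# P(TTP observed | actor), which in practice would be estimated from historical incidents.
LIKELIHOOD = {
    "ACTOR_A": {"tool_transfer": 0.8, "disable_security_tools": 0.7, "credential_dumping": 0.2},
    "ACTOR_B": {"tool_transfer": 0.3, "disable_security_tools": 0.2, "credential_dumping": 0.9},
}

def posterior(observed_ttps):
    """Return normalized P(actor | observed TTPs) under an independence assumption."""
    scores = {}
    for actor, prior in PRIOR.items():
        p = prior
        for ttp in observed_ttps:
            p *= LIKELIHOOD[actor].get(ttp, 0.05)  # small floor for unseen TTPs
        scores[actor] = p
    total = sum(scores.values())
    return {actor: p / total for actor, p in scores.items()}

print(posterior(["tool_transfer", "disable_security_tools"]))
# With these toy numbers, ACTOR_A comes out roughly 90% likely versus ACTOR_B.
```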
"Massive data can provide insights humans cannot through supervised learning," Ganacharya says. "In this case, the TTPs are used as variables in a Bayesian network model, which is a complex statistical tool used to correlate alerts from various detection systems and [predict] future attack stages. These insights help analysts in attribution when a specific actor is present, allowing focused investigations."
Using the probability model also gives analysts additional tools to predict an attacker's next potential action. If certain TTPs are observed (for example, the Transfer of Tools and Disable Security Tools techniques from the MITRE ATT&CK framework), the model predicts the attack steps the defender is most likely to see next.
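A rough sketch of that idea, using invented TTP sequences rather than anything published by Microsoft, might rank candidate next steps by how often they follow the observed prefix in an actor's historical attacks:

```python
# Hypothetical sketch of next-step prediction: given the actor inferred during
# attribution and the TTPs seen so far, rank the TTPs most often observed next
# in that actor's historical attack sequences. All data here is invented.

from collections import Counter

HISTORICAL_SEQUENCES = [
    ["tool_transfer", "disable_security_tools", "credential_dumping", "lateral_movement"],
    ["tool_transfer", "disable_security_tools", "lateral_movement", "data_exfiltration"],
    ["tool_transfer", "credential_dumping", "lateral_movement"],
]

def predict_next(observed: list[str]) -> list[tuple[str, float]]:
    """Rank candidate next TTPs by how often they follow the observed prefix."""
    counts = Counter()
    for seq in HISTORICAL_SEQUENCES:
        n = len(observed)
        if seq[:n] == observed and len(seq) > n:
            counts[seq[n]] += 1
    total = sum(counts.values())
    return [(ttp, c / total) for ttp, c in counts.most_common()] if total else []

print(predict_next(["tool_transfer", "disable_security_tools"]))
# e.g. [('credential_dumping', 0.5), ('lateral_movement', 0.5)]
```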
In addition, the model can be easily updated with new information as attackers change their approaches to compromising targets, the company said.
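One way such an update could work in principle (an assumption for illustration, not a description of Microsoft's implementation) is to fold newly attributed incidents into count-based probability estimates with smoothing, so the model tracks shifts in an actor's tradecraft without full retraining:

```python
# Illustrative sketch only: refresh count-based P(TTP | actor) estimates as new
# attributed incidents arrive. Laplace smoothing keeps rarely seen TTPs at a
# small nonzero probability. All names and numbers are invented.

def update_likelihood(counts: dict[str, int], incidents: int, new_incident_ttps: set[str],
                      ttp_universe: list[str], alpha: float = 1.0) -> dict[str, float]:
    """Add one new incident's TTPs to the counts and return smoothed P(TTP | actor)."""
    incidents += 1
    for ttp in new_incident_ttps:
        counts[ttp] = counts.get(ttp, 0) + 1
    return {ttp: (counts.get(ttp, 0) + alpha) / (incidents + 2 * alpha) for ttp in ttp_universe}

counts = {"tool_transfer": 8, "disable_security_tools": 7}
print(update_likelihood(counts, 10, {"tool_transfer", "credential_dumping"},
                        ["tool_transfer", "disable_security_tools", "credential_dumping"]))
```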
Yet challenges remain. Building the model requires good data on threat actors and their specific TTPs, and human experts are still needed to evaluate that data and, currently, to interpret the model's results for customers.
"If the training data does not represent the true behaviors, the model can make poor predictions," Ganacharya says. "This could result in security operations taking incorrect actions to halt the attack, either wasting critical response time by following false leads or impacting users who are not part of the attack."