MITRE's Latest ATT&CK Simulations Tackle Cloud Defenses
The MITRE framework's applied exercise provides defenders with critical feedback about how to detect and defend against common, but sophisticated, attacks.
January 24, 2025
In 2025, an international fintech firm will face attacks through its hybrid cloud infrastructure by some of the most sophisticated cyber operators on the Internet, targeting the company's Active Directory instance, employees' LinkedIn profiles, and shared code repositories to further their compromises.
A prediction? Not quite.
The scenario is the premise of the latest MITRE ATT&CK Evaluations test, an annual assessment gauntlet that pits cybersecurity firms against the techniques and tactics of the latest cyber threat actors. For vendors, the exercises — conducted by government contractor MITRE — allow them to test their detection, protection, and response capabilities in real-world scenarios to see what can be improved. For cybersecurity professionals, the results of the assessments can help them determine whether they are prepared to defend against sophisticated attacks.
While some vendors tout their detection ratings in the evaluations, the point is less about grades for security software and more about improving companies' defenses and vendors' products, says Lex Crumpton, principal cybersecurity engineer at MITRE.
"ATT&CK Evaluations is more of an adversary-emulation, purple-teaming, collaboration effort, if you will — we assess the vendors tooling on an environment that we build in-house," she says. "They don't know which techniques we are going to choose, or what we're not going to choose, based off of that techniques and scope document."
The MITRE ATT&CK Framework is well-known as a taxonomy of tactics and techniques used by cyberattackers, but every year MITRE also conducts testing of security products against the latest threats targeting organizations. In 2024, for example, the exercise mimicked attacks by the LockBit ransomware-as-a-service group, the Cl0p ransomware gang, and North Korean state-sponsored threat groups, which have commonly used ransomware to fund national goals.
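The framework itself is also published as machine-readable STIX data, which makes it easy to explore programmatically. The sketch below — which assumes the public enterprise-attack.json bundle in MITRE's mitre/cti GitHub repository is still hosted at the path shown — lists the techniques the framework files under a given tactic, such as lateral movement:

```python
# Sketch: list ATT&CK techniques for a given tactic from MITRE's public STIX bundle.
# Assumption: the enterprise-attack.json bundle from the mitre/cti GitHub repo is
# still available at this raw URL; adjust the path or load a local copy if not.
import json
import urllib.request

BUNDLE_URL = (
    "https://raw.githubusercontent.com/mitre/cti/master/"
    "enterprise-attack/enterprise-attack.json"
)

def techniques_for_tactic(tactic: str) -> list[tuple[str, str]]:
    """Return (technique ID, name) pairs whose kill-chain phase matches `tactic`."""
    with urllib.request.urlopen(BUNDLE_URL) as resp:
        bundle = json.load(resp)

    results = []
    for obj in bundle["objects"]:
        # Techniques and sub-techniques are STIX "attack-pattern" objects.
        if obj.get("type") != "attack-pattern" or obj.get("revoked"):
            continue
        phases = {p["phase_name"] for p in obj.get("kill_chain_phases", [])}
        if tactic in phases:
            ext_id = next(
                (r["external_id"] for r in obj.get("external_references", [])
                 if r.get("source_name") == "mitre-attack"),
                "?",
            )
            results.append((ext_id, obj["name"]))
    return sorted(results)

if __name__ == "__main__":
    for tid, name in techniques_for_tactic("lateral-movement"):
        print(tid, name)
```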
A variety of ransomware attacks were emulated in the test environment, including those targeting Windows and macOS, MITRE said in a December 2024 statement.
For 2025, one part of the evaluation — known as the Managed Services Evaluation — will focus on "cloud-based attacks, response/containment strategies, and post-incident analysis," according to the organization's scenario outline.
Companies can use the ATT&CK Evaluations in two ways, says Greg Young, vice president of cybersecurity at Trend Micro, which participated in the 2024 Evaluations along with 18 other companies.
"For [a company's] purchase decisions, this is one sort of data input — it should not be the only data input because the testing for MITRE is exceptionally narrow against a few techniques and tactics," he says. "For the second part, the tests [can inform] companies' own security ops centers and their own red teaming behavior — looking at it and saying, 'Well, what are adversaries using today?'"
Developing More Realistic Adversaries
The ATT&CK Evaluations use cybersecurity observations and threat reporting from analysts worldwide, collected from both MITRE's in-house cyber threat intelligence team and from the CTI community at large. The group collects information on attacks and selects the adversaries for the evaluations. A red development team creates a set of tools to emulate current techniques used by the selected adversaries, while the detection team — the blue team — verifies that those emulated approaches are valid within the evaluation's scope.
MITRE conducts two distinct rounds of testing. One is a managed-services round, in which the organization creates a black-box testing environment, giving the vendor being evaluated no information about the attack beyond the general category of threat. In the enterprise round, the vendor is given the technical scope and some information about the adversaries, such as whether they are a nation-state actor like China or North Korea, or rely on other tactics.
Like many testing organizations, MITRE has faced some pushback on aspects of its scenarios, Crumpton says.
"One of the biggest comments we had this year is — because we brought in false-positive noise [such as] benign user activity — some vendors argued that, 'Hey, this could be deemed malicious activity'," she says. "I think one of the benign use cases was disabling the firewall. One vendor said, 'Hey, the sys admins from our companies would never disable the firewall.'"
Evaluations Push for Improvement
Vendors get graded on how they perform, but the focus is on giving information to both the vendors and businesses about how they can improve their defenses, Crumpton says.
"Ultimately, we are there to improve the tools," she explains. "If we're emulating this adversary and we find this technique that your tool can't detect, can we help you improve your tool so that you can now detect that technique? That's something that I think also the customers or the community should look at."
Defenders can take a page from the ATT&CK Evaluations as well, creating playbooks to detect and protect against the tested threats, says Trend Micro's Young. During an evaluation, MITRE logs activity and takes screenshots, giving organizations a detailed picture of the attack unfolding and mapping the steps against the ATT&CK Framework.
"Knowing that adversaries are now using this kind of technique — say, this kind of lateral movement, or they're going to go after this kind of resource — that's exceptionally helpful for [a company] designing their defenses," he says. "I almost think there's more value in looking at the [ATT&CK] framework than the evaluations, but it depends on your purpose."