MITRE Engenuity Launches Evaluations for Security Service Providers

The results are labor-intensive to parse, so knowing how to interpret them is key, security experts say.

A new set of evaluations for managed security service providers that MITRE Engenuity has released can potentially give enterprise decision-makers a handy resource to consult when selecting a provider. The key to benefiting from the information, though, is knowing how to interpret the results, MITRE and others said this week.

MITRE Engenuity's first-ever evaluation of security service providers — like its product evaluations — does not offer any winners or losers, nor any rankings based on performance, nor any indication of how well, or poorly, a vendor might have performed. 

Instead, it offers detailed information on how different security service providers analyze and describe adversary behavior to their clients. MITRE's evaluation leaves it entirely up to security professionals and teams using the data to make any vendor comparisons they might want with it.

An Objective Look at MDR Capabilities

"MITRE Engenuity’s ATT&CK Evaluations for Managed Services is likely the only objective demonstration of what’s available in the managed services and managed detection and response (MDR) market," says Katie Nickels, director of intelligence at Red Canary, one of 16 security service providers that participated in the evaluation. "It allows organizations to see a realistic demonstration of how these tools actually work, with those results being provided by a neutral third party."

For the evaluation, MITRE Engenuity gave each of the participating vendors an opportunity to deploy their adversary detection and monitoring tools in a MITRE-hosted Microsoft Azure environment. Ashwin Radhakrishnan, general manager of ATT&CK evaluations at MITRE Engenuity, says the test environment contained a set of resources typically found in enterprise environments of all sizes.

A MITRE purple team then executed an emulated attack on the environment using the tactics and techniques of the well-known Iranian threat group OilRig.

Service providers that participated in the evaluation knew the simulated attack would happen within business hours during a specific two-week period. However, MITRE did not inform them of the exact timing, the techniques it would use, or which adversary it was emulating.

In carrying out the simulated attack, MITRE Engenuity's team showcased commonly used adversary tactics such as spear-phishing for initial access, credential dumping, Web shell installation, lateral movement, data exfiltration, and cleanup. Vendors had an opportunity to use any of the tools in their MDR portfolio to evaluate the malicious activity and report on it. 

But MITRE's rules prohibited them from taking any steps to respond to or block the attack, because the goal was to see how each service provider detected and analyzed the unfolding attack, and the detail and clarity with which it reported its findings.

Parsing the Results Can Be Challenging

MITRE Engenuity's evaluation results for each participating service provider offer both a high-level and a detailed view of how each of them detected the attack through the entire chain. The results show the depth of the analysis each vendor provided at each stage, their communications with MITRE during the emulation, the individual techniques they spotted and reported on, and the context and information they provided about the attack.

The information can be very useful for skilled security professionals who don't have the resources to do their own bake-off and are willing to compare results themselves, says John Pescatore, director of emerging security trends at the SANS Institute. But the data can be difficult to parse for others, he says. 

"MITRE Engenuity purposely doesn't make it easy to rank vendors in their evals," Pescatore says. "So, the tests are not useful for someone who just wants to make a 'safe' choice or compete the top three against each other."

"To compare, I’d have to look at each one and count how many techniques, etc., they covered, and I’d get some kind of ranking,' Pescatore notes. "But in order to understand how they did it, to see how that would fit with my processes, I have to either get info from the vendor or play with the product or service myself."

Context Is Key

Nickels from Red Canary says that while the results don't offer a clear apples-to-apples comparison between vendors, that’s not the point. "Every provider is different in how it detects activity and communicates findings, and every organization and security team has different needs," she says.

The best way to understand the value each vendor provided in MITRE Engenuity's evaluation is to consider qualitative aspects, such as how each vendor communicated with MITRE during the emulation, the screenshots they took, and the analysis and context they provided, she says: "Examining these resources, while labor-intensive, will offer organizations the best view into the value provided by each vendor."

In a report this week, Red Canary also highlighted what it described as some limitations of the MITRE Engenuity tests, such as being too endpoint-focused and weighting detection coverage too heavily over response.

"The test required participants to turn off many preventive and other security controls," Nickels says. "Under normal circumstances, most of the vendors who participated would have detected and responded to MITRE’s emulation activity relatively early, thereby preventing the more impactful, later-stage activity."

Another factor to keep in mind when interpreting the results is whether all participating vendors deployed the technologies they normally use for MDR, or used something else for the evaluation. "We recommend organizations reviewing these results ask vendors if their environment was normal for the average customer," Nickels says.

Radhakrishnan says organizations may be able to do deeper analysis based on the data in the results archive for each vendor. "For instance, you may be able to check timestamps in images to gauge response time for some results," he says. "In future iterations of the Managed Services Evaluation, we are looking for fair and objective ways to include mean time to respond (MTTR) and other such metrics." The results archive, he adds, contains all the content each service provider gave MITRE Engenuity during the evaluation.
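As a rough illustration of the timestamp analysis Radhakrishnan describes, the sketch below computes an approximate mean time to respond from hypothetical event times. The dates, delays, and event labels are invented for the example; in practice, the timestamps would have to be read manually from the screenshots and communications in each vendor's results archive.

```python
from datetime import datetime

# Hypothetical (emulated-activity time, vendor-report time) pairs, as they might
# be transcribed by hand from timestamps visible in a vendor's results archive.
events = [
    ("2022-06-07 09:15", "2022-06-07 09:42"),  # initial access reported
    ("2022-06-07 11:03", "2022-06-07 12:10"),  # credential dumping reported
    ("2022-06-08 14:20", "2022-06-08 14:55"),  # lateral movement reported
]

fmt = "%Y-%m-%d %H:%M"

# Delay between each emulated action and the vendor's report, in minutes.
delays = [
    (datetime.strptime(reported, fmt) - datetime.strptime(occurred, fmt)).total_seconds() / 60
    for occurred, reported in events
]

# Mean time to respond (here really "time to report") across the sampled events.
mttr_minutes = sum(delays) / len(delays)
print(f"Approximate MTTR: {mttr_minutes:.0f} minutes across {len(delays)} events")
```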

"We believe that this evaluation provides organizations a set of results that was collected through objective and fair evaluations, where the variables were consistent across all participants."

MITRE Engenuity's Recommendation

In a blog post, Radhakrishnan recommended that users consider the results in the proper context. Like Nickels, MITRE strongly recommended against organizations relying on the total number of techniques a vendor detected as the sole yardstick.

"Before starting any analysis of technique coverage, it is important to determine which techniques are most relevant to your organization based on the adversary groups and threats that your organization faces," Radhakrishnan said in the MITRE blog. The blog post offered 10 ways that security practitioners should interpret the evaluation results.

The recommendations include looking at the top-level report statuses of the service providers to get a high-level understanding of how they performed in the evaluation, looking at how the service providers presented their findings to their customers, and determining whether the service providers correctly attributed the adversary (OilRig). Other factors users can consider are whether the service providers recommended any mitigation measures; the length of their reports; the clarity of the language in the reports; and the details in their own releases about the evaluations, MITRE Engenuity said.

About the Author

Jai Vijayan, Contributing Writer

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year career at Computerworld, Jai also covered a variety of other technology topics, including big data, Hadoop, Internet of Things, e-voting, and data analytics. Prior to Computerworld, Jai covered technology issues for The Economic Times in Bangalore, India. Jai has a Master's degree in Statistics and lives in Naperville, Ill.
