Thinking Outside the Box Is More Than a Slogan
Threat intelligence is more than analyzing bits and bytes. The consequences of failure mean we must think differently about how we analyze problems.
November 18, 2024
Imagine there's a challenging problem to solve — one that has vexed previous teams or frustrated multiple attempts to find an answer. Don't worry. All you have to do is pull everyone together and tell them to "think outside the box." Simple, right?
Except it's not. I have given multiple keynotes on the impending cyberspace Cold War with Russia. Before diving into the history of Russian intelligence, I ask the audience how many have heard of thinking outside the box. Invariably, almost every hand goes up. I inquire if anyone knows the origin of the concept. Crickets.
I next show a slide with the classic nine-dots puzzle — nine dots on the screen arranged in three rows of three. Then I ask how many people see a square represented by the dots on the screen. Again, almost every hand goes up. Even though my audience doesn't realize it, I have framed the problem according to my terms, ensuring it will be nearly impossible to find the solution to my real question: Can you connect all nine dots with four straight lines without taking the pen from the paper? Like this:
[Image: the nine-dots puzzle solved with four straight lines extending beyond the square. Source: Steve Gustafson (Smerdis of Tlön)]
The Problem Isn't the Problem
Our cybersecurity adversaries operate in areas outside the box. The problem we see isn't the problem. The only problem that matters is how our adversaries think about it. To beat them, we must begin to think like them.
The original nine-dot problem was developed in 1930 by Norman Raymond Frederick Maier, an experimental psychologist at the University of Michigan. The experiment was designed to test whether college undergraduates could solve it in only a few minutes. Although exceedingly simple on its face, fewer than 5% of the students were successful.
In 1984, Gestalt psychologists attempted to understand the puzzle's dilemma. They concluded that "the nine-dot problem is difficult because people are so dominated by the perception of a square that they do not 'see' the possibility of extending lines outside the square formed by the dots."
How Adversaries Expose Our Flaws
How does this matter to threat intelligence and cybersecurity? Here are some examples of how our adversaries' unconventional thinking has exposed our flaws.
In May 2023, the United States Department of Justice announced a court-ordered disruption of the Snake malware network controlled by Russia's Federal Security Service (FSB). This malware network operated for nearly 20 years, repeatedly evading detection and removal while upgrading its toolkit to extend its lifespan. Snake operated longer than the vast majority of today's cybersecurity practitioners have been employed.
Another glaring example was the SolarWinds compromise, which exploited the industry's implicit trust in software updates. The attack also exposed an even bigger flaw in our thinking about how updates were pushed to the operational environment. The conventional wisdom was that telemetry data needed to be kept only as long as it took to monitor an update's deployment. Russia's Foreign Intelligence Service (SVR) had a simple solution to avoid detection: wait several days before activating the implant. Russia didn't out-code us. They out-thought us.
China also taught a non-cyber-related lesson with the spy balloon incident in 2023. Civilian aircraft usually operate at altitudes of 42,000 feet or less, while high-altitude balloons generally operate above 60,000 feet. At first, China claimed the object was an errant weather balloon, which no one sincerely believed. Like the introduction of benign code into the SolarWinds update server to test whether it would be detected, the first package to transit our airspace was most likely an actual weather balloon. The operation exploited a domain awareness gap, a military term meaning we weren't monitoring that altitude band because we weren't operating there.
Hiding in Plain Sight
The most poignant lesson I learned about how thinking outside the box has real-world ramifications came while teaching behavioral analysis interviewing to damage assessment agents at the National Security Agency.
Victor Cherkashin was a KGB officer who holds the distinction of having handled two of the most damaging traitors in American history: Aldrich Ames and Robert Hanssen. The KGB needed a plausible reason for Ames to meet with Cherkashin. The solution was to hide in plain sight: Ames claimed he was running a counterintelligence operation, ostensibly trying to recruit the very officer who was handling him for the KGB. A statement Ames made perfectly sums this up:
"But the defining element is always a betrayal of trust. That is what is at the core of the intelligence officer's world — betraying another person's trust in you."
We can't be content with traditional thinking to defeat the myriad threats arrayed against us; our trust will ultimately be betrayed. Thinking outside the box must be more than a slogan — it has to become a way of life.
By Morgan Wright, Chief Security Advisor, SentinelOne
About the Author:
Morgan Wright is Chief Security Advisor for SentinelOne and Senior Fellow at The Center for Digital Government. His testimony before Congress on Healthcare.gov changed how personally identifiable information was collected. He was a senior advisor in the US State Department Antiterrorism Assistance Program and senior law enforcement advisor for the 2012 Republican National Convention. He has 18 years in law enforcement and taught behavioral analysis at the National Security Agency. He has developed solutions for the largest technology companies in the world, including Cisco, SAIC, Unisys, and Alcatel-Lucent/Bell Labs.