How Our Behavioral Bad Habits Are a Community Trait and Security Problem

Learn to think three moves ahead of hackers so you're playing chess, not checkers. Instead of reacting to opponents' moves, be strategic, and disrupt expected patterns of vulnerability.

Joe Sechman, Associate Vice President of R&D, Bishop Fox

December 14, 2022


Many an article chronicles hacked passwords available in bulk on the evil "Dark Web," presented as evidence that users' bad behavior is the root of all hacking. But as a former red teamer, I can tell you the end user isn't the only one who is a prisoner to discernible behavioral patterns.

There is a "pattern of vulnerability" in human behavior extending far beyond end users into more complex IT functions. Finding evidence of these patterns can give hackers an upper hand and speed the timeline of compromise.

It's a reality I recognized earlier in my career in operational roles. I've physically helped rebuild and relocate data centers and rewire buildings from top to bottom. It gave me a great perspective on what it takes to build in security from scratch, and on how unconscious behaviors and preferences can put it all at risk. In fact, understanding how to identify these patterns gave me a very reliable "superpower" when I moved into red teaming, which ultimately resulted in a patent grant. But more on that later.

Fatal Recall

Let's start by examining how our addiction to patterns betrays us — from credentials, to software operation, to asset naming.

While technology has afforded us so many benefits, the complexity of managing it — and the cumbersome controls intended to protect it — drives people to repeatable patterns and the comfort of familiarity. The more regular the task or function becomes, the more complacent we get with the pattern and what it telegraphs. For a red teamer, the ability to watch routines, from the physical to the logical, can offer a wealth of intelligence. Repeatability offers the opportunity and time to discern patterns, and then to find the vulnerability in those patterns that can be exploited.

Internal naming schemes in particular — be they asset names, system names, or credential groupings — lend themselves to picking common words for descriptive categorization. I saw one organization that used the names of mountains. And while an outsider may not know which system is K2 and which is Denali, the scheme acts as a filter for an attacker as they explore an environment. It's also an excellent social engineering tool, allowing an attacker to "speak the internal IT lingo." You may ask, OK, but "what's in a name?"

Brutal Reality

I'm sure you've heard of brute-force attacks where attackers throw guesses in volume at a target to find the right combination that leads to access. It's a numbers game and a blunt instrument. However, if you can discern the use of naming conventions, it sharpens the ability to focus on a range of accounts or systems, and then understand their potential attributes. It speeds up the clock for an attacker.
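To make this concrete, here is a hypothetical sketch of how a discovered naming convention collapses the search space. The "mountain-role-number" scheme, the name lists, and the helper function are all invented for illustration, not drawn from any real engagement:

```python
import itertools

# Hypothetical: reconnaissance suggests hosts follow a
# "<mountain>-<role><two-digit number>" convention.
# Both lists below are invented examples.
mountains = ["k2", "denali", "rainier"]
roles = ["db", "web", "app"]

def targeted_wordlist(names, roles, max_index=3):
    """Expand a naming convention into a short list of candidate hostnames."""
    return [f"{n}-{r}{i:02d}"
            for n, r, i in itertools.product(names, roles, range(1, max_index + 1))]

candidates = targeted_wordlist(mountains, roles)
print(len(candidates))   # 3 mountains x 3 roles x 3 indices = 27 focused guesses
print(candidates[0])     # k2-db01
```

Twenty-seven targeted guesses stand in for the millions a blind brute-force campaign would need, which is exactly how knowing the convention "speeds up the clock."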

But, you ask, "if these are internal conventions, how does an external attacker even find this type of information?"

Patently True

Enter my aforementioned superpower. As any experienced red teamer knows, information leaks out of organizations in many ways; you just need to know where to look, and how to find the signals in the noise.

Internal naming groups and conventions become exposed to the outside world in a variety of ways. They're buried in website code, detailed in technical documentation or as part of APIs, or just simply published in public system information.

Admittedly, this is a very large haystack, but finding the needles is exactly what the patent I was involved in (US Patent 10,515,219) endeavors to do. Site-scanning tools collect a wide range of information, and unsurprisingly, an overload of it. My approach strips out all the technical programming information (such as markup and JavaScript) and leaves just words. It then compares the results against lists of English words, and the algorithm flags groupings of words or abbreviations not present in the selected language that, presumably, signify an internal naming convention or credentials. As with any brute-force campaign, they may not, but as the axiom goes, the attacker only needs to be right once, so the ability to generate context-sensitive word lists can make or break your next campaign. This is when the picture starts to become clearer and the shapes of things such as user groups and system names manifest.
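The strip-markup-then-diff-against-a-dictionary idea described above can be sketched roughly as follows. This is not the patented implementation; the tiny dictionary, the regexes, and the sample page are all toy assumptions to show the shape of the technique:

```python
import re

# Stand-in dictionary; a real tool would load a full English word list.
ENGLISH_WORDS = {"welcome", "the", "internal", "portal", "status"}

# Crude markup stripper: drop script/style bodies, then any remaining tags.
TAG_RE = re.compile(r"<script.*?</script>|<style.*?</style>|<[^>]+>",
                    re.DOTALL | re.IGNORECASE)
# Tokens of three or more characters that start with a letter.
TOKEN_RE = re.compile(r"[A-Za-z][A-Za-z0-9_-]{2,}")

def candidate_tokens(html: str) -> set:
    """Strip markup, keep bare words, and return tokens absent from the dictionary."""
    text = TAG_RE.sub(" ", html)
    tokens = {t.lower() for t in TOKEN_RE.findall(text)}
    return {t for t in tokens if t not in ENGLISH_WORDS}

# Invented sample page leaking what looks like an internal hostname.
page = "<html><body>Welcome to the internal portal: <b>denali-db02</b> status</body></html>"
print(candidate_tokens(page))  # {'denali-db02'}
```

The surviving token is exactly the kind of non-dictionary grouping that hints at an internal naming convention and can seed a context-sensitive word list.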

Actions Speak Louder

So, we've established how our deeply rooted behaviors can betray our security literally with "writing on the wall." How do we change, or at least be more aware of, our very nature?

There's the old joke summed up in the punch line that you don't need to outrun a tiger — you just need to outrun your companions. In this way, first, use the basic "sneaker" technologies: password managers, multifactor authentication (MFA), and the like. These at least allow you to outrun your peers so attackers focus on the laggards in the herd.

Second, opt for regular change. Change is uncomfortable, but that discomfort triggers better situational awareness. Knowing yourself and your environment better, and forcing change, limits an attacker's ability to get to know you too well.

Next, trust your gut. If something doesn't seem right, it probably isn't. If you focus on failure and not the familiarity of the behavior around failure, you're better equipped to see the bad guys coming and make sure a small anomaly doesn't become a big problem.

Finally, play chess, not checkers. Too many organizations think they're playing chess, and may be employing more complex pieces and roles, but if, ultimately, you're playing in reaction to your opponents' moves, it's checkers in disguise.

It's a lesson I am teaching my own son while he's interested in learning chess. He's learning the strategy behind the game. He understands using the pieces and their characteristics to manipulate the game, and is quickly catching on to the fact that he also needs to focus on manipulating me. I'm teaching him to think three moves ahead, think about what is possible, and lure his opponent into doing what he wants them to do, not what they want to do — and, most importantly, to trust the gambit.

About the Author

Joe Sechman

Associate Vice President of R&D, Bishop Fox

Joe Sechman is responsible for nurturing a culture of innovation across Bishop Fox. With more than 20 years of experience, he has amassed many security certifications, delivered several presentations, and has co-authored multiple industry publications with groups such as ISC2, ISACA, ASIS, HP, and IEEE. Joe is a prolific inventor with nine granted patents in the fields of dynamic and runtime application security testing, attack surface enumeration, and coverage. Prior to joining Bishop Fox, he held leadership positions with companies such as Cobalt Labs, HP Fortify, Royal Philips, and Sunera LLC (now Focal Point Data Risk). Earlier in his career, he served as the lead penetration tester within SPI Labs at SPI Dynamics. Joe received his Bachelor of Business Administration degree in Management Information Systems from the Terry College of Business - University of Georgia.

