Leveraging Behavioral Insights to Counter LLM-Enabled Hacking
As LLMs broaden access to hacking and diversify attack strategies, understanding the thought processes behind these innovations will be vital for bolstering IT defenses.
COMMENTARY
Hacking is innovation in its purest form. Like any other innovation, a successful hack requires developing a creative solution to the scenario at hand and then effectively implementing that solution. As technologies facilitate implementation, successfully preventing a hack (that is, blue teaming) or simulating an attack to test defenses (red teaming) will require a better understanding of how adversaries generate creative ideas.
In the 1990s, many organizations and vendors did not sufficiently prioritize security when designing systems. As a result, hackers needed relatively little time to devise ways around their security measures. The problem was that while many hackers could imagine attacks that would bypass these rudimentary defenses, few had the technical skills to implement them. For instance, while hacking enthusiasts understood in theory how to exploit vulnerabilities in insecure network protocols, most lacked the programming skills to write a raw socket library to do so. The bottleneck was implementation.
Over the next two decades, automated tools were developed for almost every generalized attack pattern. Suddenly, the complicated attacks that a '90s hacker could only imagine, but lacked the programming capability to execute, became possible for anyone with the click of a button. While some attacks still require technical skills, today it is possible to hack by creatively chaining together the abundant functions of various automated hacking tools (e.g., Metasploit, Burp Suite, Mimikatz) to slip through a system's cracks.
Similarly, it is easy to find help, such as Copilot apps and software developers on freelancing platforms, to write specific functions required to implement an attack. In other words, with the advent of new tools and platforms, the emphasis in a successful hack has been shifting from implementation (that is, being able to write the code for the attack you imagine) to creativity (being able to imagine a novel attack). Now, the advent of large language models (LLMs) with growing inventive capabilities means that pure creativity — rather than bottlenecks in technical capability — will drive the next era of hacking.
A New Breed of Hackers
How will this new breed of hackers differ in how they devise new cyberattacks? In many cases, their creativity will take the form of designing a novel prompt, as implementation will increasingly happen through LLMs and their various plug-ins (for instance, Anthropic's Claude 3.5 Sonnet model can already use computers). Most importantly, because many of them will not have a background in computer science, their reasoning will build on ideas and solutions from other domains, a process known as analogical transfer. Throughout history, fighters have designed novel martial arts by drawing inspiration from the behaviors of different animals. In a similar vein, a recently developed side-channel attack uses signals from wireless devices in a building to map the bodies of the people inside (analogous to how bats use echolocation to find their prey). Research has also found that information can be stolen even from air-gapped systems not connected to the Internet by examining the electromagnetic emissions of a monitor's cable or by analyzing the acoustic patterns of the screen itself to reconstruct what is being displayed (perhaps analogous to reconstructing the recent history of a black hole by analyzing the faint remnant signals of Hawking radiation).
It's likely that novel prompts making similar analogies will lead to creative uses of LLMs in devising new and unexpected attack patterns. Attackers may draw inspiration from famous battles, chess games, or business strategies and turn those analogies into new techniques. This also means that successfully preventing such attacks, or emulating them for red-teaming purposes, will require using research methods from behavioral sciences, such as marketing, to anticipate the common and uncommon prompts an attacker might try.
Research into potential prompts for designing an attack can take various forms. Traditional research methods, such as idea generation experiments, surveys, and in-depth interviews, can provide insights into common and uncommon prompts people may consider. Additionally, data from search engines and social media platforms can reveal common combinations of knowledge (for instance, through market basket analysis, as sketched below), which can be valuable for estimating the analogies that people interested in hacking may be more likely to generate. Finally, crowdsourcing-based research, such as hacking challenges, will again be an asset, but the focus will be not only on the attacks themselves but also on the prompts used to develop them. Prompts that result in novel attacks are likely to be regularly utilized by both blue and red teams, much like Google Dorks are employed today.
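To make the market basket idea concrete, here is a minimal sketch in Python that counts which topic pairs co-occur across search sessions; pairs with high support hint at the cross-domain analogies a would-be attacker might reach for first. The session data, topic labels, and support threshold are invented for illustration, not drawn from real query logs.

```python
# Minimal market-basket-style sketch over hypothetical search-session data.
# Each "basket" is the set of topics one (invented) user searched in a session.
from collections import Counter
from itertools import combinations

sessions = [
    {"echolocation", "wifi sensing", "indoor mapping"},
    {"chess openings", "network reconnaissance"},
    {"echolocation", "wifi sensing"},
    {"business strategy", "social engineering"},
    {"chess openings", "network reconnaissance", "lateral movement"},
]

# Count how often each pair of topics appears together in a session.
pair_counts = Counter()
for basket in sessions:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

# Report pairs whose support (share of sessions containing both topics)
# clears an illustrative threshold of 40%.
n_sessions = len(sessions)
min_support = 0.4
for pair, count in pair_counts.most_common():
    support = count / n_sessions
    if support >= min_support:
        print(f"{pair}: support={support:.2f}")
```

In practice, the same counting logic would run over large-scale query or forum data, and the high-support pairs could seed the prompt lists that blue and red teams test against their own models.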
As LLMs broaden access to hacking and diversify attack strategies, understanding the thought processes behind these innovations will be vital for bolstering IT defenses. Insights from behavioral sciences like marketing will play a key role in achieving this goal.