Cybersecurity Survival: Hide From Adversarial AI

Consider adding security-through-obscurity tactics to your organization's defensive arsenal. Mask your attack surface behind additional zero-trust layers to remove AI's predictive advantage.

Daniel Ballmer, Senior Transformation Analyst for the CXO REvolutionaries & Community, Zscaler

April 24, 2023


If you read articles about how threat actors can use artificial intelligence (AI), you've probably noticed they fall into two categories: improving deception capabilities and automating malicious coding.

The first case argues that generative AI using large language models (LLMs) can create phishing and smishing lures that are more believable. Given that roughly 90% of successful cyberattacks begin with phishing, AI improving deceptive lures is cause for major concern. Imagine the convincing communications the new GPT-4 model, which hired a person to solve a CAPTCHA challenge for it during pre-release testing, might write.

The second topic, AI writing malware (which it has done), strikes me as less disturbing. In this case, experts argue that LLMs may write polymorphic malware or make it easier for less-skilled adversaries to create malicious code. However, we live in a world where over 450,000 new variants of malware and potentially unwanted applications (PUAs) are registered every day. It's hard to see how low-skilled users adding to that number would significantly change the threatscape. Simply creating more malware is an easily automated task. AI must create better malware to pose a greater threat.

Adversarial AI Threat Potential

I think the threat potential of adversarial AI is far greater than today's double whammy of better phishing and more malware. Take a moment to put on your threat actor hat and envision what malicious AI might soon achieve. What tools do adversaries typically use, and can well-trained AI improve their effectiveness?

Training AI to Hack

Consider the untapped AI training potential of vulnerability scanning tools such as Acunetix or exploitation frameworks such as Metasploit. These tools automate the reconnaissance and exploitation stages of the cyber kill chain. Today, these tools require human guidance and direction. Advanced persistent threats (APTs) using them to target organizations are focused on the environment of a single victim. The tools do much of the lifting, but people must interpret the results and react accordingly. What if these programs simply supplied their information to a larger data lake for AI to ingest?
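To make the data-lake idea concrete, here is a minimal sketch of how a single tool's output could be flattened into training records. It assumes Nmap's standard XML output (nmap -oX); the file names and record schema are hypothetical illustrations, not part of any real pipeline.

```python
# Minimal sketch: flatten Nmap XML output (nmap -oX scan.xml ...) into JSON
# records suitable for bulk storage. File names and the record schema here
# are hypothetical illustrations, not part of any real pipeline.
import json
import xml.etree.ElementTree as ET

def nmap_xml_to_records(xml_path):
    """Yield one flat dict per scanned host."""
    root = ET.parse(xml_path).getroot()
    for host in root.findall("host"):
        addr = host.find("address")
        record = {
            "ip": addr.get("addr") if addr is not None else None,
            "open_ports": [],
            "os_guesses": [m.get("name") for m in host.findall("./os/osmatch")],
        }
        for port in host.findall("./ports/port"):
            state = port.find("state")
            if state is not None and state.get("state") == "open":
                service = port.find("service")
                record["open_ports"].append({
                    "port": int(port.get("portid")),
                    "protocol": port.get("protocol"),
                    "service": service.get("name") if service is not None else None,
                })
        yield record

if __name__ == "__main__":
    # Append each host record as one JSON line to a local "data lake" file.
    with open("scan_records.jsonl", "a") as out:
        for rec in nmap_xml_to_records("scan.xml"):
            out.write(json.dumps(rec) + "\n")
```

Multiply that by millions of scans and you have exactly the kind of structured corpus a model could ingest.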

We can see how LLMs use massive data sets to create effective generative AI. The GPT-3 model is advanced enough to predict likely word and phrase sequences, then construct seemingly intelligent responses to queries. AI can write an essay, haiku, or report on any topic with a simple prompt. Of course, the model doesn't actually know what it's saying. It has simply trained on language data long enough to accurately predict which word should come next in a response.
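As a rough illustration of that "predict the next word" mechanic, here is a toy bigram model in Python. The sample sentence and function names are invented for the example; real LLMs work on tokens across billions of parameters, but the core idea of counting what tends to follow what is the same.

```python
# Toy illustration of "predict which word should come next": a bigram
# frequency model built from a tiny sample corpus. Real LLMs operate on
# tokens with billions of parameters; this only shows the core idea.
from collections import Counter, defaultdict

corpus = "the attacker scans the network then the attacker exploits the host".split()

# Count which word follows which.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word` seen in the corpus."""
    if word not in following:
        return None
    return following[word].most_common(1)[0][0]

print(predict_next("the"))       # 'attacker' (seen twice after 'the')
print(predict_next("attacker"))  # 'scans' or 'exploits' (tied in this tiny corpus)
```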

Imagine an AI that trains on security exploitation as deeply as LLMs train on language. Picture an AI training on all known CVEs, the NIST Cybersecurity Framework, and the OWASP Top 10 as part of its core data set. To make this AI truly dangerous, it should also train on data lakes generated by popular hacking tools. For example, run Nmap against a few million networks and train the AI to recognize correlations between open ports, OS versions, and domains. Run Nessus vulnerability scans in thousands of environments and feed the results to the AI so it "learns" patterns of enterprise security flaws. This approach may be beyond the reach of small-time hackers but is certainly within the grasp of state-sponsored threat groups and governments.
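In classical machine-learning terms, that correlation-learning step might look something like the toy sketch below. The features, labels, and data are entirely invented for illustration, and the same pattern is what defenders already use for risk scoring.

```python
# Conceptual sketch of the training loop described above: a classifier that
# learns correlations between scan-derived host features and known outcomes.
# The feature set, labels, and data are entirely invented for illustration;
# a real effort would require massive, curated datasets.
from sklearn.ensemble import RandomForestClassifier

# Each row: [open_port_count, runs_legacy_os (0/1), exposes_smb (0/1), exposes_rdp (0/1)]
X = [
    [2, 0, 0, 0],
    [14, 1, 1, 1],
    [5, 0, 1, 0],
    [11, 1, 0, 1],
    [3, 0, 0, 0],
    [9, 1, 1, 0],
]
# Label: whether a known exploitable flaw was later confirmed on the host.
y = [0, 1, 0, 1, 0, 1]

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Score an unseen host profile; defenders can use the same idea for risk ranking.
print(model.predict_proba([[12, 1, 1, 1]]))
```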

Once this malicious AI is well-trained, it may predict vulnerable software and hardware combinations as accurately as ChatGPT chooses words. It may scan an environment, detect several problems, and return a ranked list of possible exploitation techniques. Or it could compromise the environment, then hand access over to threat actors. While better phishing and more malware present immediate problems, they are negligible compared with the potential dangers of a fully weaponized AI.

Security Through Obscurity

Fortunately, organizations can mitigate much of the threat posed by malicious AIs by hiding business infrastructure. As defined by Gartner, zero-trust network access (ZTNA) solutions offer a resilient defense. Think of a cloud-based zero-trust environment as a proxy between users and business resources. Using this configuration, apps, data, and services are hosted on a network separate from users. To access business resources, users must go through an app connector that performs extensive identity analysis. Once a user passes identity and context verifications, the proxy architecture connects them directly to the resource in the cloud.

It's important to note that users are never granted access to the network that hosts the apps, data, and services. They're only connected to the single resource requested in the verified transaction. This approach hides the rest of the network infrastructure and available resources from the secured session. There is no opportunity for an initiator to perform lateral movement or broader reconnaissance of the environment.
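A minimal sketch of that broker behavior might look like the following, with hypothetical app names and connector handles standing in for any vendor's actual API: every request is verified, and a passing user receives a handle to exactly one app, never a view of the hosting network.

```python
# Minimal sketch of the broker behavior described above: every request is
# verified, and a passing user is handed a connection to one named app only.
# Identity checks, app names, and the connector handles are hypothetical
# stand-ins, not any vendor's actual API.

APP_CONNECTORS = {
    # app name -> opaque connector handle; the hosting network is never exposed
    "payroll": "connector://segment-a/payroll",
    "wiki": "connector://segment-b/wiki",
}

USER_ENTITLEMENTS = {
    "alice": {"payroll", "wiki"},
    "bob": {"wiki"},
}

def broker_request(user, requested_app, identity_verified, context_ok):
    """Return a handle to the single requested app, or deny."""
    if not (identity_verified and context_ok):
        return None  # fails verification: no connection at all
    if requested_app not in USER_ENTITLEMENTS.get(user, set()):
        return None  # authenticated, but not entitled to this app
    # The caller receives only this one connector handle -- no routes,
    # no host list, no view of anything else behind the proxy.
    return APP_CONNECTORS[requested_app]

print(broker_request("bob", "wiki", identity_verified=True, context_ok=True))
print(broker_request("bob", "payroll", identity_verified=True, context_ok=True))  # None
```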

This approach hamstrings adversarial AI trained to discover vulnerabilities in an enterprise environment. With business infrastructure hidden behind a secure cloud-proxy service, the AI cannot see the environment or know what is exploitable. The AI's ability to find exploitation opportunities depends on having an extensive view of the hardware and software running the business infrastructure.

What about users? An adversarial AI may crack an individual endpoint, such as a laptop or home router. However, controlling these devices does little to offer AI access to the larger business environment. Compromised devices and identities must still pass context analysis before connecting to resources. Access requests will be denied if they come at an odd time or display other suspicious features. If the AI can reasonably mimic a legitimate request from a compromised identity, it is limited to accessing the handful of resources available to that user.
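The context analysis itself can be pictured as a simple policy function. The signals, thresholds, and working-hours window below are invented for illustration; real ZTNA platforms weigh far richer telemetry.

```python
# Rough sketch of the context checks mentioned above: even a valid identity
# is denied when the request looks out of character. The signals, thresholds,
# and working-hours window are invented for illustration.
from datetime import datetime

def context_ok(request_time: datetime, country: str, device_managed: bool,
               usual_countries=frozenset({"US"}), work_hours=range(7, 20)):
    """Return True only when every contextual signal looks normal."""
    if request_time.hour not in work_hours:
        return False  # request at an odd time
    if country not in usual_countries:
        return False  # unfamiliar location for this identity
    if not device_managed:
        return False  # unmanaged or unknown device posture
    return True

print(context_ok(datetime(2023, 4, 24, 10), "US", True))  # True
print(context_ok(datetime(2023, 4, 24, 3), "US", True))   # False: 3 a.m. request
```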

This identity-level ZTNA approach to access eliminates much of an organization's attack surface and minimizes the danger of what remains. Moving organizational infrastructure behind a cloud-based zero-trust exchange strips adversarial AI of its predictive, knowledge-based advantage. Organizations can achieve security through obscurity by hiding everything malicious AI knows how to exploit.

About the Author

Daniel Ballmer

Senior Transformation Analyst for the CXO REvolutionaries & Community, Zscaler

Daniel Ballmer is a Senior Transformation Analyst for the CXO REvolutionaries & Community at Zscaler. He has held writing, research, and cybersecurity positions with several organizations in the IT security industry, including Microsoft, Cylance, BlackBerry, and ShiftLeft. His specializations are CXO thought leadership, content creation, and technical research.

Dan has produced top-performing cybersecurity reports, content widely shared by news media, and highly engaging technical articles for general audiences. He earned a bachelor's degree in history from Northern Michigan University, and holds multiple technical and cybersecurity certifications from the University of Texas at Austin.
