Critical Thinking AI in Cybersecurity: A Stretch or a Possibility?

It might still sound far-fetched to say AI can develop critical thinking skills and help us make decisions in the cybersecurity industry. But we're not far off.

Nenad Zaric, CEO & Co-founder, Trickest

August 21, 2024


COMMENTARY

Will artificial intelligence ever think for us? In 2024, when AI is still in somewhat of an early stage, this might be a loaded question. In cybersecurity, the technology doesn't go beyond automating repetitive tasks, leaving security teams to do the decision-making bit. However, AI's impressive growth in the past two years inevitably makes us wonder if, soon enough, it will be used for critical thinking activities in the sector.

This question becomes even more pressing as hackers increasingly use AI to build more sophisticated attacks. As KPMG posits, the industry must use AI to fight AI: to stay a step ahead of malicious actors, it has to fight fire with fire. That means security teams must train their AI models to be smarter than the attackers' tools, nearing critical-thinking levels to outsmart attacks.

While AI's possibilities seem limitless and AI cyberattacks are a pressing matter, we can't get ahead of ourselves. There are many improvements yet to be made, and it's up to the cybersecurity industry to channel the technology's development along the right path. Where should the industry concentrate its efforts so AI can eventually aid in critical thinking tasks?

Let's explore the current state of AI technology in cybersecurity, the obstacles facing its development, and what leaders can do to get it closer to a critical thinking stage.

What's the Current State of AI in Cybersecurity?

In the larger scope, we are still attempting to build trustworthy AI that can generate accurate answers without hallucinations (which have proven to be extremely harmful to cybersecurity). In the cybersecurity industry, AI is helping chief information security officers (CISOs) streamline workflows and helping forensic teams examine cyberattack incidents. It also provides valuable insights into new attack vectors.

Needless to say, when we talk about critical thinking technology, its purpose will be to aid humans in making decisions that require more than a yes or no answer and to go beyond the current logic we give it — analyzing angles, forecasting outcomes, and suggesting favorable choices.

For example, let's say a company receives a convincing phishing email that appears to be from its CEO requesting an urgent wire transfer of a large sum of money. Traditional AI would simply analyze keywords in the email and the sender's address; if they match the CEO's information, the message could be treated as legitimate even though the request was never actually verified.
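To make that weakness concrete, here is a minimal sketch of this kind of surface-level matching. The keyword list, threshold, and function names are illustrative, not drawn from any particular product:

```python
# A deliberately shallow check: surface features only.
SUSPICIOUS_KEYWORDS = {"urgent", "wire transfer", "confidential", "immediately"}

def naive_phishing_check(sender: str, body: str, ceo_address: str) -> str:
    """Classify the email using only keywords and the sender address."""
    if sender.lower() == ceo_address.lower():
        # A spoofed or compromised address sails straight through:
        # the match is treated as proof of legitimacy.
        return "legitimate"
    hits = sum(1 for kw in SUSPICIOUS_KEYWORDS if kw in body.lower())
    return "suspicious" if hits >= 2 else "legitimate"

print(naive_phishing_check("ceo@example.com",
                           "Urgent: wire transfer needed immediately.",
                           "ceo@example.com"))  # -> "legitimate"
```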

On the other hand, critical thinking AI would analyze the email content, verify the request, identify anomalies, and cross-reference data. This could mean the AI directly contacting the CEO to confirm the request, alerting security teams about suspicious activity, and checking the CEO's calendar to see whether the executive was even available when the email was sent.
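A rough sketch of that richer check might look like the following, with hypothetical stubs standing in for the calendar lookup, out-of-band confirmation, and anomaly detection an organization would actually wire in:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Email:
    sender: str
    body: str
    sent_at: datetime

def confirm_out_of_band(email: Email) -> bool:
    """Stub: e.g., push a confirmation request to the CEO's phone."""
    return False  # assume no confirmation arrived

def ceo_was_available(sent_at: datetime) -> bool:
    """Stub: consult the CEO's calendar for the send time."""
    return False  # assume the CEO was in a meeting

def looks_anomalous(email: Email) -> bool:
    """Stub: style, header, or timing anomaly detection."""
    return "wire transfer" in email.body.lower()

def contextual_check(email: Email) -> list[str]:
    """Collect reasons for suspicion rather than a bare yes/no."""
    reasons = []
    if not confirm_out_of_band(email):
        reasons.append("CEO did not confirm the request")
    if not ceo_was_available(email.sent_at):
        reasons.append("calendar conflicts with the send time")
    if looks_anomalous(email):
        reasons.append("content anomalies detected")
    return reasons  # any reasons -> alert the security team for human review

email = Email("ceo@example.com", "Urgent wire transfer needed.", datetime.now())
print(contextual_check(email))
```

The point of the design is that the model assembles evidence for a human analyst rather than issuing a verdict on its own.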

AI never makes any vital choices in this scenario, because the complexities of our lives, work, and decisions involve countless small factors it may not fully comprehend, at least for now. However, it does assess more data points than traditional AI and becomes more resourceful of its own accord. Ultimately, humans should monitor and confirm its decisions before anything else is done.

This constant vigilance is crucial, especially considering the ongoing arms race with cybercriminals: 93% of leaders already expect daily AI-powered cyberattacks. While the technology is being used to strengthen and secure systems, malicious actors have also found ways to refine their attacks and outsmart cybersecurity protocols — meaning leaders must keep pushing the boundaries of AI to keep platforms safe.

What Are the Most Pressing Obstacles to Building Smarter AI?

It's clear there is a long road ahead before we can trust an AI tool with decision-making in the cybersecurity world. We must start by addressing some major pain points in how we implement the technology right now, such as lack of context, data-sharing constraints, and unforeseen incidents.

AI is built on large language models (LLMs) that can process vast amounts of data, but we might fail to give it a crucial piece of information: context. AI systems often lack the detailed understanding of personal and organizational specifics needed to make accurate choices that reflect a company and its members, leading to potential misjudgments. By providing company, industry, and task-specific context, we can help it arrive at more well-rounded conclusions.
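In practice, supplying that context can be as simple as putting it in front of the model with every request. Here is a minimal sketch; the context fields are purely illustrative, and a real deployment would pull them from asset inventories, policies, and ticketing systems:

```python
# Illustrative organizational context (invented values, not a real schema).
ORG_CONTEXT = {
    "industry": "regional healthcare provider",
    "crown jewels": "patient records database on the internal subnet",
    "change policy": "wire transfers over $10k require two approvals",
}

def build_prompt(alert: str) -> str:
    """Prepend company-specific context so the model reasons about this org."""
    context = "\n".join(f"- {key}: {value}" for key, value in ORG_CONTEXT.items())
    return (
        "You are assisting this organization's security team.\n"
        f"Organizational context:\n{context}\n\n"
        f"Alert to assess:\n{alert}\n\n"
        "Explain whether this alert matters for this organization, "
        "and recommend a next step."
    )

print(build_prompt("Outbound connection from db-host-3 to an unknown IP"))
```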

Explaining the "why" will empower AI to discern the best choices in given situations.

Lastly, the technology requires an extreme level of accuracy in terms of its algorithms, data quality, and prompt specificity to achieve the desired outcome. This means training data and algorithms must be optimized continuously, and prompt engineering must be taught to all users.
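To see why prompt specificity matters, compare a vague prompt with a structured one. Both strings are hypothetical examples, not prompts from any particular tool:

```python
# Two hypothetical prompts for the same task. The second constrains the
# model's reasoning into explicit, checkable steps.
vague_prompt = "Is this email phishing?"

specific_prompt = (
    "You are a phishing analyst. For the email below, check separately:\n"
    "1. Does the display name match the return-path domain?\n"
    "2. Does the request violate our two-approval policy for wire transfers?\n"
    "3. Is the urgency manufactured?\n"
    "Answer each check with evidence, then give an overall verdict."
)
```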

What Steps Can Cybersecurity Leaders Take to Refine AI?

To fully harness AI's potential while maintaining security, there must be a way to safely provide AI with the necessary context and information. One approach is to create secure, controlled methods for feeding relevant data to AI systems, ensuring they understand an organization's specific goals, context, and security priorities. For example, automating security scans across attack surfaces can align data with security objectives. Implementing explainable AI, along with context- and scenario-building training data, can also help improve AI's critical thinking.
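One way to picture that controlled feeding is an explicit allowlist of fields that scan findings must pass through before they ever reach a model. The schema and field names below are assumptions for illustration:

```python
# Only fields the security team has approved leave the pipeline.
APPROVED_FIELDS = {"host", "port", "service", "cve_id", "severity"}

def sanitize_finding(finding: dict) -> dict:
    """Strip anything outside the approved schema before it reaches a model."""
    return {k: v for k, v in finding.items() if k in APPROVED_FIELDS}

def summarize_for_model(findings: list[dict]) -> str:
    """Flatten sanitized scan findings into a compact, model-ready summary."""
    return "\n".join(
        f"{f['host']}:{f['port']} {f['service']} {f['cve_id']} "
        f"(severity: {f['severity']})"
        for f in map(sanitize_finding, findings)
    )

findings = [{
    "host": "web-01", "port": 443, "service": "nginx",
    "cve_id": "CVE-2023-44487", "severity": "high",
    "internal_notes": "owner on vacation",  # dropped by the allowlist
}]
print(summarize_for_model(findings))
```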

And, as with anything, AI needs limits if we want the best results. These limits help rein in the tech, preventing it from going out of scope and performing actions that developers didn't anticipate. This is particularly important when considering AI agents capable of executing specific tasks within the context of LLMs. For example, imagine using AI to transfer money for a mortgage payment but instructing it with a twist: "Don't use my money, use John Doe's." The technology must be developed to resist that kind of unintended manipulation.
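Such limits work best when they live outside the model entirely. Below is a sketch of a hard guardrail around that transfer scenario; the account set, cap, and function names are invented for illustration:

```python
# Hard limits enforced outside the model: no prompt can route around them.
OWNED_ACCOUNTS = {"acct-1234"}   # accounts the authenticated user controls
TRANSFER_CAP = 5_000             # per-transaction ceiling, in dollars

class GuardrailViolation(Exception):
    pass

def execute_transfer(source_account: str, amount: float) -> None:
    # Even if a manipulated prompt says "use John Doe's account,"
    # the agent can only act within these boundaries.
    if source_account not in OWNED_ACCOUNTS:
        raise GuardrailViolation("source account not owned by requesting user")
    if amount > TRANSFER_CAP:
        raise GuardrailViolation("amount exceeds per-transaction cap")
    print(f"transferring ${amount:,.2f} from {source_account}")

execute_transfer("acct-1234", 1_800.00)      # allowed
# execute_transfer("acct-9999", 1_800.00)    # raises GuardrailViolation
```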

It might still sound wild to say AI can develop critical thinking skills and help us make decisions in the cybersecurity industry. However, we're not too far off, and developing the technology along the right path can help businesses build a smarter, more intuitive tool that goes above and beyond automation and monitoring.

About the Author

Nenad Zaric

CEO & Co-founder, Trickest

Nenad Zaric is an offensive security professional with more than 10 years of experience in penetration testing, bug bounty hunting, and security automation. He is the co-founder and CEO of Trickest, a company focused on automated offensive cybersecurity. Before founding Trickest, he found critical vulnerabilities in Fortune 500 companies such as Uber, Snapchat, Spotify, Twitter, and Airbnb.

