3 Ways Hackers Use ChatGPT to Cause Security Headaches

As ChatGPT adoption grows, the industry needs to proceed with caution. Here's why.

Ron Reiter, Co-Founder & CTO, Sentra

May 18, 2023


With ChatGPT making headlines everywhere, it feels like the world has entered a Black Mirror episode. While some argue artificial intelligence will be the ultimate solution to our biggest cybersecurity issues, others say it will introduce a whole slew of new challenges.

I'm on the side of the latter. While I recognize that ChatGPT is an amazing piece of technology, it is also an enabler for hackers, commoditizing nation-state capabilities for the benefit of "script kiddies," aka unsophisticated hackers. Beyond generating text, the technology opens up a scary scenario in which a computer can be directed to hunt for information in images that humans can't immediately pick up but machines are sensitive enough to detect: reflections of passwords on glass, for example, or people in the background of photos who would go unnoticed without the help of AI.

As ChatGPT adoption grows, I believe the industry needs to proceed with caution, and here's why. Hackers can use ChatGPT for three kinds of capability: mass phishing, reverse engineering, and smart malware. Let's take a look at each in detail.

Mass Phishing

Because ChatGPT is so powerful, it can reduce the time it takes to create handcrafted, personalized emails for a list of people from a few days to just minutes. At the click of a button, it can answer very specific questions and use its knowledge to impersonate both security experts and non-security personnel. And because ChatGPT can translate text into any style of writing and proofread at a very high level, once a list of employees and their details is obtained, it's easy to mass-produce emails in which a hacker pretends to be someone else, increasing the chances of a successful attack.
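To make the scale concrete, here is a minimal sketch of that mail-merge pattern, framed for the defensive use case of generating simulated phishing emails for an internal awareness exercise. The employee record, prompt, and model name are illustrative assumptions, not anything from this article:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical employee records a red team might assemble from public sources
employees = [
    {"name": "Dana Levy", "role": "accounts-payable clerk",
     "detail": "processes invoices on Fridays"},
]

for person in employees:
    prompt = (
        f"Write a short, polite email to {person['name']}, an {person['role']} "
        f"who {person['detail']}, asking them to review an attached invoice. "
        "Match the tone of routine corporate correspondence."
    )
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption; any chat-capable model works
        messages=[{"role": "user", "content": prompt}],
    )
    print(reply.choices[0].message.content)
```

A loop like this turns one template into hundreds of individually tailored messages in minutes, which is exactly why style alone is no longer a reliable authenticity signal.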

Phishing is an essential part of hacking organizations, whether the goal is to gain access to an organization's servers or to convince people to transfer money. To combat this, business leaders must educate employees on the security implications of ChatGPT and how to spot potential attacks. Employees should be especially critical of text and never assume something comes from an authentic source. Instead of blindly trusting anyone, employees should put their trust in other mechanisms, such as checking whether an email actually came from the company's server or whether it carries a valid signature. My biggest piece of advice is for employees to rely on means of verification other than the style of the text itself.
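As one concrete example of verifying by mechanism rather than by style, here is a minimal sketch that inspects the Authentication-Results header most mail servers stamp on inbound messages, confirming that SPF, DKIM, and DMARC all passed. Header formats vary by provider, so treat this as an assumption-laden illustration rather than production logic:

```python
from email import message_from_string

def passes_auth_checks(raw_message: str) -> bool:
    """Trust the transport-level checks, not the writing style."""
    msg = message_from_string(raw_message)
    results = msg.get("Authentication-Results", "").lower()
    # Require SPF, DKIM, and DMARC to all report "pass".
    return all(f"{check}=pass" in results for check in ("spf", "dkim", "dmarc"))

raw = (
    "Authentication-Results: mx.example.com; spf=pass; dkim=pass; dmarc=pass\n"
    "From: finance@example.com\n"
    "Subject: Invoice review\n"
    "\n"
    "Please review the attached invoice.\n"
)
print(passes_auth_checks(raw))  # True for this well-authenticated example
```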

Reverse Engineering

ChatGPT is amazing at understanding code, even machine code. Given the binary or obfuscated code of a system, ChatGPT can explain how the code works and what it does, making it easy for hackers to manipulate the software and gain access to a company's servers.
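To illustrate how low the barrier has become, here is a minimal sketch that pastes a tiny disassembly listing into the chat API and asks for a plain-English explanation. The snippet, prompt, and model name are illustrative assumptions:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A tiny, made-up x86 routine; real targets would be far longer.
disassembly = """
mov eax, [ebp+8]
xor eax, 0x5A
ret
"""

reply = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption; any chat-capable model works
    messages=[{
        "role": "user",
        "content": "Explain in plain English what this x86 routine does:\n"
                   + disassembly,
    }],
)
print(reply.choices[0].message.content)
```

What once required a specialist reading disassembly line by line now takes a single API call and a copy-paste.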

Reverse engineering used to be a very rare and highly lucrative skill; historically, only nation-states could incorporate it into their operations. This is now something that can be done by the most basic hackers.

Smart Malware

ChatGPT can function as a mini-brain for malware, making it completely autonomous. Today, sophisticated malware lets a hacker tunnel into a company's network, observe the compromised machines, and send commands to extract information. This operation usually requires the hacker to stay connected to malware he or she has managed to install somewhere within the victim's network or servers.

With ChatGPT, the malware can make decisions autonomously: it understands where the interesting data is, where passwords might be stored locally, and even how to connect to the data sources and extract the data automatically. ChatGPT can sift through much more data than a human, and in some cases do it better, which makes malware that uses ChatGPT far more dangerous than simple malware a hacker must connect to and operate. And because it is completely autonomous, with no need for a hacker to control it and manually steer it to the right place, it is not only more dangerous but likely to become more common. That means more companies are at risk of being attacked opportunistically rather than specifically targeted.
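The triage step is easy to picture. Here is a minimal sketch, shown from the defender's side, of asking an LLM to rank which file paths most likely hold secrets; autonomous malware would abuse the same capability in reverse. The paths and model name are illustrative assumptions:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative paths; a real scan would enumerate the filesystem.
paths = [
    "/srv/app/static/logo.png",
    "/srv/app/config/database.yml",
    "/home/dev/.aws/credentials",
    "/var/log/nginx/access.log",
]

reply = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption; any chat-capable model works
    messages=[{
        "role": "user",
        "content": "Rank these file paths by how likely they are to contain "
                   "secrets or sensitive data, most likely first:\n"
                   + "\n".join(paths),
    }],
)
print(reply.choices[0].message.content)
```

The same judgment a human operator once applied by hand, spotting that a credentials file matters and an access log mostly doesn't, can now run unattended at machine speed.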

We've already seen Samsung suffer one of the first ChatGPT-related security incidents, and it surely won't be the last company to do so. This serves as a good reminder that companies across all industries must stay vigilant, training their employees to understand the cybersecurity risks of ChatGPT, not only when they are using it themselves but when somebody else may be using it against them. Over time, I expect (and encourage) investment priorities to change so that privacy and security teams can ensure their organizations are not vulnerable to the negative ramifications of ChatGPT.

About the Author

Ron Reiter

Co-Founder & CTO, Sentra

Ron Reiter is a Co-Founder and CTO at Sentra, a cloud data security company. He is an experienced entrepreneur who sold his company to Oracle in 2016 and went on to invest in over a dozen new startups. After serving in Unit 8200, Ron spent 15 years in various management positions in data engineering, cybersecurity, and cloud infrastructure.

