A Golden Age of AI … or Security Threats?

Now is the time to build safeguards into nascent AI technology.

Joey Stanford, Vice President of Privacy & Security, Platform.sh

July 5, 2023

Are we in a golden age of artificial intelligence? It's tough to make predictions — and probably ill-advised. Who wants to predict the future of such a new technology?

We can, however, say some things for certain. A great deal is being made of AI's application to creative works, from voice acting to first drafts of screenplays. But AI is likely to be most useful when dealing with drudgery. This is good news for developers: if the promise of early experiments holds, first drafts of code can be generated quickly, ready for developers to tweak and iterate on.

However, it's important to remember that not every coder is working for a legitimate business. Just as cybersecurity threat actors have emulated their targets by becoming more businesslike, they're also adopting new technologies and techniques. We can expect AI to help in the development of malware and other threats in the coming years, whether we're entering a golden age of AI or not.

Drafting Code and Scams

One trend we've seen in recent years is the rise of "as-a-service" offerings. Early hackers were tinkerers and mischief-makers, tricking phone systems or causing chaos mostly as an exercise in fun. That has fundamentally changed. Threat actors are now professional and often sell their products for others to use.

AI will fit neatly into this way of working. Able to generate code to tackle specific problems, AI can amend code to target vulnerabilities, or take existing malicious code and rewrite it so it isn't easily detected by security measures that look for specific patterns.

But the possibilities for AI's misuse don't stop there. Many phishing emails are caught by effective filtering tools and end up in junk folders. Those that do reach the inbox are often obvious scams, written so badly they're borderline incomprehensible. But AI could break this pattern, generating thousands of plausible emails that evade detection and are well-written enough to fool both filters and end users.

Spear-phishing, the more targeted form of this attack, could also be revolutionized by this tech. Sure, it's easy to ignore an email from your boss asking you to wire cash or urgently buy gift cards — cybersecurity training helps employees avoid this sort of scam. But what about a deep-fake phone call or video chat? AI has the potential to take broadcast appearances and podcasts and turn them into a convincing simulacrum, something far harder to ignore.

Fighting Back Against AI Cyberattacks

There are two main ways to fight back against the advantages that AI will confer on attackers: better AI and better training. Both will be necessary.

The advent of this new generation of AI has started a new arms race. As cybercriminals use it to evolve their attacks, security teams will need to use it to evolve their defenses.

Without AI, defenses rely on overworked people and on monitoring for certain preprogrammed patterns to prevent attacks. AI defensive tools will be able to predict attack vectors and pinpoint sensitive areas of the network and systems. They'll also be able to analyze malicious code, allowing a better understanding of how new attacks work and how they can be prevented.
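
As a toy illustration of that shift, the Python sketch below (using scikit-learn, with invented signatures, traffic features, and thresholds) contrasts a fixed signature check with an anomaly detector that learns a baseline of normal behavior and flags deviations. It's a sketch of the idea, not a production intrusion-detection system.

    # Minimal sketch: signature matching vs. learned anomaly detection.
    # Signatures, feature values, and thresholds are invented for illustration.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    KNOWN_BAD_SIGNATURES = {"cmd.exe /c powershell", "base64 -d | sh"}  # hypothetical

    def signature_match(payload: str) -> bool:
        # Classic approach: only catches patterns someone already wrote down.
        return any(sig in payload for sig in KNOWN_BAD_SIGNATURES)

    # Anomaly approach: learn what "normal" looks like, then flag deviations.
    # Each row is one connection: [bytes_sent, bytes_received, duration_seconds].
    rng = np.random.default_rng(0)
    normal_traffic = rng.normal(loc=[500, 1500, 2.0],
                                scale=[100, 300, 0.5],
                                size=(1000, 3))

    detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

    suspicious = np.array([[50_000, 20, 0.1]])  # large upload, tiny reply, very fast
    print(detector.predict(suspicious))         # -1 = anomaly, 1 = normal

The signature check can only catch what someone has already written down; the learned model can flag traffic that merely looks unusual, which is what makes this style of defense better suited to novel, machine-generated attacks.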

AI could also work, in a pinch, as an emergency stop — disabling the network upon detecting a breach and locking down the entire system. While not ideal from a business continuity perspective, this could be far less damaging than a data breach.
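
A crude version of that emergency stop might look like the sketch below, where detect_breach() and isolate_host() are hypothetical placeholders for a real IDS/EDR or firewall integration; the point is the shape of the automated response, not the specific calls.

    # Minimal sketch of an automated "emergency stop": isolate hosts when a
    # breach indicator fires. detect_breach() and isolate_host() are
    # hypothetical stand-ins for a real EDR/firewall integration.
    import logging

    logging.basicConfig(level=logging.INFO)

    def detect_breach(host: str) -> bool:
        """Placeholder: in reality, query your IDS/EDR for indicators."""
        return host == "db-01"  # pretend one host shows signs of compromise

    def isolate_host(host: str) -> None:
        """Placeholder: in reality, push a block rule to the firewall/EDR."""
        logging.warning("Isolating %s from the network", host)

    def emergency_stop(hosts: list[str]) -> None:
        compromised = [h for h in hosts if detect_breach(h)]
        if compromised:
            # Lock down first, investigate second: downtime beats exfiltration.
            for host in compromised:
                isolate_host(host)

    emergency_stop(["web-01", "db-01", "cache-01"])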

But fighting AI with AI is not the only answer. We also need human intelligence. No matter how smart and targeted the attack, the best defense against phishing is an employee or customer who knows what to look for and is suspicious enough not to take the bait. Implementing robust security policies and best-practice cyber hygiene will remain key to defending against attacks.

This means that training will need to be updated to include the signs of an AI attack … whatever those might be. Training will need to evolve with AI — a single training course every few years isn't going to cut it anymore when that training is quickly out of date.

While the possible signs of an AI-driven cyberattack are changing rapidly, such attacks are generally:

  • Fast and scalable, exploiting multiple vulnerabilities in a short time span.

  • Adaptive and evasive, changing tactics and techniques to avoid detection and response.

  • Targeted and personalized, using AI to craft convincing phishing emails or social engineering campaigns.

  • Deceptive and manipulative, using AI to create fake or altered content such as deepfakes, voice cloning, or text generation.

  • Stealthy and persistent, hiding in the network infrastructure for a long time without being noticed.

These signs aren't exhaustive, and some AI-driven attacks may not exhibit all of them. However, they indicate the level of threat that AI poses to cybersecurity.

To effectively fight AI-driven cyberattacks, businesses must take a risk-based approach and think beyond individual bad actors, preparing for coordinated campaigns by state-sponsored actors or criminal organizations that may use AI to launch sophisticated attacks. They should also have a proactive strategy that includes regular security audits, backups, encryption, and incident response plans. This is most easily accomplished by working toward a well-known security standard such as PCI DSS.

Finally, it's imperative that organizations improve the cybersecurity of their own AI systems by ensuring their integrity, confidentiality, and availability, and by mitigating the risks of adversarial attacks, data poisoning, and model stealing.
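
As one concrete example of what mitigating data poisoning can look like in practice, the sketch below (synthetic data, assumed scikit-learn tooling) flags training labels that disagree with their nearest neighbors. It's a simple triage heuristic, not a complete defense.

    # Minimal sketch: flag potentially poisoned training labels by checking
    # whether each label agrees with its nearest neighbors. Synthetic data;
    # a heuristic for triage, not a complete defense.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] > 0).astype(int)   # clean labels follow a simple rule
    y[:5] = 1 - y[:5]               # simulate an attacker flipping a few labels

    # Each point votes with its 7 nearest neighbors (itself included); a lone
    # flipped label is usually outvoted by its genuine neighbors.
    knn = KNeighborsClassifier(n_neighbors=7).fit(X, y)
    disagree = knn.predict(X) != y

    print(f"{disagree.sum()} suspicious labels flagged for human review")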

These strategies will help protect businesses, but they shouldn't stand alone; security should be collaborative. By collaborating with other organizations, researchers, and authorities to share information, best practices, and lessons learned from failures, businesses will be better prepared for the new wave of AI security threats.

AI is both a new threat and a continuation of older threats. Businesses will have to evolve how they tackle cyber threats as those threats become more sophisticated and more numerous, but many of the fundamentals remain the same. Getting these right remains critical. Security teams don't need to pivot away from old ideas; they need to build on them to keep their businesses safe.

About the Author

Joey Stanford

Vice President of Privacy & Security, Platform.sh

Joey Stanford brings more than 30 years of experience to his role as the VP of Privacy and Security at Platform.sh. Prior to joining Platform.sh, he managed information security and DevOps programs for companies in the US, France, and the UK. With a passion for free and open source software, Stanford is responsible for global security, data management and compliance, and for ensuring Platform.sh is a trusted custodian of its customers' data.
