Why Artificial Intelligence Is Not a Silver Bullet for Cybersecurity

Like any technology, AI and machine learning have limitations. Three of the biggest are detection, power, and people.

Tomas Honzak, Director, Security and Compliance, GoodData

July 20, 2018

5 Min Read

A recent Cisco survey found that 39% of CISOs say their organizations are reliant on automation for cybersecurity, another 34% say they are reliant on machine learning, and 32% report they are highly reliant on artificial intelligence (AI). I'm impressed by the optimism these CISOs have about AI, but good luck with that: I think it's unlikely that AI will be useful for much beyond spotting malicious behavior.

To be fair, AI definitely has a few clear advantages for cybersecurity. With malware that self-modifies like the flu virus, it would be close to impossible to develop a response strategy without using AI. It's also handy for financial institutions such as banks or credit card providers that are always on the hunt for ways to improve their fraud detection and prevention; once properly trained, AI can significantly enhance their SIEM systems. But AI is not the cybersecurity silver bullet that everyone wants you to believe it is. In reality, like any technology, AI has its limitations.
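To make the fraud-detection point concrete, here's a minimal sketch of the kind of anomaly detection that can feed a SIEM's fraud alerts, using scikit-learn's IsolationForest. The features, thresholds, and data below are hypothetical toy values; a real system would train on historical transaction logs and tune the contamination rate carefully.

```python
# Toy anomaly detector for card transactions (hypothetical features/data).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" transactions: [amount_usd, hour_of_day, distance_from_home_km]
normal_txns = np.column_stack([
    rng.lognormal(3.5, 0.8, 5000),   # typical purchase amounts
    rng.normal(14, 4, 5000) % 24,    # mostly daytime activity
    rng.exponential(10, 5000),       # mostly close to home
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_txns)

# Score an incoming transaction: large amount, 3 a.m., far from home.
suspect = np.array([[4200.0, 3.0, 950.0]])
print(model.predict(suspect))        # -1 => flagged as anomalous
print(model.score_samples(suspect))  # lower score => more anomalous
```

Once a detector like this is trained, its scores can be forwarded to the SIEM as one more signal alongside rules and signatures.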

1. Fool Me Once: AI Can Be Used to Fool Other AIs
This is the big one for me. If you're using AI to better detect threats, there's an attacker out there who had the exact same thought. Where a company is using AI to detect attacks with greater accuracy, an attacker is using AI to develop malware that's smarter and evolves to avoid detection. Basically, the malware escapes being detected by an AI ... by using AI. Once attackers make it past the company's AI, it's easy for them to remain unnoticed while mapping the environment, behavior that the company's AI may dismiss as a statistical error. Even when the malware is finally detected, security has already been compromised and the damage may already be done.
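To illustrate the cat-and-mouse dynamic, here's a toy sketch of evasion against a deliberately simple linear detector: for a linear model, the gradient of the malicious score with respect to the input is just the weight vector, so an attacker who can probe the model can nudge a sample's features until the label flips. The features and data are hypothetical; real detectors and real evasion attacks are far more sophisticated.

```python
# Toy evasion attack against a linear malware classifier (hypothetical features).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Features: [entropy, num_suspicious_api_calls, packed_section_ratio]
benign = rng.normal([5.0, 2.0, 0.1], 0.5, (500, 3))
malware = rng.normal([7.5, 9.0, 0.8], 0.5, (500, 3))
X = np.vstack([benign, malware])
y = np.array([0] * 500 + [1] * 500)

detector = LogisticRegression().fit(X, y)

sample = malware[0].copy()
print("before:", detector.predict([sample])[0])  # 1 => detected

# Step against the weight vector until the predicted label flips.
w = detector.coef_[0]
step = 0.1 * w / np.linalg.norm(w)
for _ in range(200):
    if detector.predict([sample])[0] == 0:
        break
    sample -= step

print("after: ", detector.predict([sample])[0])  # 0 => evaded
```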

2. Power Matters: With Low-Power Devices, AI Might Be Too Little, Too Late
Internet of Things (IoT) devices are typically low-powered and generate only small amounts of data. If an attacker manages to deploy malware at this level, then chances are that AI won't be able to help. AI needs a lot of memory, computing power, and, most importantly, big data to run successfully. There is no way this can be done on an IoT device; the data has to be sent to the cloud for processing before the AI can respond, and by then it's already too late. It's like your car calling 911 for you and reporting your location at the time of the crash: you've still crashed. It might report the crash a little faster than a bystander would have, but it didn't do anything to actually prevent the collision. At best, AI might help detect that something is going wrong before you lose control over the device or, in the worst case, over your whole IoT infrastructure.
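The design implication, sketched below with made-up timings, is that the only check that can act within the attack window is a lightweight heuristic running on the device itself; anything that depends on a cloud round trip arrives after the fact.

```python
# On-device spike detection vs. cloud-side AI (all timings hypothetical).
from collections import deque

CLOUD_ROUND_TRIP_MS = 400  # hypothetical uplink + inference + response time
ATTACK_WINDOW_MS = 50      # hypothetical time the exploit needs to finish

history = deque(maxlen=30)  # short rolling baseline the device can afford

def on_device_check(msgs_per_sec):
    """Cheap local heuristic: flag a sudden spike vs. the rolling baseline."""
    baseline = sum(history) / len(history) if history else msgs_per_sec
    history.append(msgs_per_sec)
    return msgs_per_sec > 5 * max(baseline, 1)

for rate in [10, 12, 11, 9, 10, 480]:  # normal traffic, then a burst
    if on_device_check(rate):
        print(f"local alert at {rate} msg/s, before any cloud round trip")

print(f"a cloud-side model would answer ~{CLOUD_ROUND_TRIP_MS} ms later; "
      f"the attack needs only ~{ATTACK_WINDOW_MS} ms")
```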

3. The Known Unknown: AI Can't Analyze What It Does Not Know
While AI is likely to work quite well over a strictly controlled network, the reality is much more colorful and much less controlled. AI's Four Horsemen of the Apocalypse are the proliferation of shadow IT, bring-your-own-device programs, software-as-a-service systems, and, as always, employees. Regardless of how much big data you have for your AI, you need to tame all four of these simultaneously, which is a difficult, if not impossible, task. There will always be a situation where an employee catches up on Gmail-based company email from a personal laptop over an unsecured Wi-Fi network and, boom, there goes your sensitive data without AI ever getting the chance to know about it. In the end, your own application might be protected by AI that prevents you from misusing it, but how do you secure it for an end user who might be working from a device you weren't even aware of? And how do you introduce AI to a cloud-based system that offers only smartphone apps and no corporate access control, not to mention real-time logs? There's simply no way for a company to successfully employ machine learning in this type of situation.
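That's why visibility has to come before AI. A minimal sketch of the unglamorous first step: diff the devices actually seen on the network against the managed-asset inventory, so you at least know what your AI isn't covering. The data sources and device IDs here are hypothetical.

```python
# Flag devices on the network that aren't in the managed-asset inventory.
managed_inventory = {
    "aa:bb:cc:00:00:01": "laptop-finance-01",
    "aa:bb:cc:00:00:02": "laptop-eng-07",
}

# In practice this list would be parsed from DHCP/ARP logs or a NAC feed.
seen_on_network = [
    "aa:bb:cc:00:00:01",
    "aa:bb:cc:00:00:02",
    "de:ad:be:ef:12:34",  # personal laptop nobody registered
]

for mac in seen_on_network:
    if mac not in managed_inventory:
        print(f"unmanaged device on network: {mac} -- outside AI coverage")
```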

AI does help, but it's not a game changer. AI can detect malware or an attacker in a system it controls, but it does little to prevent malware from being distributed through company systems, and it can't help at all unless it covers every one of your endpoint devices and systems. We're still fighting the same battle we've always fought; we, and the attackers, are just using different weapons, and the defenses we have are effective only when properly deployed and managed.

Rather than looking to AI as the Cyber Savior, we need to keep the focus on the same old boring problems we've always had: the lack of control, the lack of monitoring, and the lack of understanding of potential threats. Only by understanding who your users are, which devices they use, and for what purposes, and by ensuring the systems they use can actually be protected, can you start deploying and training AI.


About the Author

Tomas Honzak

Director, Security and Compliance, GoodData

Tomáš Honzák serves as the head of security, privacy, and compliance at GoodData, where he built an information security management system compliant with security and privacy standards and regulations such as SOC 2, HIPAA, and the U.S.-EU Privacy Shield. This enables the company to help Fortune 500 companies distribute customized analytics to their business ecosystems.
