The Emerging AI Security Threat: 4 Ways To Prepare
As AI becomes more pervasive, the technology presents a huge opportunity for cybercriminals to wreak havoc and extort organizations.
When people talk about artificial intelligence (AI) and security, the conversation almost always revolves around how AI and machine learning can be applied to fighting malware and other malicious cyberattacks that threaten enterprises.
Take, for example, a recent survey in which 700 IT executives expressed nearly unanimous enthusiasm about AI's potential to transform daily operations, products, and services at their companies. In fact, they cited detecting and blocking malware, along with predictive insights for network troubleshooting, as the AI use cases most beneficial to their organizations.
That's great, as AI indeed holds enormous promise as a way to bolster cybersecurity posture. But there's another side to the AI security discussion that's only starting to get the attention it deserves: securing AI systems themselves.
Unfortunately, AI also represents a huge opportunity for cybercriminals to wreak havoc and extort organizations for ransom as the technology becomes more pervasive throughout companies and society in general. For that reason, many experts expect breaches of AI data and models to rise in the coming years.
As a Brookings Institution report put it: "Increasing dependence on AI for critical functions and services will not only create greater incentives for attackers to target those algorithms, but also the potential for each successful attack to have more severe consequences."
Furthermore, in the case of AI-based security solutions specifically, the report said, "If we rely on machine learning algorithms to detect and respond to cyberattacks, it is all the more important that those algorithms be protected from interference, compromise, or misuse."
Most organizations, however, are still relatively early on the AI adoption curve, so they're only just starting to wrap their arms around the special security considerations of AI development and deployment.
AI attacks are different from traditional application or network breaches. Traditional breaches typically involve stealing or encrypting information, perhaps via ransomware, or taking control of a network through all-too-familiar means such as denial-of-service attacks and DNS tunneling. AI threats, by contrast, target the models themselves and the large amounts of data used to train them.
Thus, to keep AI systems secure, organizations must understand and defend against the distinctive infiltration tactics adversaries may use, such as:
Poisoning attacks, in which hackers use malware to gain access during the AI model training phase and then tamper with the learning process by injecting inaccurate or mislabeled data that degrades the trained model's accuracy (see the sketch after this list).
Model stealing, where bad actors gain access to source code repositories through phishing or weak passwords, hunt for model files, and purloin model parameters.
Data extraction attacks, in which intruders craft queries that coax a model into revealing information about its training data.
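To make the first of these tactics concrete, here's a minimal sketch of a label-flipping poisoning attack on a toy scikit-learn classifier. The dataset, model choice, and 30% flip rate are illustrative assumptions, not details from any real incident, but the effect is representative: training on tampered labels measurably degrades accuracy on a clean test set.

```python
# Minimal sketch: label-flipping data poisoning on a toy classifier.
# Dataset, model, and flip rate are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy binary-classification data with a clean held-out test set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def train_and_score(train_labels):
    """Train on the given labels; score on the untouched test set."""
    model = LogisticRegression(max_iter=1000).fit(X_train, train_labels)
    return model.score(X_test, y_test)

baseline = train_and_score(y_train)

# Simulate an attacker flipping the labels of 30% of the training rows.
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]

print(f"accuracy, clean labels:    {baseline:.3f}")
print(f"accuracy, poisoned labels: {train_and_score(poisoned):.3f}")
```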
Given these risks, it's essential that organizations don't delay rethinking their security ecosystems to safeguard AI data and models. Here are four steps to take right now.
Take Inventory
A company can't protect its AI models, algorithms, and systems unless it has a firm grasp on where they all are. Therefore, every organization should diligently develop and maintain a formal catalog of all its AI uses.
It's not easy work. "One bank made an inventory of all their models that use advanced or AI-powered algorithms and found a staggering total of 20,000," the Brookings Institution said. But the effort is well worth it.
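What might such a catalog look like in practice? Here's a minimal sketch of one possible record format; the field names and example values are illustrative assumptions, not an industry-standard schema.

```python
# Minimal sketch of an AI-use catalog entry. Field names and example
# values are illustrative assumptions, not a standard schema.
from dataclasses import asdict, dataclass, field
import json

@dataclass
class ModelRecord:
    name: str
    version: str
    owner: str                   # team accountable for the model
    business_use: str            # decision or service the model supports
    training_data_sources: list[str] = field(default_factory=list)
    deployed_endpoints: list[str] = field(default_factory=list)
    access_roles: list[str] = field(default_factory=list)

inventory = [
    ModelRecord(
        name="credit-risk-scorer",          # hypothetical model
        version="3.2.0",
        owner="risk-analytics",
        business_use="loan application scoring",
        training_data_sources=["s3://example-bucket/loans"],       # hypothetical
        deployed_endpoints=["https://api.example.com/v1/score"],   # hypothetical
        access_roles=["ml-engineers", "risk-officers"],
    ),
]

# Persist the catalog somewhere auditable, such as version control.
print(json.dumps([asdict(r) for r in inventory], indent=2))
```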
Elevate
Organizations should treat AI models not as some IT outlier but as hard assets to be tracked just as rigorously as a laptop or phone issued to an employee. Every company with customized AI models proprietary to its business needs to know their whereabouts, chronicle their usage, and understand who has access. That discipline is the only way to maintain the rigor needed to properly protect AI systems.
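One way to apply that discipline is to make every model access leave a trail. The sketch below wraps model loading in an audit-log entry recording who touched which file; the paths and logging setup are illustrative assumptions, and pickle is used purely for brevity (deserializing untrusted pickle files is itself a risk).

```python
# Minimal sketch: treat model files as tracked assets by logging every
# load with the requesting user and the file touched. Paths and logging
# setup are illustrative assumptions.
import getpass
import logging
import pickle
from pathlib import Path

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
audit = logging.getLogger("model-audit")

def load_model(path: str):
    """Load a pickled model, recording who accessed it and what they read."""
    p = Path(path)
    audit.info("model access: user=%s file=%s size=%d bytes",
               getpass.getuser(), p.resolve(), p.stat().st_size)
    # NOTE: pickle keeps this sketch short; never unpickle untrusted files.
    with p.open("rb") as f:
        return pickle.load(f)

# Hypothetical usage:
# model = load_model("models/churn-v2.pkl")
```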
Extend
AI is all about data, so companies should double down on their efforts to enforce policies, procedures, and best practices for securing all enterprise data, including the entire AI ecosystem, from development to deployment.
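As one illustration of extending everyday data-security hygiene to model artifacts, the sketch below records a SHA-256 digest for a model file at deployment time and verifies it before loading, so silent tampering is detectable. The manifest format and file paths are illustrative assumptions.

```python
# Minimal sketch: integrity-check a model artifact with a recorded
# SHA-256 digest. Manifest format and paths are illustrative assumptions.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large model artifacts fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record_digest(model_path: str, manifest: str = "manifest.json") -> None:
    """At deployment time, write the artifact's digest to a manifest."""
    Path(manifest).write_text(
        json.dumps({model_path: sha256_of(Path(model_path))}))

def verify_digest(model_path: str, manifest: str = "manifest.json") -> None:
    """Before loading, confirm the artifact still matches its digest."""
    expected = json.loads(Path(manifest).read_text())[model_path]
    if sha256_of(Path(model_path)) != expected:
        raise RuntimeError(f"{model_path} does not match its recorded digest")
```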
Innovate
As industry awareness of the AI security challenge grows, technologies and initiatives to help are bound to keep emerging. One example is Private Aggregation of Teacher Ensembles (PATE), an approach that defends AI systems from model-duplicating techniques that can leak sensitive data. Companies should make it a priority to stay abreast of such developments.
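For a flavor of how PATE works, here's a minimal sketch of its core noisy-aggregation step: several "teacher" models, each trained on a disjoint slice of the sensitive data, vote on a label, and Laplace noise is added to the vote counts before the winner is chosen. The teacher votes below are random stand-ins; a real deployment would query actual trained teachers, and the epsilon value is an illustrative assumption.

```python
# Minimal sketch of PATE's noisy vote aggregation. Teacher votes are
# random stand-ins here; epsilon and the class count are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N_TEACHERS, N_CLASSES = 25, 10

def noisy_aggregate(teacher_votes: np.ndarray, epsilon: float = 0.5) -> int:
    """Return the noisy plurality label from one vote per teacher."""
    counts = np.bincount(teacher_votes, minlength=N_CLASSES).astype(float)
    counts += rng.laplace(scale=1.0 / epsilon, size=N_CLASSES)  # DP noise
    return int(np.argmax(counts))

# Stand-in for querying real teachers on one unlabeled example.
votes = rng.integers(0, N_CLASSES, size=N_TEACHERS)
print("noisy aggregated label:", noisy_aggregate(votes))
```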
By following these four steps, organizations can start getting ahead of the AI security threat and mitigate risks that, if unchecked, could prove very destructive as AI adoption rapidly accelerates.