Are AI-Based Attacks Too Good for Security Awareness Training?
With AI, traditional security awareness training faces an existential threat. To ensure its long-term effectiveness, we have to rethink what we train individuals to recognize.
June 17, 2024
In an era where artificial intelligence (AI) continues to advance at a staggering pace, traditional security awareness training is being challenged like never before. The rise of sophisticated AI-powered threats such as smishing, vishing, deepfakes, and AI chatbot-based attacks could render this traditional human-centric approach to defense increasingly ineffective.
Today, Humans Have a Slight Advantage
Currently, security awareness training teaches individuals to spot the signs and tactics used in social engineering attacks. Consumers and employees are taught to recognize suspicious emails (phishing), dubious text messages (smishing), and manipulative phone calls (vishing). Training programs help individuals identify red flags and detect subtle inconsistencies — such as slight variations in language, unexpected requests, or minor errors in communication — to provide a critical line of defense.
A well-trained employee might notice that an email supposedly from a colleague contains unusual phrasing or that a voice message requesting sensitive information comes "from" an executive who should already have access to that information. Consumers, too, can be trained to avoid mass-produced smishing and vishing scams with some effect.
However, even the best-trained individuals are fallible. Stress, fatigue, and cognitive overload can impair judgment, making it easier for AI-driven attacks to succeed.
Tomorrow, AI Has the Advantage
Fast-forward two to three years, and AI-driven attacks will have access to more data and to larger, more capable large language models (LLMs). They will generate more convincing, context-aware interactions that mimic knowledgeable human behavior with alarming precision.
Today, AI-supported attack tools can craft emails and messages that are virtually indistinguishable from those of legitimate contacts. Voice cloning, too, can mimic the speech of virtually anyone. Tomorrow, these techniques will combine with advanced deep learning models to merge vast amounts of real-time data, spyware-gathered information, speech patterns, and more into near-perfect deepfakes, making AI-generated attacks indistinguishable from genuine human contact.
Already, AI-based attacks have advantages including:
Seamless personalization: AI algorithms can analyze vast amounts of data to tailor attacks to an individual's habits, preferences, and communication style.
Real-time adaptation: AI systems can adapt in real time, modifying their tactics based on the responses they receive. If an initial approach fails, the AI can quickly pivot, trying different strategies until it finds an attack that works.
Emotional manipulation: AI can exploit psychological human weaknesses with unprecedented precision. For instance, an AI-generated deepfake of a trusted family member in distress could convincingly solicit urgent help, bypassing rational scrutiny and triggering an immediate, emotionally driven response.
At Appdome, we're starting to see exploits in which an AI chatbot, superimposed on a mobile application via an overlay attack, engages a customer or employee in a seemingly harmless conversation. Some brands are starting to prepare for the same attack carried out via an AI-powered keyboard the victim installs on a mobile device. In either case, the overlay or keyboard can gather information on the victim, persuade the victim, present malicious choices, or act on behalf of the victim to compromise security, accounts, or transactions. Unlike today, when an individual can still detect anomalies and control the action, the future of AI-driven attacks will include autonomously crafted interactions inside applications and AI agents that act on behalf of the victim, removing the human from the attack lifecycle altogether.
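To make the overlay scenario concrete, the sketch below shows one way a mobile app might surface this class of attack on Android, using the platform's built-in obscured-touch signal. This is a minimal illustration under assumed names (SecureButton and onOverlayDetected are hypothetical), not Appdome's implementation or a complete defense:

```kotlin
import android.content.Context
import android.view.MotionEvent
import android.widget.Button

// Minimal sketch: flag touch events delivered while another window is drawn
// over this view, a common overlay/tapjacking signal on Android.
// SecureButton and onOverlayDetected are illustrative names, not a real API.
class SecureButton(context: Context) : Button(context) {

    init {
        // Ask the framework to discard touches when this window is obscured.
        filterTouchesWhenObscured = true
    }

    override fun onFilterTouchEventForSecurity(event: MotionEvent): Boolean {
        // FLAG_WINDOW_IS_OBSCURED is set when another window (for example, a
        // malicious chatbot overlay) covers this one during the touch.
        if ((event.flags and MotionEvent.FLAG_WINDOW_IS_OBSCURED) != 0) {
            onOverlayDetected()  // hypothetical hook: alert, log, or block
            return false         // drop the touch instead of acting on it
        }
        return super.onFilterTouchEventForSecurity(event)
    }

    private fun onOverlayDetected() {
        // In a real app this might warn the user, end the session, or
        // report telemetry to the brand's fraud systems.
    }
}
```

In practice, a signal like this would feed the app's broader threat-response pipeline rather than act alone.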
The Future of Security Awareness Training
As AI technology evolves, traditional security awareness training faces an existential threat, and the margin for human error is evaporating quickly. The future of security awareness training requires a multifaceted approach that combines real-time automated intervention, better cyber transparency, and AI detection with human training and intuition.
Technical Attack Intervention
Security awareness training must expand to teach individuals to recognize a genuine technical intervention by the brand or enterprise, not just the attack itself. Even if an individual can't tell a real interaction from a fake one crafted by the attacker, recognizing a system-level intervention designed to protect the user should be easier. Brands and enterprises can detect when malware, spying tools, remote-control methods, and account takeover techniques are in use, and they can use that information to intercede before any real damage is done.
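As a rough sketch of what interceding before damage is done might look like in code (illustrative types and names under assumed threat categories, not a real product API), the idea is to answer every detected threat with the same recognizable, branded intervention:

```kotlin
// Illustrative sketch, not Appdome's implementation: when on-device defenses
// detect a threat, the app intercedes with a consistent, recognizable
// response the user has been trained to expect.
sealed class ThreatSignal {
    object OverlayDetected : ThreatSignal()
    object MalwarePresent : ThreatSignal()
    object RemoteControlSession : ThreatSignal()
}

interface InterventionUi {
    // Hypothetical hook: shows the brand's standard "we stopped this" screen.
    fun showBrandedIntervention(reason: String)
}

class ThreatResponder(private val ui: InterventionUi) {
    fun onThreat(signal: ThreatSignal) {
        // Intercede before damage: explain the block in plain language...
        val reason = when (signal) {
            ThreatSignal.OverlayDetected -> "Another app is drawing over this screen."
            ThreatSignal.MalwarePresent -> "Harmful software was detected on this device."
            ThreatSignal.RemoteControlSession -> "A remote-control session is active."
        }
        // ...then show the same recognizable intervention every time, so users
        // learn to trust the real defense response and distrust imitations.
        ui.showBrandedIntervention(reason)
    }
}
```

The consistency is the point: if users always see the same genuine intervention, training them to recognize it, and to distrust anything that deviates from it, becomes tractable.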
Better Cyber Transparency
For security awareness training to thrive, organizations need to embrace better cyber transparency so users understand the expected defense response in applications or systems. Of course, this requires having robust defense technologies in applications and systems to begin with. Still, enterprise policies and consumer-facing product release notes should spell out what to expect when a threat is detected by the brand's or enterprise's defenses.
Recognizing AI and AI Agents Interacting with Apps
Brands and enterprises must deploy defense methods that detect the unique ways machines interact with applications and systems. This includes patterns in typing, tapping, recording, in-app or on-device movements, and even the systems used for these interactions. Non-human patterns can be used to trigger end-user alerts, enhance due diligence workflows inside applications, or require additional authorization steps to complete transactions.
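One of the simplest machine-interaction tells is timing regularity: scripted agents often tap or type with near-constant intervals, while humans show jitter. The heuristic below is a minimal, assumed sketch (the 0.05 cutoff and minimum event count are illustrative, not calibrated values) of how unusually uniform tap or keystroke timing might be flagged:

```kotlin
import kotlin.math.sqrt

// Minimal heuristic sketch, not a production bot detector: flag event
// streams whose inter-event timing is suspiciously machine-regular.
// Assumes timestamps are in milliseconds and chronological order.
fun looksAutomated(eventTimestampsMs: List<Long>, minEvents: Int = 10): Boolean {
    if (eventTimestampsMs.size < minEvents) return false // too little data to judge

    // Intervals between consecutive taps or keystrokes.
    val intervals = eventTimestampsMs.zipWithNext { a, b -> (b - a).toDouble() }
    val mean = intervals.average()
    val stdDev = sqrt(intervals.sumOf { (it - mean) * (it - mean) } / intervals.size)

    // Coefficient of variation: near-zero means near-perfectly uniform timing.
    val cv = if (mean > 0) stdDev / mean else 0.0
    return cv < 0.05 // assumed cutoff; human timing is rarely this uniform
}
```

A flag like this would typically raise a risk score that triggers step-up authentication or extra transaction checks, rather than blocking the user outright.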
Prepare for the AI-Powered Future
The rise of AI-powered social engineering attacks marks a significant shift in the cybersecurity landscape. If security awareness training is to remain a valuable tool in cyber defense, it must adapt to include application- and system-level interventions, better cyber transparency, and the ability to recognize automated interactions with applications and systems. By doing this, we can protect our brands and enterprises against the inevitable rise of AI-powered deception and help ensure a more secure future.
By Tom Tovar, CEO & Co-Creator, Appdome
About the Author
Tom Tovar is the CEO and co-creator of Appdome, the only fully automated unified mobile app defense platform. Today, he's a coder, hacker, and business leader. He started his career as a Stanford-educated, tech-focused corporate and securities lawyer, and he brings practical advice from serving as a board member and in C-level leadership roles at several cyber and technology companies.