6 AI-Related Security Trends to Watch in 2025

AI tools will deliver significant productivity and efficiency benefits for organizations in the coming year, but they will also exacerbate privacy, governance, and security risks.

Most industry analysts expect organizations will accelerate efforts to harness generative artificial intelligence (GenAI) and large language models (LLMs) in a variety of use cases over the next year.

Typical examples include customer support, fraud detection, content creation, data analytics, knowledge management, and, increasingly, software development. In a recent survey of 1,700 IT professionals conducted by Centient on behalf of OutSystems, 81% of respondents said their organizations currently use GenAI to assist with coding and software development. Nearly three-quarters (74%) plan to build 10 or more apps over the next 12 months using AI-powered development approaches.

While such use cases promise to deliver significant efficiency and productivity gains for organizations, they also introduce new privacy, governance, and security risks. Here are six AI-related security issues that industry experts say IT and security leaders should pay attention to in the next 12 months.

AI Coding Assistants Will Go Mainstream — and So Will Risks

Use of AI-based coding assistants, such as GitHub Copilot, Amazon CodeWhisperer, and OpenAI Codex, will move from experimental and early-adopter status to mainstream, especially among startups. The touted upsides of such tools include improved developer productivity, automation of repetitive tasks, error reduction, and faster development times. However, as with all new technologies, there are downsides as well. From a security standpoint, these include AI-generated code containing vulnerabilities, exposure of sensitive data, and the propagation of insecure coding practices.

"While AI-based code assistants undoubtedly offer strong benefits when it comes to auto-complete, code generation, re-use, and making coding more accessible to a non-engineering audience, it is not without risks," says Derek Holt, CEO of Digital.ai. The biggest is the fact that the AI models are only as good as the code they are trained on. Early users saw coding errors, security anti-patterns, and code sprawl while using AI coding assistants for development, Holt says. "Enterprises users will continue to be required to scan for known vulnerabilities with [Dynamic Application Security Testing, or DAST; and Static Application Security Testing, or SAST] and harden code against reverse-engineering attempts to ensure negative impacts are limited and productivity gains are driving expect benefits."

AI to Accelerate Adoption of xOps Practices

As more organizations work to embed AI capabilities into their software, expect to see DevSecOps, DataOps, and ModelOps — the practice of managing and monitoring AI models in production — converge into a broader, all-encompassing xOps management approach, Holt says. The push to AI-enabled software is increasingly blurring the lines between traditional declarative apps, which follow predefined rules to achieve specific outcomes, and LLM and GenAI apps, which dynamically generate responses based on patterns learned from training data. The trend will put new pressures on operations, support, and QA teams and drive adoption of xOps, he notes.

"xOps is an emerging term that outlines the DevOps requirements when creating applications that leverage in-house or open source models trained on enterprise proprietary data," he says. "This new approach recognizes that when delivering mobile or web applications that leverage AI models, there is a requirement to integrate and synchronize traditional DevSecOps processes with that of DataOps, MLOps, and ModelOps into an integrated end-to-end life cycle." Holt perceives this emerging set of best practices will become hyper-critical for companies to ensure quality, secure, and supportable AI-enhanced applications.

Shadow AI: A Bigger Security Headache

The easy availability of a wide and rapidly growing range of GenAI tools has fueled unauthorized use of the technologies at many organizations and spawned a new set of challenges for already overburdened security teams. One example is the rapidly proliferating — and often unmanaged — use of AI chatbots among workers for a variety of purposes. The trend has heightened concerns about the inadvertent exposure of sensitive data at many organizations.

Security teams can expect to see a spike in the unsanctioned use of such tools in the coming year, predicts Nicole Carignan, vice president of strategic cyber AI at Darktrace. "We will see an explosion of tools that use AI and generative AI within enterprises and on devices used by employees," she says, leading to a rise in shadow AI. "If unchecked, this raises serious questions and concerns about data loss prevention as well as compliance concerns as new regulations like the EU AI Act start to take effect." Carignan expects chief information officers (CIOs) and chief information security officers (CISOs) to come under increasing pressure to implement capabilities for detecting, tracking, and rooting out unsanctioned use of AI tools in their environments.
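
As a rough sketch of what such detection could look like, the snippet below flags proxy records pointing at known GenAI endpoints. The domain list and record fields are illustrative placeholders, not a vetted inventory or a specific vendor's approach.

    # Placeholder list; a real deployment would maintain a curated, regularly
    # updated inventory of GenAI service domains and check proxy/DNS telemetry.
    GENAI_DOMAINS = {"api.openai.com", "claude.ai", "gemini.google.com"}

    def flag_shadow_ai(proxy_records):
        """Yield (user, destination) pairs for traffic to known GenAI services."""
        for record in proxy_records:  # assumes 'user' and 'dest_host' fields
            if record["dest_host"] in GENAI_DOMAINS:
                yield record["user"], record["dest_host"]

    sample_log = [
        {"user": "jdoe", "dest_host": "api.openai.com"},
        {"user": "asmith", "dest_host": "intranet.example.com"},
    ]
    for user, dest in flag_shadow_ai(sample_log):
        print(f"Possible unsanctioned GenAI use: {user} -> {dest}")

Matching on destination hosts is only a starting point; mature programs pair this kind of visibility with data loss prevention controls and clear acceptable-use policies.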

AI Will Augment, Not Replace, Human Skills

AI excels at processing massive volumes of threat data and identifying patterns in that data. But for some time at least, it remains at best an augmentation tool that is adept at handling repetitive tasks and enabling automation of basic threat detection functions. The most successful security programs over the next year will continue to be ones that combine AI's processing power with human creativity, according to Stephen Kowski, field CTO at SlashNext Email Security+.

Many organizations will continue to require human expertise to identify and respond to real-world attacks that evolve beyond the historical patterns AI systems are trained on. Effective threat hunting will continue to depend on human intuition and skill to spot subtle anomalies and connect seemingly unrelated indicators, he says. "The key is achieving the right balance where AI handles high-volume routine detection while skilled analysts investigate novel attack patterns and determine strategic responses."
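
A simplified sketch of that division of labor, using invented thresholds and field names, might route alerts along these lines:

    # Hypothetical triage rule: let automation close routine, well-understood
    # matches and reserve analyst time for novel or ambiguous activity.
    def route_alert(alert):
        if alert["matches_known_signature"] and alert["anomaly_score"] < 0.3:
            return "auto-remediate"   # routine, high-confidence pattern
        if alert["anomaly_score"] >= 0.8 or alert["novel_indicators"]:
            return "analyst-queue"    # needs human intuition and context
        return "enrich-and-hold"      # gather more telemetry before deciding

    print(route_alert({"matches_known_signature": False,
                       "anomaly_score": 0.91,
                       "novel_indicators": True}))  # -> analyst-queue

The exact thresholds matter less than the principle: automation absorbs volume, while ambiguity and novelty get escalated to people.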

AI's ability to rapidly analyze large datasets will heighten the need for cybersecurity workers to sharpen their data analytics skills, adds Julian Davies, vice president of advanced services at Bugcrowd. "The ability to interpret AI-generated insights will be essential for detecting anomalies, predicting threats, and enhancing overall security measures." Prompt engineering skills are going to be increasingly useful as well for organizations seeking to derive maximum value from their AI investments, he adds.

Attackers Will Leverage AI to Exploit Open Source Vulns

Venky Raju, field CTO at ColorTokens, expects threat actors will leverage AI tools to exploit vulnerabilities and automatically generate exploit code in open source software. "Even closed source software is not immune, as AI-based fuzzing tools can identify vulnerabilities without access to the original source code. Such zero-day attacks are a significant concern for the cybersecurity community," Raju says.

In a report earlier this year, CrowdStrike pointed to AI-enabled ransomware as an example of how attackers are harnessing AI to hone their malicious capabilities. Attackers could also use AI to research targets, identify system vulnerabilities, encrypt data, and easily adapt and modify ransomware to evade endpoint detection and response mechanisms.

Verification, Human Oversight Will Be Critical

Organizations will continue to find it hard to fully and implicitly trust AI to do the right thing. A recent Qlik survey of 4,200 C-suite executives and AI decision-makers showed respondents overwhelmingly favor using AI in a variety of roles. At the same time, 37% described their senior managers as lacking trust in AI, with 42% saying the same of mid-level managers. Some 21% reported their customers as distrusting AI as well.

"Trust in AI will remain a complex balance of benefits versus risks, as current research shows that eliminating bias and hallucinations may be counterproductive and impossible," SlashNext's Kowski says. "While industry agreements provide some ethical frameworks, the subjective nature of ethics means different organizations and cultures will continue to interpret and implement AI guidelines differently." The practical approach is to implement robust verification systems and maintain human oversight rather than seeking perfect trustworthiness, he says.

Davies from Bugcrowd says there's already a growing need for professionals who can handle the ethical implications of AI. Their role is to ensure privacy, prevent bias, and maintain transparency in AI-driven decisions. "The ability to test for AI’s unique security and safety use cases is becoming critical," he says.

About the Author

Jai Vijayan, Contributing Writer

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year career at Computerworld, Jai also covered a variety of other technology topics, including big data, Hadoop, Internet of Things, e-voting, and data analytics. Prior to Computerworld, Jai covered technology issues for The Economic Times in Bangalore, India. Jai has a Master's degree in Statistics and lives in Naperville, Ill.
