Is AI a Friend or Foe of Healthcare Security?

When it comes to keeping patient information safe, people empowerment is just as necessary as deploying new technologies.

Claudio Gallo, Lead Security Engineer, C4HCO

February 12, 2025


COMMENTARY

Some say artificial intelligence (AI) has changed healthcare in ways we couldn't have imagined just a few years ago. It's now used for everything from paperwork to helping doctors make better diagnoses. But like any new tech, there are risks involved.

Currently, AI is both a potent defense mechanism and an enabler of attackers. So the question is clear: Is AI an enemy or a friend of cybersecurity in healthcare? Honestly, the answer is both.

AI as the Defender: Enhancing Healthcare Security

Healthcare systems are rich targets for malicious actors, with considerable protected health information (PHI) spread across interconnected assets such as electronic health records, Internet of Things (IoT)-enabled medical devices, and telehealth platforms. Traditional cybersecurity tools often lack the resources and features required to protect such complex ecosystems and, as in other industries, struggle to keep pace with both the volume of data being generated and evolving attack methodologies.

The advantage of machine learning algorithms is that they can detect potential threats before they become serious incidents. AI-powered security tools can detect anomalies in system behavior, such as unauthorized data transfers or suspicious login activity, and thus proactively prevent a breach. Indeed, several hospitals using AI-powered systems have been able to avert ransomware attacks and maintain operational integrity and patient safety.
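To make the anomaly-detection idea concrete, here is a minimal sketch of the simplest possible approach: flag login sessions whose data-transfer volume deviates sharply from a historical baseline. All names, values, and the 3-sigma threshold are hypothetical illustrations, not a description of any product mentioned above; real AI-powered tools use far richer models and features.

```python
import statistics

def flag_anomalous_logins(baseline_bytes, new_events, threshold=3.0):
    """Return events whose transferred-byte volume lies more than
    `threshold` standard deviations above the historical mean."""
    mean = statistics.mean(baseline_bytes)
    stdev = statistics.stdev(baseline_bytes)
    return [e for e in new_events if (e["bytes"] - mean) / stdev > threshold]

# Hypothetical baseline: bytes transferred in past routine EHR sessions
baseline = [12_000, 15_500, 11_800, 14_200, 13_900, 12_700, 15_100, 13_400]

events = [
    {"user": "nurse_station_3", "bytes": 14_800},  # within normal range
    {"user": "svc_backup", "bytes": 2_400_000},    # bulk, exfiltration-sized
]

print(flag_anomalous_logins(baseline, events))
# Only the outsized transfer is flagged for review
```

A real deployment would score many features at once (time of day, source host, access patterns) and learn the baseline continuously, but the principle is the same: model normal behavior, then surface the outliers for a human to investigate.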

AI also plays a critical role in reducing administrative burdens and supporting compliance with the Health Insurance Portability and Accountability Act (HIPAA) and other regulations. AI-powered tools, such as virtual assistants and data processing systems, take over administrative work while safeguarding sensitive data. These tools protect PHI and free human resources to focus on patient care.

AI as the Enabler of Cyber Threats

While AI hardens defenses, it turbocharges attackers, too, and cyber threats in healthcare have become increasingly sophisticated as a result. Generative AI tools changed the game by letting attackers create convincingly realistic, tailor-made phishing emails with perfect grammar and formatting that slip through traditional security filters.

Deepfakes add another layer to these deceptions: hyperreal audio and video that make an attacker sound and look like senior health leaders or other trusted voices. These fabrications have been used to deceive staff into granting unauthorized access, sharing PHI, and even making fraudulent financial transactions. In some cases, attackers have used deepfakes to spread false medical information or to undermine public confidence, further destabilizing an already complex threat landscape.

AI-powered malware leverages machine learning to make live changes, evade traditional detection, and zero in on critical systems, such as IoT-enabled devices and electronic health records. Attackers manipulate diagnostic data, alter medical imaging, and gain entry through vulnerabilities in lightly secured IoT devices, creating avenues for coordinated attacks. The combination of AI and IoT threatens not just financial losses but patient safety and trust in healthcare systems.

AI-powered threats sound an alarm for information security, IT, and healthcare leaders. These risks are reshaping the cybersecurity landscape. Preemptive defense requires advanced AI tools, employee training, and collaboration across cross-functional teams. That, in turn, means reviewing policies and detection systems to give top priority to countering AI-driven social engineering and malware. Staying one step ahead of bad actors requires constant vigilance, innovative thinking, and a core commitment to data safety and patient care.

Balancing AI's Potential with Realistic Implementation

As an expert or executive, you face the critical decision of managing the promise of AI and the risks it introduces into an already complex cybersecurity landscape. AI is not the Holy Grail; it's a tool that can be used for and against us. AI's transformative potential in healthcare and security depends on how it is implemented, so leaders must approach its adoption with a balanced perspective. They should be excited yet cautious, knowing full well that attackers are leveraging the very same technology to undermine our systems, data, and trust.

In my experience, the excitement around adopting AI tools like transcript generators, grammar checkers, or automated note-taking systems often takes precedence over critical security assessments. I have seen teams advocate for rapid implementation to save time and resources without assessing the risks; common questions such as where the data is stored, how it is processed, or if the vendor is compliant often are not asked. This rush to embrace convenience creates gaps that attackers can exploit, especially in healthcare, where even minor oversights can lead to significant breaches of PHI or personally identifiable information (PII).

Deepfakes, adaptive malware, and the exploitation of IoT devices, all powered by AI, demand a new way of thinking: one that moves beyond legacy defenses, or even leading-edge AI-powered tools on their own, and places those tools within a broader proactive security framework encompassing audits, employee training, and reliable governance. For that to happen, health workers and administrators must be empowered to recognize sophisticated attacks, whether a faked video call or an unexpected data transfer that AI has flagged. People empowerment is just as necessary as deploying new technologies.

Drive collaboration between IT, security, and clinical teams to develop customized strategies that account for both technical vulnerabilities and operational realities. This means vigilance at every level, from systems monitoring to ongoing review of AI's evolving role in your institution.

Safeguarding healthcare systems means protecting the trust and well-being of the patients they care for and of the entire community. This depends on leadership that doesn't just react to threats but proactively takes bold measures to mitigate risks before they spread. By embedding security in all facets of the organization, healthcare leaders can ensure continuity of critical operations and uncompromised patient care.

About the Author

Claudio Gallo

Lead Security Engineer, C4HCO

Claudio Gallo is a lead security engineer of the Colorado Marketplace Exchange, dedicated to protecting critical information in the healthcare and insurance industries. With expertise in application security, cloud architecture, and compliance, Claudio designs innovative strategies to safeguard sensitive data and ensure trust in important services. Passionate about making a meaningful difference, he is equally committed to mentoring aspiring professionals in the information security industry, sharing knowledge, and fostering the next generation of cybersecurity talent.
