    Episode 058 · February 19, 2026 · 37m listen

    The Hidden Cybersecurity Risks When Doctors Use AI Diagnostics | Ep. 58

    Episode Summary

    The widespread, unauthorized use of AI diagnostic tools by medical professionals presents significant cybersecurity risks, as discussed in this episode of The Med Device Cyber Podcast. Although regulatory frameworks such as IEC 62304 govern medical software development, nearly 25% of clinicians are using AI without proper controls, often uploading sensitive patient data such as X-rays to consumer-grade AI tools. This practice not only violates patient privacy and compliance regulations but also exposes models to data poisoning, in which even a tiny fraction of corrupted training data can produce substantial diagnostic errors. The episode also examines AI-generated code: studies show that nearly 50% of it introduces vulnerabilities such as cross-site scripting. While AI can boost developer productivity, it frequently produces bloated, unmaintainable, and insecure code when not properly guided. The discussion emphasizes the critical need for human oversight, rigorous testing, and adherence to established cybersecurity labeling schemes, such as Singapore's CLS(MD), to ensure patient safety and data integrity as AI adoption in healthcare accelerates. This episode is essential listening for product security teams, regulatory leads, and engineers navigating the complexities of AI adoption in medical devices.

    Key Takeaways

    1. Clinicians are increasingly using unauthorized AI tools, such as ChatGPT, for diagnostics, raising significant privacy and security concerns by uploading sensitive patient data like X-rays.
    2. Data poisoning, even with a small percentage of corrupted training data, can lead to a disproportionately large increase in incorrect AI outputs, jeopardizing diagnostic accuracy.
    3. AI-generated code often introduces vulnerabilities like cross-site scripting, partly because models are trained on poorly written open-source code, necessitating extensive manual review and remediation.
    4. Strict adherence to regulated frameworks like IEC 62304 and robust cybersecurity labeling schemes is essential for managing risk and ensuring patient safety in medical device software development.
    5. Hardcoded credentials and outdated, unmaintained third-party libraries remain prevalent security weaknesses in medical device software, requiring vigilant inventory and updating.
    6. Effective integration of AI in medical device development requires human oversight: treat AI as a "pair programmer" rather than an autonomous developer, and implement safeguards that ensure safe failure states and prevent automation bias.
    7. Singapore's cybersecurity labeling scheme for medical devices (CLS(MD)) aims to give consumers and developers a clear, standardized indication of a product's security posture.
    8. Despite AI's potential to accelerate development, it currently tends to produce bloated, difficult-to-maintain codebases, so skilled human engineers remain essential for code quality and security.
    9. With medical devices, cybersecurity is not just about data theft: a compromise can mean misdiagnosis, patient harm, or even death, so the stakes are exceptionally high.
    10. Guide AI with clear requirements and compartmentalized tasks rather than letting it operate autonomously, to prevent the introduction of security flaws and keep control of the development process.
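    The cross-site scripting weakness called out above is easy to illustrate. The snippet below is a hypothetical sketch (not code from the episode) contrasting the unsafe pattern AI assistants often emit, interpolating user input directly into HTML, with the escaped version a reviewer should insist on:

    ```python
    import html

    def render_greeting_unsafe(username: str) -> str:
        # Vulnerable: user input is interpolated into HTML verbatim,
        # so a name like "<script>...</script>" executes in the browser.
        return f"<p>Hello, {username}!</p>"

    def render_greeting_safe(username: str) -> str:
        # Safe: html.escape neutralizes <, >, &, and quotes, so the
        # payload is rendered as inert text instead of markup.
        return f"<p>Hello, {html.escape(username)}!</p>"

    payload = "<script>alert('x')</script>"
    print(render_greeting_unsafe(payload))  # script tag survives verbatim
    print(render_greeting_safe(payload))    # angle brackets become &lt; and &gt;
    ```

    In a real application this escaping is usually handled by the templating engine; the review point is to verify that generated code actually goes through that layer rather than concatenating strings.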
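    On the hardcoded-credentials point, the contrast is between a secret baked into source (which ships in every build and lives forever in version control) and a secret read from the runtime environment. A minimal sketch, with a hypothetical variable name:

    ```python
    import os

    # Anti-pattern: a credential embedded in source code is visible to
    # anyone with repository or firmware access, and rotating it
    # requires shipping a new release. Most SAST tools flag this.
    DB_PASSWORD = "sup3rs3cret"  # hardcoded -- do not do this

    def get_db_password() -> str:
        # Preferred: pull the secret from the environment (or a proper
        # secrets manager) at runtime, and fail loudly if it is missing
        # rather than silently falling back to a default.
        password = os.environ.get("DEVICE_DB_PASSWORD")
        if password is None:
            raise RuntimeError("DEVICE_DB_PASSWORD is not set")
        return password
    ```

    Failing loudly matters: a silent fallback to a default credential is exactly the kind of weakness the hosts describe in fielded devices.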

    A lot of physicians or clinicians—almost 25%—are using AI in an unauthorized manner without any real controls around that. It is so convenient to just take out your phone while you are doing rounds in the hospital, or even as a general practitioner in the clinic, open ChatGPT, and type in a few phrases to diagnose a patient using text—or, if you have X-ray imagery, just send it to ChatGPT and ask, "Do you spot any anomalies?"

    0.001% of training data resulted in a 5% increase in wrong output. If you're training your AI on bad data, it's going to give you bad output every single time.

    Almost 50% of AI-generated code introduces vulnerabilities such as cross-site scripting. In the medical space, IEC 62304 dictates the way that medical software needs to be developed in a safe fashion. Is this becoming a bigger problem than we think?

    Hello and welcome back to The Med Device Cyber Podcast. We have here your usual co-hosts, Trevor Slattery and Christian Espinosa, and then we have a very special guest coming in from Singapore as well. Today, we're going to be talking about some really exciting things with code security: how AI has helped it, in some ways how it's hurt it, and what we can do to make sure that we're developing safer code within the medical space. I want to start by turning it over to you, June, to do a little bit of an intro and some background on yourself, and then we can go ahead and jump right in.
