    Episode 041 · October 7, 2025 · 23m listen

    What Happens When AI in Medical Devices Makes Mistakes? | Ep. 40

    Episode Summary

    In this episode of The Med Device Cyber Podcast, hosts Christian Espinosa and Trevor Slattery explore the critical safety and regulatory challenges surrounding artificial intelligence in medical devices. They focus on the European Union's AI Act and the Medical Device Coordination Group's (MDCG) new guidance, contrasting it with the less regulated approach in the United States. The discussion highlights a tragic real-world case where an AI-powered mental health chatbot provided harmful advice, leading to a patient's death. This incident underscores the urgent need for robust threat modeling and a comprehensive understanding of AI's edge cases in high-risk medical applications. The hosts emphasize that while AI offers groundbreaking innovation, its deployment in healthcare demands a rigorous focus on safety, security, and well-defined guardrails. They also touch upon the current 'AI boom' and how regulatory changes, similar to those seen with mobile medical apps, may temper the uncritical adoption of AI if manufacturers are forced to seriously consider liability and risk management rather than just marketing hype. The episode serves as a crucial listen for product security teams, regulatory leads, and engineers navigating the complex landscape of AI in medical technology.

    Key Takeaways

    1. The EU AI Act classifies medical devices as high-risk, necessitating granular understanding and specific guidance like that from the MDCG.
    2. Manufacturers of AI-enabled medical devices bear the burden of identifying and mitigating edge cases through threat modeling to prevent patient harm.
    3. The distinction between AI providing clinical decision support and AI making diagnostic or treatment decisions is critical for liability and regulatory compliance.
    4. Current US regulations for AI in medical devices are less stringent than the EU's, creating a 'wild west' environment that increases risk.
    5. The hype around AI in medical devices for funding and marketing overlooks crucial safety and regulatory considerations, a situation likely to change as regulations are finalized.
    6. Regulators are increasingly focusing on how AI in medical devices can fail and cause harm, rather than just its success rates.

    Frequently Asked Questions

    Quick answers drawn from this episode.



    • This episode covers Threat Modeling. It's part of The Med Device Cyber Podcast, hosted by Blue Goat Cyber, focused on practical medical device cybersecurity guidance for MedTech teams.

    • This episode is most useful for medical device manufacturers, cybersecurity engineers, regulatory affairs professionals, and MedTech founders preparing for FDA review.



    Hello and welcome back to another episode of The Med Device Cyber Podcast. I'm your co-host, Trevor Slattery, joined by our co-host, Christian Espinosa. Today, we're going to look at something really interesting: what happens when AI gets it wrong in a medical context? Someone's life can be on the line, so AI stepping in to make diagnostic or therapeutic decisions is dangerous territory. Of course, AI enables a lot of great innovation, but we want to make sure it's handled safely. How are you doing today, Christian?

    I'm doing well, doing well. It's Thursday today, I think, and it's hot in Phoenix. I'm still recovering from last week, when I was in New Jersey doing a Formula 4 race course.

    So, what's the difference between the different formulas, going up from 4 to 1?

    Well, in Formula 4, Formula 3, and Formula 2, the cars are all the same, so it's purely down to the driver's skill. In Formula 1, each team constructs its own car based on a set of specifications from the FIA. So the team cars are different: some are faster, some are slower, some are faster on the straights, some are faster in the turns, and the drivers are different too. In F4, F3, and F2, it's an equal playing field from a driver's perspective; in F1, the manufacturer of the car comes into play as well. And F4 cars aren't as fast or powerful as F3, F2, or F1 cars. They go up in order: F1 is the fastest, obviously, and also the most expensive.

    Yep. We're unpacking in our new apartment right now, and I can't see where it is, but we have a signed photo from Charles Leclerc, which I'm going to put right back here on the wall.

    Cool. Yeah, I'm hoping to do an F4 race at some point, probably later this year or early next year.

    That'd be pretty cool. Any preferred location for the track?

    Whatever track, I'm good with.
    I did that course at New Jersey Motorsports Park, so I'm very familiar with it. I've watched a couple of F4 races on that course, so I feel like I could kick ass there, but on a new course I'd have to learn the track and all that, which is part of the challenge.

    Well, there you go. Maybe it'll be back to New Jersey for your championship trophy.

    Well, it'd be pretty ambitious to win a championship in my first race, but we'll see.

    Awesome. Well, let's jump right into some of these AI considerations. There's some existing guidance here, but with some recent changes. What we're looking at is the EU AI Act and the new guidance pushed out by the Medical Device Coordination Group (MDCG) in the EU. While this is a little separate from our standard focus on FDA considerations, it all ties into medical device safety, and of course we handle a lot of work under the EU MDR and IVDR, so it's especially relevant. The EU has been more on top of AI security than a lot of other regulators, making sure AI is deployed securely and safely and putting regulations in place to add guardrails to what can, in some cases, be a risky technology.

    Yeah, and I think it's good to provide a little context with a real case that's active right now; I think you're familiar with it. There's a medical device manufacturer with a mental health application that has an AI-based chatbot. From a mental health perspective, it's supposed to help with the patient's mental health. In the case being examined, a suicidal patient interacted with this AI-based mental health chatbot over the course of several months.
And after three months, for some reason, the AI-based chatbot told the patient,
