    Episode 023 · June 3, 2025 · 41m listen

    AI in Medical Devices: Opportunities & Regulation with Matt Lemay | Ep. 22

    Matt Lemay
    CEO
    Lemay.ai

    Episode Summary

    This episode of The Med Device Cyber Podcast features Matt Lemay, CEO of Lemay.ai, discussing the critical intersection of AI in medical devices and regulatory compliance. The conversation delves into the challenges and opportunities for MedTech manufacturers in adopting AI, emphasizing the often-overlooked aspects of data governance, security, and long-term viability. A key focus is placed on ISO 42001, highlighting its emergence as a certifiable standard for AI management systems and its potential to integrate with existing medical device oversight frameworks. Lemay stresses the importance of considering the intended purpose of AI in medical devices, as it directly impacts certification needs and regulatory strategies. The discussion also covers significant cybersecurity risks, such as improper training data, data sovereignty issues, and the lack of robust version control for cloud-based AI models. The episode further explores the complex question of liability when AI is involved in diagnostic or treatment decisions, drawing parallels with professional engineering certifications and accountability structures. This podcast is a must-listen for product security teams, regulatory leads, and engineers navigating the evolving landscape of AI in medical devices, offering practical insights into secure AI development and deployment.

    Key Takeaways

    1. ISO 42001 is emerging as a certifiable standard for AI management systems, offering a new pathway for external verification of AI used in medical devices.
    2. The purpose of AI within a medical device significantly influences the necessary certification and regulatory strategy, distinguishing between exploratory data science and diagnostic decision-making.
    3. Critical cybersecurity risks for AI in medical devices include improper training data, data sovereignty concerns, and the lack of robust version control for cloud-based models, which can lead to silent performance degradation.
    4. Establishing clear liability for AI-driven medical decisions is complex, necessitating frameworks akin to professional engineering certifications, where a named individual is accountable for the design and deployment of intelligent agents.
    5. When designing AI for medical devices, consider the deployment environment from the outset, including whether the AI will run on a wearable, a smartphone, or in the cloud, to ensure performance and address latency and connectivity challenges.
    6. To ensure long-term viability and maintain performance, complex AI models can be converted into simpler mathematical representations such as polynomials, dramatically reducing computational requirements and making them suitable for low-power microcontrollers.
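The polynomial-conversion idea in the last takeaway can be made concrete. Below is a minimal, hypothetical sketch (not the guest's actual tooling): a smooth model response curve stands in for a trained network, gets sampled over its operating range, and is replaced by a low-degree polynomial whose handful of coefficients can be evaluated with a few multiply-adds on a low-power microcontroller. The function names and the polynomial degree are illustrative assumptions.

```python
import numpy as np

# Hypothetical 1-D "model": a logistic-style risk score standing in
# for a trained network's input -> output function.
def complex_model(x):
    return 1.0 / (1.0 + np.exp(-3.0 * (x - 0.5)))

# Sample the model densely over its operating range and fit a
# low-degree polynomial to those samples.
xs = np.linspace(0.0, 1.0, 200)
ys = complex_model(xs)
coeffs = np.polyfit(xs, ys, deg=5)   # six floats replace the model

def poly_model(x):
    # Evaluating a degree-5 polynomial is a handful of multiply-adds,
    # cheap enough for a microcontroller with no ML runtime at all.
    return np.polyval(coeffs, x)

max_err = np.max(np.abs(poly_model(xs) - complex_model(xs)))
print(f"max approximation error: {max_err:.4f}")
```

The trade-off is fidelity outside the sampled range: the polynomial is only valid over the interval it was fitted on, so the device must clamp or reject out-of-range inputs.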



    Hello and welcome back to another episode of the Med Device Cyber Podcast. Today we're going to talk about artificial intelligence in medical devices as well as AI in regulated industries. We have a special guest on today, Matt Lemay, who is the CEO of Lemay.ai. How are you doing today, Matt? I'm good. Thanks for having me. It's going to be a good conversation. Looking forward to it. Yeah, a little bit of context. Melissa and I met Matt on the cultural tour, I believe it was at MedTech World Dubai. Then Matt gave me this awesome book here, The 50 Inventions That Shaped the Modern Economy. I haven't started reading it yet. I just got it a couple days ago, but I've kind of thumbed through it. It looks pretty awesome. Absolutely. So, Christian and I were in the lobby of the Intercontinental at Festival City, and you just see a group of people that are all CEOs and co-founders and engineers in medical devices. Everyone's in shorts and flip-flops and polos going on a tour. It was great conversations all around. I'm definitely looking forward to the next MedTech World event. Yeah, for sure. So you want to give us a little context about what you do, Matt, what your organization does, and spend a little bit on MedTech since we're focused on MedTech? Absolutely. Well, I think MedTech definitely shaped the entire growth of our organization. So my team, Lemay.ai, we're a team of 30 people, about 85% engineers that specialize in helping clients on their AI adoption journey, specifically in regulated industries including MedTech. We do this by delivering tailored AI solutions at whatever stage you're at. So if you need strategic guidance on which projects you should implement, if you need some core implementation support, if you need some scale-up or even now regulatory approval as AI is becoming more and more relevant in each one of these regulated industries, we have a framework for helping clients along this journey. 
Specifically about myself, I actually came from a medical device startup where we implemented ISO 13485 from scratch, and where I met my now co-founder, Daniel. We did a lot of work back in the day systematizing our design and development processes to achieve CE marking, FDA approval, and Health Canada approval. Each of those compliance mechanisms requires a certain amount of oversight, and that's what shaped how we actually do AI. So part of what we do now, and how we do it, is heavily influenced by the engineering principles required to comply with a lot of these emerging standards. So when would an organization, say a MedTech manufacturer, want to engage with you if they're going to do, let's say, image enhancement using AI on software as a medical device? That's a fantastic question. There are a lot of ways of approaching it, and the simple answer is somewhere between not yet and five years ago. At the not-yet level, what we're seeing in the regulatory landscape is that a lot of people have been pushing for various governance frameworks on how to do safe AI. The trouble with a lot of these governance frameworks is that they're hard to audit and hard to verify, and therefore hard to include in medical device oversight frameworks and audit processes. What we're seeing right now is a lot of work happening under one particular standard, ISO 42001, which specifically prescribes how to manage your AI systems. So 42001 is being called AI management systems. What we find very interesting with that standard is that it is certifiable. So for the first time, you can have an AI included in your medical device that can be verified by an external third party.
When it comes to image recognition, what you also have to keep in mind is the purpose of that image recognition, which will immediately impact the strategy you want to pursue and whether a team like ours actually makes sense. If you say, I want to do exploratory data science, I want to look at dermatology pictures or X-rays, I want to look at cells at the microscopic level to understand how they're moving and behaving, and your intent at that level is investigative and open-ended, then you really don't need a lot of certification. You can engage in that direction, and as long as you comply with GDPR and similar data protection mechanisms, you're going to be fine. Where you have to tread more carefully is when you start making diagnostic calls, or when you start automating decisions based on the information you see. Then you want to make sure there's a series of systems in place around whatever you're prescribing to a client in terms of recommended treatment, or any particular categorization you want to pursue; you at least want more structure in processes that touch not just the code, but also the policies and procedures for how you do the work. So if a client comes to you and follows the standards and does AI from a proper engineering perspective, do you feel like the AI would be more secure from a cybersecurity perspective? There's a strong overlap. If you look at a lot of the cybersecurity standards, like the ISO 27000 family, or at GDPR, there's a lot of prescription around how the data should be handled. Now, the AI management systems designed to fit into these frameworks tend to prescribe what you do with the data as well.
So now you have how you manipulate the data, you have what to do with it, and together you have a sense of why you should go in a particular direction. It'll help prescribe what your business logic is. It'll help dictate what you can and can't do with the data you have. Do you need more data? So we find that together they mesh quite nicely. Trevor, what do you think some of the biggest threats we've noticed with our clients around AI are? I think it depends on the exact type of AI, whether we're looking at a large language model as opposed to a pure machine learning application. But I think the biggest thing that can come up, especially in that diagnostic space, like you were mentioning, Matt, is if the AI is not properly trained on the correct data. An issue can be that you're feeding grainy images into the AI and it's not getting a good understanding of how to look for something like cancer, so it's obviously not going to have the information it needs to make the correct diagnostic call. You're going to get inconsistent results and you might trigger a misdiagnosis. So I think the training data is a pretty critical thing that needs to be considered. Oh, absolutely. We've seen a lot of examples of people who started working on a particular open dataset only to find that the size, the resolution, and the color scheme of the images they were using were completely different. That's not something that is easily perceivable when you start looking at the data; unless you actually do an audit and a verification of what happens post-deployment, and of what kind of data drift you might be manifesting, that absolutely can be a big issue. The other big cybersecurity risk that people often forget, when there isn't a good meshing between the data scientists and the machine learning engineers, is where a model actually lives. Is it going to be on the device? Is it going to be on the local computer inside the clinic? Is it going to be in the cloud somewhere?
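A crude version of the post-deployment drift audit described above can be sketched as follows. Everything here, the summary statistic, the threshold, and the synthetic data, is an illustrative assumption rather than the guest's actual tooling: per-image mean intensities from the training set are compared against those seen in the field, and a shift flags a drift review.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-image summary statistics (e.g. mean pixel intensity)
# for the training set and for images seen after deployment; the
# deployed scanner is simulated as systematically brighter.
train_means = rng.normal(loc=0.50, scale=0.05, size=500)
deployed_means = rng.normal(loc=0.62, scale=0.05, size=500)

def drifted(train, live, k=3.0):
    # Crude z-test style check: flag when the live mean shifts by more
    # than k standard errors relative to the training distribution.
    shift = abs(live.mean() - train.mean())
    return shift > k * train.std() / np.sqrt(len(live))

print("drift detected:", drifted(train_means, deployed_means))
```

In practice you would track richer statistics (resolution, color histograms, model confidence distributions), but even a single-statistic check like this would have caught the mismatched open-dataset examples mentioned in the episode.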
If it's in the cloud, which geopolitical jurisdiction does it fall under? All of these questions touch on data sovereignty, cloud sovereignty, and privacy and security, and depending on where the model resides, you're going to see a lot of changes. One thing we've also seen, depending on the API calls people like to use early on, because you can just send the data and it takes care of itself, is that there isn't a strong compliance or version control mechanism. So without you knowing it, it's possible that whatever cloud provider you use updates the models and starts changing the results, and now suddenly your device is less performant than it was before, without your ever having received any notification that a change happened. So anything you can do to maintain sovereignty is absolutely effective. But that's also something you've seen at the data communication level, right? Like, how do you go about making sure that when these images get captured, the data moves safely back and forth between the devices, the cloud, the cell phones, and everywhere? Yeah, I think making sure that data is protected is another huge concern as far as AI goes. I had a call not too long ago with someone who had an idea for an AI product that was going to handle a lot of PHI. They said,
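One way to catch the silent model swaps described above is to pin a small regression suite and fingerprint its outputs when the device is validated; a later mismatch signals that the remote model's behavior changed and re-verification is needed. This is a hypothetical sketch: the `call_model` stub and the case IDs stand in for a real cloud inference API so the example runs locally.

```python
import hashlib
import json

# Stand-in for a remote inference call; in production this would hit
# the cloud provider's API with a frozen set of validation inputs.
def call_model(case_id):
    return {"case_id": case_id, "finding": "benign", "score": 0.91}

# Frozen regression suite: inputs whose outputs were observed and
# approved when the device was validated.
REGRESSION_INPUTS = ["case-001", "case-002", "case-003"]

def fingerprint(inputs):
    # Serialize all outputs deterministically and hash them, so any
    # behavioral change in the remote model changes the digest.
    blob = json.dumps([call_model(i) for i in inputs], sort_keys=True)
    return hashlib.sha256(blob.encode()).hexdigest()

# Stored alongside the release at validation time.
VALIDATED_FINGERPRINT = fingerprint(REGRESSION_INPUTS)

def model_unchanged():
    # Run on a schedule; False means the provider updated the model
    # underneath you and the device needs re-verification.
    return fingerprint(REGRESSION_INPUTS) == VALIDATED_FINGERPRINT

print("model unchanged:", model_unchanged())
```

Exact-match hashing is deliberately strict; for models with nondeterministic outputs you would instead compare scores against tolerance bands per case.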
