    Episode 062 · March 12, 2026 · 38m listen

    How to Design Devices That Integrate Into Clinical Workflow Without Disruption | Ep. 61

    Dr. Omar Ahmed
    Professor of Cardiac Anesthesiology and Critical Care, Co-founder of HIO
    University of Leicester / HIO

    Episode Summary

    This episode of The Med Device Cyber Podcast features Dr. Omar Ahmed, a professor of cardiac anesthesiology and critical care and co-founder of HIO, discussing the crucial role of integrating medical devices seamlessly into clinical workflows. Dr. Ahmed, a Key Opinion Leader (KOL) in his field, emphasizes that cybersecurity in medtech is paramount to ensuring data reliability and patient safety. The discussion highlights the common pitfall of medtech companies developing solutions without first identifying a clinical problem, often leading to products that don't integrate effectively within hospital IT systems or clinical workflows.

    The episode delves into the concept of digital twins in healthcare and their implications for personalized medicine. While personalized treatment offers significant benefits, it also introduces magnified cybersecurity risks, particularly concerning patient harm and data integrity. The speakers explore the regulatory landscape for clinical decision support systems (CDSS) versus diagnostic medical devices, noting the FDA's ongoing efforts to clarify liability in this evolving area. The conversation underscores the importance of medtech innovators collaborating with KOLs and spending time in clinical environments during the design phase to create truly effective, secure, and integrated medical devices that enhance patient care without disrupting existing workflows.

    Key Takeaways

    1. Medtech companies often err by developing solutions without first deeply understanding and addressing specific clinical problems, leading to poor integration into hospital IT systems and workflows.
    2. Effective medical device design should prioritize seamless integration into existing clinical environments, becoming indispensable without causing disruption or requiring significant changes to established processes.
    3. Digital twins and personalized medicine, while highly beneficial, introduce heightened cybersecurity risks, including the potential for incorrect treatments due to compromised data and the magnified exposure of sensitive patient health information.
    4. The reliability and integrity of medical data are absolutely vital for clinical decision-making, as erroneous or compromised data can lead to patient harm and misdiagnosis.
    5. Medtech innovators should engage with Key Opinion Leaders (KOLs) and immerse themselves in clinical settings during the design and development phases to ensure products meet actual clinical needs and seamlessly integrate into real-world workflows.
    6. The regulatory distinction between clinical decision support software and diagnostic medical devices is crucial for liability, with the FDA actively trying to clarify who is responsible when erroneous data from a device leads to patient issues.



    Cybersecurity in medtech is absolutely vital because the key thing is that the doctors who are the end users have to be dead sure that what we're receiving has not been hacked. The data is reliable, and then we can use that to actually treat our patients. That is something that many medtech companies get wrong because they come out with a solution and they start looking for a problem to solve. The biggest issue, I think we find with medtech, is the integration into the IT systems of hospitals. And especially for one of these wellness tools, where it can't make any claims to anything. So, reading through it, it never says, “Oh, you need to get more sleep. You need to go to bed earlier tonight.” It just says, “You didn't sleep well last night.” Okay, cool. So, what's the response? What do you do as the next step? It's a hard problem to solve for these wellness companies because then you're bridging the line between a wellness product and a medical device, where obviously becoming a medical device is a big deal. Hi, welcome back to another episode of The Med Device Cyber Podcast. Today, we're talking about a very important topic, and a topic that is misunderstood. I didn't know much about it until maybe a year and a half ago. It's about KOLs, or Trevor, what does a K stand for? Do you remember? A Key Opinion Leader. Yes. So, we have a KOL on the call today, Dr. Omar Ahmed, and maybe you can explain a little bit about what you do and what your role is with HIO, and also as a KOL. Great. Thank you. I'd like to thank you both for inviting me to speak here. It's an honor and absolute pleasure. My background is I'm a professor of cardiac anesthesiology and critical care in the University of Leicester in the UK. And what that means is I'm a practicing full-time cardiac anesthesiologist, but also I am a co-founder of the company HIO, for which we're going to talk about a little bit later on. 
Now, just to go into the Key Opinion Leader business, the reason why we're called that is because a handful of us, if you like, across the world are known as leaders in our field, and we publish a lot of the academic papers, a lot of clinical research, and a lot of the international guidelines that define what the best practice or current state-of-the-art in medicine is. Each one of us works in a different field. I work in the area of cardiac anesthesia and critical care, but others work in oncology, hematology, and others. So, what we do is we publish the guidelines on the basis of best practice and evidence. So, we look at all of the papers out there, we look at all of the evidence, and we make decisions as a group, not just individuals, but as a group of us, and publish these as guidelines that other hospitals take as best practice. So that when you go into a hospital as a patient, you know that you're going to get treated with the best available evidence by your physician. And that's what defines us as a Key Opinion Leader because industry likes to call us that because we're the ones who effectively formulate opinions within all our other peer group. And so, what we tend to say is evidence-based, and what we tend to publish tends to get followed by the majority of clinicians across the world. And that's what a KOL is. So, it's a label attached to me and many of my colleagues. But essentially, once you're a real expert in the field and you are recognized as somebody who really knows what they're doing and they publish in that subject, that's how they become Key Opinion Leaders, and that takes many, many years. I've been a cardiac anesthesiologist for over a quarter of a century. So, I've been practicing this and doing this and honing my craft, if you like, and become an expert in the field over many, many years. And that's how we get known in the field. 
That's why we get invited to speak at scientific conferences, and then we end up in podcast rooms like this with yourselves. Well, awesome. I think it's important, like from my perspective, and I could have the wrong perspective, I'm not an expert on KOLs. Maybe I am a KOL in my field, I guess. But from my perspective, you've got a lot of medical device innovators that don't have a KOL as part of their team. So, there's pretty high opportunity for the product to not actually fit a need or not work in a clinical setting. Is that a fair assessment? Because I know what you do with HIO is you provide the real knowledge of how your device and product would work in the environment. Yeah, thank you. That's a really good point you make, Christian, because one of the things we come across as doctors is, you know, we work in a high-pressure environment. We work in a team. We work in a clinical scenario which is changing from second to second, minute to minute. And so, we need our equipment and our monitoring and everything else we use to be 100% reliable, number one; secure, which is the business you're in; and most importantly, it needs to not get in the way of the workflow of what we do. It must allow me to do my job and do it better, not hinder what I'm doing in terms of my workflow. And that is something that many medtech companies get wrong because they come out with a solution and they start looking for a problem to solve. Whereas what they should be doing is talking to people like me and saying, "What is the problem in your clinical field that you really need solving?" And then engineering their software or their hardware product to fit in with what we need in the environment. So, it's the wrong approach to have.
And fortunately, with HIO, I'm one of the co-founders, and I approached this situation by talking to many of my colleagues in the field across North America and across Europe to say, "Look, guys, we think we've got a clinical problem in the operating theater and in the ICU. And I think we can solve the problem by using software and our clinical technology platform. What do you think about it?" And almost unanimously, everybody who's heard about it and heard how it works said, "Yes, if you can get this to work and get funding for it and make it happen, and it, you know, fits into the operating room, this will work very, very well." So, it's really important to get this out there to the medtech world: for the guys who are inventing tech, we're very grateful to them because that's how medicine advances, of course. But they need to speak to us first at the design stage and the ergonomic stage before they get to the point where they're raising large sums of money to develop the product, because if they get that part of it wrong at the development stage, it will not do well on the market because people will buy it, one or two hospitals will invest in it, and then it'll sit in the corner, gathering dust. It won't get used. I've heard it said before that the best way to design a medical device is to make it seamless to integrate into an existing system and impossible to take out. So, it should be something where it's not disrupting, it's not causing any problems when you're building it in. You don't really have to do that much to your existing workflow. I mean, as a doctor, obviously, you're going to know you have a process for doing things, and branching too far outside of it is probably not going to be in your comfort zone, and it's not going to be as effective. But once you have something successfully integrated, it should become such a seamless part of your day-to-day usage that taking it out should feel like something is missing. Absolutely.
That's a very good point you make because, you know, every so often something really disruptive comes along that changes the way we do medicine. But that's not very often. It's usually an incremental approach to things, that things get refined over the years. Something new comes in, we become reliant upon it, as you quite rightly said, and then it becomes so invaluable to us that almost taking it away is dropping the standard of care for the patient, and it almost defines itself by becoming a standard of care. So, if a patient X doesn't have that standard of care, they're getting a substandard service. So, that's the way it should be implemented. And the biggest issue, I think, we find with medtech is the integration into the IT systems of hospitals because there's security issues there. There's patient confidentiality issues there. There's also the fact that they want to be 100% secure in terms of patient data. And that's before any of the system actually starts to work for us as clinicians. You said something earlier, and Trevor talks about this quite a bit, and I do as well, that a lot of innovators have a solution and they try to find the problem for the solution. Is that the majority, do you think, or the majority actually talk to the KOLs and understand the actual problem, then come up with the solution? I think unfortunately, it happens more than you think, and it happens, and then eventually what happens is they spend quite a significant amount of investment developing a product to a certain stage, and then they have to backtrack and re-engineer the product according to when they actually speak to the doctors who are going to use it. Then they find out that the software needs to be redesigned or the hardware needs to be integrated in a different manner because it doesn't fit in with the workflow. 
The best thing a medtech can do is actually spend a week or two weeks in a hospital with their chosen physician or their surgeon and look at how it is that they actually work. Whether it's the outpatient clinic, whether it's the operating room or the intensive care unit, if they can see how it is that they work and what they're doing and how their solution can fit into that medical problem, that's the best way they could engineer the product. So, just to give you an example about HIO, we've invented software which essentially creates a digital twin model of how the blood coagulation space works in your body. Every one of us has blood flowing around our circulation. Every one of us has different levels of thinness or thickness of our blood. And the way that happens is dependent on many, many things. It depends on the genes that you inherit from your parents. It's influenced by the drugs that you take medicines-wise. It is determined by your other co-morbidities, in other words, the other illnesses that you may or may not have. And all of that comes together and leads to a different situation whereby every one of us has a different way of clotting and a different way of bleeding. And that's really, really important in the operating theater because you can imagine if you're coming in for major surgery, or if you're walking around and you have a high risk of developing blood clots in your body, you want to know about that, and you also want your physician to be able to treat that, so that they treat you on a personal basis rather than treating everybody the same. And that's what I spend my life, you know, saying at medical conferences: over the next 10 years or 15 years, medicine is heading into the world of personalization of therapy and prediction and precision medicine. And what that means is that currently what doctors do is they treat you all in the same way.
So, if you come to me and say, "Look, I've got a problem with my coronary arteries," the doctor's going to put you on some aspirin, and he's going to give you a fixed dose, say 150 milligrams. But that dose may not be the right dose for you. You may be different to what Trevor needs or different to what I need. And the only way we can do that is by personalizing the dosages. So, you may find that that dose is ineffective for you. On the other hand, it may be too much for somebody else. And yet we weigh the same, we look the same, we're the same age group, we may have the same diseases. So, that's where the future of AI and medicine is coming in. It will allow us to individually treat our patients, looking at their individual makeup and their individual case histories and the individual drugs that they take, and allow us to engineer how much and how we need to treat them. And this becomes really important in the operating room, because imagine you're having major surgery, for example, you know, a heart operation. Then you need to be able to know, or the surgeon and the anesthetist need to know: if I cut this patient, how likely is he to develop bleeding complications during the operation? Because that will affect the way he operates. It'll affect your outcome. It affects how much blood transfusion you need, and therefore how fast you make a recovery in the operating theater and also in the intensive care unit afterwards. And the only way you can predict that is by measuring the many, many blood samples that we take in the operating room and putting all of that data together and putting the histories together. All of that comes together, and that lends itself very nicely to AI analysis, allowing us to use artificial intelligence to bring these multiple sources of information together, personalize them, and get an output that says to the physician or the surgeon at the time it's happening, "Your guy is about to bleed.
You need to do something." And if you need to do something, you can speak to the blood bank earlier. You can get your blood stocks up earlier because blood, interestingly, is a very, very limited resource. It's not a physically engineered resource. It only comes from other people. So, we rely 100% on donations. There's no such thing as artificial blood yet. Although the first person who invents that will be a very, very rich man. But at the moment, we don't have that. And so what we have is we're relying on people to donate. And we also have to be cognizant of the fact that when you use blood, it's only usable for four hours. So, once you draw a unit of blood out from the blood bank and get it ready to give to somebody, you either use it within 4 hours or you have to throw it away, and it goes straight in the bin because it can't be used, because of infection risks and other things. So, that makes it a very precious and time-sensitive resource, which means that if we use it properly and have careful stewardship of the way blood is used, then it will improve your patients' outcomes, but also it'll save the institutions and hospitals a lot of money. We're talking about millions and millions of pounds per year per institution. And that's why it's so important to match what we're doing in the OR with what the blood stocks are doing. And that's essentially what HIO is about. That's where we fit in to make the physician and surgeon's life easier and to make sure that the right people get the right transfusions at the right time. Yeah, I was just realizing today, we had done a couple of podcast recordings, and both people were referring to currency in pounds, as both of you are living in the UK at the moment. So, if you multiply by 1.36, that'll give you dollars. There you go. All right. So, Trevor, you're familiar with this digital twin concept, I'm assuming? Like, what are some of the cybersecurity implications with digital twins?
Because it seems like this is the wave of the future, everyone having a digital twin, basically, to provide specific doses and specific types of treatment versus a general treatment. Yeah, there's a lot that can go into this as far as, you know, of course, the benefits, and then of course, with the benefits, what could be some potential risks there. I completely agree with you on this push towards personalized treatment, personalized healthcare, even personalized wellness. I mean, you know, everyone has their Whoop or their Oura Ring, or whatever, to try to craft the right diet, the right exercise, the right lifestyle based on what they need specifically. And so, I think just as an industry, medtech is really branching in this direction, seeing what people want to do. Does your Oura Ring actually know what you need specifically, though? Well, the Oura Ring isn't a medical device, and so it'll say like, "Oh, your ideal sleep time is from 11:08 to 7:08." Exactly. How does it know that? By seeing my body temperature at night, by seeing my pulse at night, by seeing my breathing rate. The different things that'll vary. When my body naturally wants to wake up is apparently around 7:08, and when it naturally wants to go to sleep is around 11. I find that to never actually be the real case, but, you know, one can dream. So, I thought it becomes a medical device if you switch one setting? It is a medical device if you're using it for fertility as a woman. That is the only time it's considered a medical device. Though I actually just got a notification the other day that they're doing an investigational study to market it as software as a medical device for hypertension. And so, we'll see how that ends up coming along. I'm going to sign up for the study and see if I can get some early sneak peeks into that.
So, if we have a digital twin, yeah, if we have a digital twin and, you know, all of us end up having some form of digital twin, but it's compromised, then we could be getting the wrong type of treatment, right? Well, there are a couple of things that can go wrong here. The first, you know, is the wrong type of treatment, and this is what we always talk about: the patient is the focus here, not the data. Patient harm is the biggest risk that we can see within the medical space. It's unique to the medical space, and it's really important. If there's a system that says, "Okay, let's take a look at, you know, how likely this patient is to bleed," and someone can modify the values, then for a patient who's very likely to bleed, they can make it say it's not going to happen, and then you might not be as prepared to treat them if something goes wrong. Inversely, if a patient's not likely to bleed, and someone can modify the values to say that they are, they might receive unnecessary treatment, which is going to cost the hospital money and may not be good for the patient. You're not going to want a transfusion if it's not required. Same way, you're not going to need drugs if they're not required. So, that's going to be the main thing that we're looking at. And this is consistent across any use case here. Even, you know, the Oura Ring, it's not doing it to this capacity. Everything's anonymized and all that because it's not a medical device. But let's pretend there's a digital twin out there and it has all this cardiac information about me, and you know, I'm especially sensitive to it. I've had a lot of cardiac problems. I've gone through heart surgery in the past as a result of them. And so, if someone was able to take that information and modify it and say that I should be doing something potentially harmful, and I'm getting clear signals on why I'm supposed to be doing it, it might be something that I lean towards without knowing the full picture.
The other side of this is, I know we say patient safety is the most important thing, but this isn't to say that data isn't important. It's just the analogy we always say is like, would you rather get hit by a car, or have someone take your credit card? You'd probably rather have someone take your credit card. It's inconvenient, but it's not going to be the end of the world. I've never seen that analogy. You always say, “Like, would you rather get, you know, super hurt, or have someone steal your information?” I say, “Shocked to death with a, if a defibrillator is hackable.” Yeah, that's a good one, too. Shocked to death with a defibrillator, or someone, you know, stealing your, stealing your patient records. Of course, both are going to be bad. But with that digital twin, we are effectively creating a copy of you and a copy of your health. Any of the same risks that you would have from getting your health information exposed or breached are going to be present if someone's able to access that digital twin, but to a much more magnified degree. This digital twin, I mean, the sky's the limit with how much information this can have. It could effectively be a replica of you and all of your good and all of your bad health conditions in one place, and that information can be really dangerous if given to the wrong people. So, a digital twin, would that be considered a clinical decision support system, then? Potentially, it depends on what the digital twin's doing. Yes, if I may just come in there, I think you make some very, very good points because at the end of the day, the data that we use in the operating theater or anywhere else, for that matter, in the clinic or in the intensive care unit, has to be 100% reliable. We need to be able to trust what we're seeing and what we're getting. That trust can only come from security. 
So, we expect that, you know, the lab data that comes to me when I do a blood gas, for instance, I need to know that that blood gas is 100% accurate. The sample's come from the patient, and the information that's come from the lab to me is secure. No one's tampered with those numbers because I'm going to use those numbers to treat my patient. So, the integrity of that data is absolutely vital in what we do because we make decisions on the basis of the information that we get, and, you know, in real time in the operating theater, I'm making decisions all the time on blood pressure, heart rate, CVPs, you know, echo data, and I need to know that what I'm seeing and what I'm getting there from that machine is 100% reliable and reproducible, and that the error rate from that numeric data is minimal, if not zero. And this is the point that you make, I think, Trevor, very well, that more and more, increasingly, people are taking responsibility for their own health, and this is a really good trend. You know, 5, 10 years ago, nobody measured or knew what their heart rate was or what their blood pressure was; now, with the rings, they know, you know, their sleep patterns and stuff. And these are all really important because people are actively taking account of their health. It also opens up the possibilities for remote monitoring from our physicians. So, for instance, if you have a medical problem and you have a ring, you can transmit that data across the cloud to your physician who's sitting in the office in the hospital to monitor your heart rhythm, for instance, or to monitor your coagulation space. You know, you may be taking an anticoagulant every day, and at the moment, you have to go once a week to have your anticoagulant levels monitored at the hospital.
Wouldn't it be great if you could have that done as a point of care test at home, not having to come into the office, not having to come into the hospital, and then transmit that data to your physician in a secure way, so that I can make treatment decisions on the basis of that and tell you how to change your dose? That's game-changing for a patient because it completely transforms the way they live, how often they need to be monitored, and how often they need to be seen by a doctor. And yet, they have instant access to that information, and the physician has access to that information in the context of that patient's physiology. So, that's a very, very good point. I think the fact that people are walking around with rings and relying on that data to be 100% accurate, you know, the sleep business is good. In my business, we don't sleep very much; we get to operate all night, and we sleep when we can. And it's great to have a ring to tell me, "Yeah, you should be sleeping more." But the fact is that, you know, sometimes your job prevents you from doing that. Just like you guys, you have to be on call 24 hours a day to make sure your systems are secure. So, yeah, I think cybersecurity in medtech is absolutely vital because the key thing is that the doctors who are the end users have to be dead sure that what we're receiving has not been hacked. The data is reliable, and then we can use that to actually treat our patients. Yeah, I think Trevor has been working less since he got that ring. He's been claiming he needs more sleep. That's a problem. Now that you mention it, yeah, I see he's been slacking since he got that ring. It's a productivity killer. It is. It is really funny, though. If, you know, I don't sleep that much the night before, then I wake up, and the ring is not nice to you. It's like, "So, things didn't go as planned last night." I see.
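The integrity requirement Dr. Ahmed keeps returning to, that a clinician must be able to trust that a transmitted lab value or device reading hasn't been altered in transit, can be sketched with a simple message-authentication check. This is an illustrative sketch only, not HIO's or any vendor's actual mechanism; the field names and the shared-secret scheme are invented for the example, and a real deployment would layer per-device keys and TLS on top of something like this:

```python
import hashlib
import hmac
import json

# Hypothetical shared secret provisioned to both the home device and
# the hospital backend. (A real system would use per-device keys.)
SECRET = b"per-device-provisioned-key"

def sign_reading(reading: dict) -> dict:
    """Attach an HMAC tag so the receiver can detect tampering."""
    payload = json.dumps(reading, sort_keys=True).encode()
    tag = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {"reading": reading, "tag": tag}

def verify_reading(message: dict) -> bool:
    """Recompute the tag over the received values; reject on mismatch."""
    payload = json.dumps(message["reading"], sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["tag"])

# An untampered anticoagulation reading verifies...
msg = sign_reading({"patient_id": "P-0042", "inr": 2.6})
assert verify_reading(msg)

# ...but if an attacker lowers the INR value in transit, the tag no
# longer matches and the reading is rejected rather than acted on.
msg["reading"]["inr"] = 1.1
assert not verify_reading(msg)
```

The point of the sketch is the failure mode Trevor describes: a modified value is not silently accepted and treated, it is detected and discarded before it reaches a treatment decision.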
I go, “Yeah, clearly that's why we're here this morning.” But I'm curious with the ring, because the rings can pick up different depths of sleep, because there are many, many stages of sleep. And sometimes you get, you think you're sleeping, but you're not getting quality sleep. You're getting time, but you're not getting quality to allow your brain to regenerate. And your rings can pick that up. So, it will tell you the best time to wake up and the best time to sleep on the basis of your depth of sleep. And that's something we've never been able, based on your REM cycles. Yeah. Exactly. Totally. And it's super interesting, too. I'll wake up, and, you know, I can't say like, “Oh, this is exactly how much of REM, deep, light sleep I got,” but I wake up and I go, “I feel bad today. Let's take a look at that.” And sure enough, it says 10% of your sleep was REM sleep instead of, you know, the target of like 20% or whatever it is. So, it's pretty accurate. I mean, for all I know, I don't know. It's not like I'm terrified. Your body will tell you. It tells you if you wake up in the middle of a deep REM cycle, you feel pretty rotten when you wake up, and it takes a while to get over that. Whereas, if you wake up at the right point, you feel fresh and regenerated and you're ready to go. It does bring up kind of an interesting point, which is like, this information is only as good as what you're doing with it. And especially for one of these wellness tools where it can't make any claims to anything. And so, reading through it, it never says, you know, “Oh, you need to get more sleep. You need to go to bed earlier tonight.” It just says, “You didn't sleep well last night.” And go, “Okay, cool. So, what's the response? What do you do as the next step?” And it's a hard problem to solve for these wellness companies, because then you're bridging the line between a wellness product and a medical device, where obviously becoming a medical device is a big deal. 
You're regulated, you're tightly controlled. You don't get the same freedom to do things, but you do have a little bit more insight. And so, what you were saying about getting this information to your healthcare provider. If all I had to do was tap a button on the app, it beams this information over to my doctor, and I don't have to, you know, go down to the hospital and take a look at what's going on with my heart. All the better. And even on the inverse, it's going to be convenient, save time, more effective, save money for the patient. And for the hospital, they're going to have to allocate less time, less resources to dealing with this. You don't need to, you know, check this person in. You don't need to read these vitals yourself. It's just there, and it's ready. So, I love the direction that healthcare is going with this personalized treatment, with this focus on wellness. I think it's a, it's a good shift. Well, I think that solves, at least it attempts to solve a major problem. I know I talk to a lot of people in the UK, and they always complain like, it's going to take six months to see a doctor. But if you can have your data transmitted, then a doctor can make, see more patients or make more decisions in a shorter period of time than waiting for someone to show up. Yeah. No, that's a very good point you make because the information that's transmitted from these devices allows both patient and physician interaction together for the first time. It's a partnership rather than me telling you what to do as a patient. So, you can send data to me, and as long as I know that the data is robust, I can make my treatment decisions on the basis of what I'm receiving from you, and that saves, as you quite rightly said, time, physician time, patient time traveling into hospital. And in a pressured healthcare environment, the UK is unique because we have this thing called the National Health Service where everybody is entitled to have free healthcare. 
But that comes at a price. Of course, it's tax subsidized, but it means that you have to prioritize those in real need of urgent therapy over those in need of follow-up, but not urgent follow-up. And that's where I think the system gets clogged up a little bit in the UK, because it's difficult to prioritize who really needs immediate treatment versus those who can delay, and those who need low-priority treatment but still need to be seen. And that, I mean, that's a subject for a whole different podcast, if you like. But going on to the business of the regulatory authorities, this is quite important, because at the moment there are two types of devices. There are those devices that simply aggregate data together and present it in an easily readable format to the doctors, whether it's in the ICU with multiple sources of information, or in the operating theater, or in outpatients. But all they do is present that data, and the doctors and the surgeons and the physicians decide how to treat the patient on the basis of that information. Where I see the real future of this is when AI-based models come in, and they're well-trained, using data-driven technology, to make good predictive decisions with high accuracy. They will be able to tell the physician, “These numbers suggest that you should be doing A, B, and C.” And that's where it is really, really important to have those robust data sets. Because if the devices start telling me I need to treat my patient using X, Y, and Z, then I need to be dead sure that the information being fed into the system, the diagnosis it's making, and the therapeutic decisions it's making are accurate and safe. And that's where I think your security systems come online. That's where regulatory authorities like the FDA are really, really interested, because they want to know the data sets and the clinical trials upon which you base the technology.
And they want to know if there are any outliers, because the great thing about medtech is that in the vast majority of people, 80% to 90% of people, medical technology works really, really well, because the majority of people are within the normal range, if you like, of blood pressure, heart rate, ECGs, and other things. Where it gets really interesting is the 10% of people that are outliers on either side of the midline. So, those who are either extremely high or extremely low in their values. And for those, the technologies are difficult to establish and difficult to prove, because they don't have the data sets to actually educate them on those extremes of physiology. So, we find that they're really, really good in the mid-range, where actually the physicians don't need that much information. But where we do need the help is at the extremes of medical problems. And that's where the monitors and the diagnostic devices aren't that great, because they haven't been fed the right data, and there's insufficient data to drive them. And that's where we need to correct what we're doing a little bit, because they become inaccurate. And when they become inaccurate, then it boils down to physicians or surgeons making their decisions on the basis of the information that they have. So, I'm really keen to see this move forward and how it goes. But I think we need to be very careful to safeguard patient safety as we're doing it as well. From a liability perspective, if I've got this device that's providing data and the data is erroneous, is it the device's problem, or is it the physician's problem for making the wrong decision? That's a really tricky question. Really tricky question, because across the world, I think, you know, doctors are only as good as what they see in front of them and what monitoring and data is presented to them. Okay. Now, at the end of the day, the final monitor for me is not the screens.
It's not the information that you give to me. It's the patient. And if I see my patient doing something different in front of me, like bleeding out, or not bleeding, or coagulating in a different way, or having a stroke, it doesn't matter what the monitor is telling me. If the patient's telling me something different is happening, I'm going to believe the patient. So, that then opens up the question: at what point does the company making the medtech become legally responsible for the data it's generating? At what point do I have sufficient confidence in the technology to be able to say, “The technology made me do it,” or, “The technology told me to do it”? And I think it's not clear yet at all. This is a really hazy area right now, one that even the FDA and the regulatory authorities are struggling with, because right now nobody knows. At the moment, the end arbiter is my decision-making, because at the end of the day, I make a decision to treat my patient in a certain way. But if the technology comes along and says, “No, no, no, you're doing it the wrong way, and you should be doing it this way,” and then I treat on the basis of what the technology said, and then something happens, then it's unclear as to who should be liable. And I think that's a very, very tricky question to answer, and a very important one. I know the FDA, at least, is trying to carve out some distinctions on a regulatory pathway to more clearly establish the blame criteria there. They recently had some more finalized guidance come out on clinical decision support software. So, something that's assisting you in making your decision, but not making the decision for you. And, you know, it's such a blurry line. And I completely agree. We're going to see it get a little bit more mature, but it is really hard to say when this starts and when this stops. Some of the terminology can be a little bit hazy.
But the basic gist of what the FDA is trying to get at is: if you're clinical decision support software, the responsibility has to lie with the physician. Having said that, you know, when you have this device, and it's providing all of this signal, saying, “This is the problem. This is where you need to go. This is what we think you should do. This is what we're indicating is going on.” And you follow that; you were given signals that you should go this way. And so, are you really to say that you're at fault if this device was giving you bad input? According to the regulators, yes. But is that the correct answer? I'm not so sure. It becomes a little bit more clear-cut if you're a diagnostic medical device. A medical device performing a diagnosis should get it right. That is what it is saying it is supposed to do. And so, if it is misdiagnosing, that's the device's problem. Full stop. But this clinical decision support software is such a murky area right now. And I think it may well take a couple of test legal cases before it becomes apparent who holds final accountability over this, because right now it's not clear at all. And so, yes, you're quite right that the final arbiter is the physician, although, you know, 50 years from now, that may change again. But right now, it's our responsibility, and we are accountable for our actions and what we do. And when I have inaccurate information being presented to me, we get something we call alarm fatigue in medicine. We get alarms flashed at us by monitors that are either not picking up the right data or are getting unreliable data because of noise, or electrical interference, or something. And they flash alarms at us, and we switch them off, because we know that what is really happening is different from what the machines are saying. And that discretion is still, right now, exercised by the physicians.
But the closer we get to AI-based technology that is making more therapeutic suggestions, rather than just clinical decision support, the hazier and more blurred this outline is going to get. It's going to be very interesting to see the first case of product liability and therapeutic decision-making in the courts of law, where it's not the physician being held to account, but the software. That's the American way. Get it in court and see what happens. That's how it works everywhere at the end of the day. You know, the lawyers make the final decisions, and then that sets a precedent, and then that precedent determines who carries the liability. Right. Well, we're coming up here on time. I'd like to go around and get some final departing words or last-minute words of advice for our audience. I'll start with you, Trevor. See what you have to say today. See if you say something different. I'm going to, you know, try to distill down the conversation and also steal probably one of the most overused tech sayings ever, which is, “Look for the signal in the noise.” And taking it even past what you need to do with it, you know, obviously Mark Zuckerberg talking about it with Facebook, “Look for the signal in the noise,” is way different than looking at it as a physician. But understanding what is relevant, what is important, how you can drill in on that as your focus, instead of just seeing all these alarms going off all the time and trying to respond to everything? You know, you mentioned your patient is going to be your final bit of signal on what's really going on. You look at that patient, and you should know what's happening more than these alarms going off that are trying to pull you in 10 different directions. So, yeah, and that's the ground truth, isn't it? Your patient is the ground truth. You see exactly what is happening there, and everything else being monitored from that patient is secondary.
So, in the sort of mission-critical procedures that we're doing, we rely on the patient to tell us what's going on, but we are aided and abetted by all the monitoring systems near us. But you make a very good point that, you know, as the technology integrates more and more into physician workflow, it will make our jobs easier, but we will also have to be a little bit wary, because alarm fatigue is a real phenomenon in hospitals, and it sometimes leads to wrong decisions being made. It also leads to good technology being ignored, because it's been raising so many false flags so often that when the real situation arises and is flashing alarms at you, you ignore it. And that's when doctors sometimes get into trouble: they don't realize what is going on, because the technology hasn't been reliable enough to tell them. So, for us, the main importance is that it has to be reliable, it has to be accurate, it has to be reproducible, and it has to be secure. Those are the four things I would say. And I think the future is very exciting for medtech. You know, since I first trained and started off in heart surgery a quarter of a century ago, we are now operating on patients who are sicker, older, have more diseases, and are much more complex than we ever imagined 25 years ago. You would have been turned down for surgery then, yet technology is helping us, and we're getting further and further along, and I continue to see how this will develop in the intelligent operating room, which is where we're heading. Awesome. I will add something you said, Omar: that the medtech innovator should spend a week or two in the hospital, or in the environment where their product may end up, so they can design it properly, rather than having to spend a lot of time redesigning it, as you mentioned. I think that's extremely important.
It seems to me that would be common sense: if your clients are going to be using your product, understand how they would actually use it. But as we know, common sense is not always common practice. I was going to say, I've heard you say that one to me a couple of times. I've said that myself, and very much so. “Keep it simple, stupid,” is a thing we use all the time in medicine. If you add needless complexity to your environment when it's not required, it serves to add noise to what you're doing, rather than keeping the noise away, and it makes your decision-making that much more difficult. So, yeah, that's what I'd like to end with. Thank you. Awesome. Cool. Well, thanks, everyone, for tuning in. I hope you found value in this episode, and we hope to see you on the next one. Great. Thank you for the invitation. See you again soon.