In this episode of The Med Device Cyber Podcast, hosts Christian and Trevor welcome Chris Danek, CEO of Bessel, to delve into the critical importance of early design decisions in shaping the success and cybersecurity of medical devices. The discussion emphasizes that robust cybersecurity is not merely about data protection but fundamentally about patient safety, citing examples of severe harm that could result from compromised devices. The conversation highlights common misconceptions, such as the belief that all software developers inherently understand cybersecurity or that devices without obvious external connections are immune to cyber threats. A key takeaway is the necessity of integrating cybersecurity considerations from a product's inception, including hardware choices like microcontrollers, and the meticulous vetting of third-party software components through the creation of a Software Bill of Materials (SBOM). The episode stresses the iterative nature of cybersecurity throughout the total product lifecycle, rather than as a one-time assessment, and introduces threat modeling as an essential early-stage activity. The experts also touch upon the nuances of FDA expectations, particularly concerning vulnerabilities like self-signed certificates, and the distinction between traditional IT cybersecurity and the highly regulated medical device cybersecurity landscape.
Key Takeaways
1. Cybersecurity in medical devices is primarily driven by patient safety, not just data protection, due to the potential for severe physical harm from compromised devices.
2. Lack of preparedness regarding the extensive scope of cybersecurity, particularly concerning third-party software components and hardware choices, can lead to significant delays and product setbacks.
3. The FDA explicitly disallows the use of probability for cybersecurity risk assessments, instead focusing on the criteria that must be true for an exploit to occur.
4. Early and continuous engagement with cybersecurity experts, including threat modeling from the idea stage, is crucial for making sound design decisions and avoiding costly delays.
5. The misconception that all software developers are cybersecurity experts is dangerous; specialized cybersecurity expertise is necessary due to differing skill sets and the evolving threat landscape.
6. Cybersecurity must be integrated throughout the entire total product lifecycle of a medical device, from initial design requirements to end-of-life considerations, rather than being treated as a one-time study.
7. In the context of FDA submissions, be aware of specific vulnerabilities like self-signed certificates that, while often overlooked in traditional IT security, are a significant concern for regulators due to data privacy and encryption implications.
Welcome back to another episode of The Med Device Cyber Podcast. Today, we're talking about how to build medical devices for impact and we have Chris Danek, a guest of ours from Bessel. We've also got our co-host Trevor as usual and then myself. I'm coming from beautiful Tempe, Arizona, where it's like 85 degrees out today. I think Trevor is coming from foggy San Francisco, where he insisted on moving to for, I don't know why, but you know, he likes the fog and the cold and rain. And then where are you coming from, Chris? You're also in California, right?
I'm in San Carlos, California. You could think Silicon Valley. It's between the San Francisco airport and Palo Alto and I would say that our weather is typically pretty nice. Yeah, even though you're close to San Francisco, the weather is nicer there, isn't it? Yes, that's for sure.
Why didn't you move to that area, Trevor? Why San Francisco? Why not Santa Clara or like closer to Silicon Valley? Well, Santa Clara, I don't like San Jose. Something about it, it just feels like this expanse. And San Francisco is nice because everything is in 49 square miles. So it's so easy to get from anywhere to anywhere.
Yeah, and I'm down there with Christian on that. I think the Niners' name comes from the gold rush, but I hadn't made that connection before. Denver airport is a larger area than San Francisco. Pretty interesting. Hey, Christian, thanks for inviting me on this podcast. I've been watching what Blue Goat's been doing for the past few years and I think you're filling a real gap in understanding, awareness, and actually execution on cybersecurity, which is more and more important. And I'm interested to talk too about the common challenges outside of cybersecurity that startups in our field are facing. So, thanks for the invitation.
Yeah, thanks for joining us. I know we ran into each other at JP Morgan not too long ago and we're talking about the fog. I think one of the things you help companies do is remove some of that fog on their journey to commercialization. Would that be a good way to kind of kick off what you do and maybe describe a little bit about the companies you work with and everything?
Yeah, I like that. I like that metaphor and trying to bring clarity in strategy, as you mentioned, and how we execute against that and how we can actually fuel the tank with fundraising. Those are the things we do. But it starts with the concept of breakthrough impact. And to me, breakthrough is an innovation in our domain of healthcare that will sustain and scale. Without that, it doesn't have the ability to impact millions of patients and thousands of caregivers and clinicians. So that's what we're all striving for in this industry really is to create breakthroughs that scale impact.
And I would say that, you know, the challenges remain the same, but it's harder than ever to address the significant questions and concerns or risks that startups have to be able to answer the questions that investors have. You know, I would say that it used to be the case that you could talk about your commercialization plan later in the company life cycle because first, we know there's a demonstrated clinical need. We know the market is big enough and we think we have a line of sight in certain areas. And if we show technical proof of concept, clinical proof of concept, then maybe during the Series A round, we'll work harder on the commercialization plan and make it specific and concrete. This idea of proxy or relying on the experience of the team and the judgment of others around it, it breaks down because now startups have to answer, they have to have a good path to answering all of the questions that investors will have, even from the earliest stage that relates to commercialization.
On the technical side, if we dial back from a successful launch through not just reimbursement but regulatory approval, and look at the host of constraints a development team has in getting a product that's able to help patients, typically we would think about software lifecycle processes, usability engineering, and electrical safety, IEC 60601. Those used to be sort of the "big three" in terms of standards and systems for making sure that we develop safe and effective medical devices. And I'm going to say, especially apt in this podcast, that cybersecurity and considerations of cybersecurity have been added to that list of things that you should be thinking about from the beginning. Making sure you're making good design choices and all that sort of thing. So that's another reason I was so happy to be here and join you guys today.
Yeah, it's interesting that the philosophy was to just worry about commercialization later on. I mean, we are bringing an innovative product to market, but it's also a business. And something you said from a breakthrough or impact perspective, if the business fails, then the technology doesn't get to the people that need it, the product doesn't get to the patients that need it. So they have to go hand in hand. And it's one of those things, like Trevor and I always talk about reverse engineering the market you want to get into, the healthcare delivery organizations you want to get into, because they may have specific procurement requirements that are very different than you might think. So from a commercialization standpoint or cost reimbursement standpoint, and just an acceptance and adoption standpoint, these are all things that I think are still thought about way too late in the game.
And it's the same thing with cybersecurity. A lot of healthcare delivery organizations have very specific guardrails around cybersecurity now because there have been so many data breaches. So they don't want to just accept any medical device that has some sort of connection into their environment. They want to see proof that it meets all these requirements, and that means you have to design the device and you have to have the right claims with the device to show that proof and that traceability and that body of evidence, which I think is still a little bit of an awareness challenge in the industry.
That's for sure. I mean, if I take a step back and think about this breakthrough impact is the guiding light that we have or the vision or the mission or the purpose that we share, and then we're going to try to deliver on that. It starts with deeply understanding the problem and knowing it's worth solving and knowing what the ideal solution would be and developing criteria for the ideal solution. And in MedTech, you know, that includes what's going to, how you're going to change and improve the clinician's workflows, how you're going to change and improve patient outcomes. But there are a host of other things. And you can look at the Stanford biodesign program for a set of filters that you should use in screening your innovation and can you move it forward, and it relates to things we talked about: reimbursement, regulatory, IP, things like that.
When we think about cybersecurity, it used to be kind of a nice hack for your first generation product to be unconnected and inaccessible to the user without tools. So if we could say, look, we know cybersecurity is important, but for our first generation device, we're just going to make it not available and open to the outside world. Well, that causes a problem because you just mentioned that we have to understand the customer and the value we need to bring today, frequently, maybe the majority of times, is going to relate to devices that involve hardware and software that are connected to the outside world to deliver the value or the impact. If it's connected to the outside world and it's needed by the customer, then cybersecurity comes as a mandatory requirement.
If we have capital equipment, for example, and we need to do things like software upgrades, whether that's in the field or via the cloud, then we are also triggering the need to bring cybersecurity to the forefront. So in our development, Christian, a lot of times in the very early stages when we're thinking about the configuration or the architecture of a product, we will look at some of these requirements, standards, and testing things that we need to verify and validate, and we'll make sure that our design choices make it easier to do that well. And so, I mentioned electrical safety, we can look at a number of other areas, but that's a great example of making sure that we get a design review, making sure that we make good choices.
And I'll just say that I come into this podcast fairly naive to cybersecurity. I've worked on projects where we've had to prepare and submit to FDA on cybersecurity, but I am far from an expert in that area. And so one of the things that I wanted to think about as we move forward when Bessel moves forward with our clients and helping them aim for the proper target and from the beginning make good choices is what we should be doing and thinking about with cybersecurity at the architecture stage so that we make good choices. And I was curious to learn from you when I joined this conversation, what are some of the key tips that you have for folks? And also, honestly, everybody loves a good story, and we learn a lot from failures and mistakes of others. I'm curious if you've got any examples of when you're developing a piece of capital equipment, for example, and it's got an embedded hardware with embedded firmware, like a microcontroller with a motherboard, PCBA, choices you make that sounded like a good idea at the time. And I would just wonder if you've got any examples of where people put themselves into a jam on cybersecurity without realizing it.
You know, I think it's something that is an especially easy situation to find yourself in. Cybersecurity, the way I usually like to put it, comes across as kind of an exploratory process more so than a prescriptive process. With most regulations, most standards, how a product needs to be built, there's an acceptable tolerance and then there's an unacceptable level. With cybersecurity, there is a lot that can go into it. You don't really know what the risk in the product is going to be until you start to test it, you start to model it, you start to analyze it. And even those risks, once you realize them, can be wildly different from your expectations. And so it is hard to have a prescriptive set of: these are the things that you need to do, this level of control is good, this level of control isn't.
When you're talking about situations where someone gets painted into a corner, the biggest thing that we see is a lack of preparedness around just how much goes into cybersecurity. But from a technical perspective, one really big thing, and if you were to pull anything as a main takeaway, is what components you put into your code. So when you're pulling in all of these third-party libraries, you're building out your software bill of materials. It's obviously an extremely important part of the submission, and it's been one of the FDA's favorite things to push back on recently. That is a situation that can be a product killer. You can integrate this critical component, your whole code base is basically wrapped around it, and then you step back and realize, "Wow, there are security holes in this and the vendor is not going to fix them." They're not a medical company. They aren't subject to the same regulations. They're saying, you know, "Hey, this is on you for picking that."
So stepping back and having a process for vetting any components you're bringing into your product is key. It's going to make development more complicated, there's no way around it. Having a subset of what you can choose is going to be harder, but it's going to allow you to build a much, much more effective product, and truthfully it would clear out a large majority of the problems that we're seeing come back relating to some of these early design decisions.
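To make that vetting step concrete, here is a minimal sketch (not from the episode) of a single third-party component recorded in a CycloneDX-style SBOM alongside the questions you would want answered before the dependency gets baked into the design; the component name and the vetting fields are hypothetical.

```python
import json

# Minimal CycloneDX-style SBOM with one third-party component.
# The component and the vetting notes below are hypothetical examples.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {
            "type": "library",
            "name": "example-tls-lib",               # hypothetical dependency
            "version": "2.4.1",
            "purl": "pkg:pypi/example-tls-lib@2.4.1",
        }
    ],
}

# Questions to answer *before* the component is wrapped into the code base.
vetting_notes = {
    "example-tls-lib": {
        "actively_maintained": True,          # is the supplier still updating it?
        "published_end_of_life": None,        # rare for open source, per the episode
        "known_unpatched_cves": 0,
        "used_in_safety_critical_path": False,
    }
}

print(json.dumps(sbom, indent=2))
print(json.dumps(vetting_notes, indent=2))
```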
Okay, Trevor, especially when we prototype and we want to make something fast, we reach for something that's at hand, even in software modules or components, as you're talking about, something at hand, something we're used to working with. We want to make something work quickly. Sometimes that gets frozen into the design over time. How does someone who's working at the earliest stages look at software components for cybersecurity? Like, is there any type of resources available that would let you understand the, oh, threat profile might be a specific cybersecurity term and that might be a wrong usage of it, but the level of concern that you would have about including it as a part of your product. How could we, how could teams that I work with look at that and make good choices when they're just trying to make a first learning prototype and see if they can get something working?
I think part of it is that it's such a common problem, and it is really hard. And you know that exact situation: if you want to see if something works, you're trying to just get validation for your idea and understand you really do have product-market fit, do what works and don't worry about it until you're moving towards these steps. But that situation where it gets baked in is what becomes hard. So when we're really looking at what the FDA is concerned about and what the manufacturer needs to be concerned about, if I were to pick two, it's: are the suppliers for that component still updating it, and have they said when they're going to stop? So end of life and level of support, those are two things you should start with.
Now, it's really hard. Well beyond 99% of open-source components do not have a published end of life, probably 99.9%. It is a very, very small subset that do have a published end-of-life date. If you are on a service contract with someone, let's say Microsoft, and you have Windows 10, or Windows 11 now, on your device, well, Microsoft is going to publish when they stop supporting that. So trying to find that information is really hard. And if you can't, that doesn't mean you need to stop entirely and you shouldn't use that component. It means that you should just pause and think, "Where are we using this? Is this going to be in a safety-critical component?" If we're using a component that has not been patched in two years, might have some known vulnerabilities, and likely isn't going to receive any support, we probably should not use that in a Class C component of the device. If we're using it in a Class A component, maybe it's not going to be as big of a deal.
So understanding what the worst-case scenario is if something were to fail in that component, and if that failure were to be as severe as it could be, lets you gauge how much due diligence needs to go into this. Do we really need to track this down, or maybe even create our own component? For the higher risk profile, there's a good chance you should lean in that direction. For a lower risk profile, you might be able to peel back a little bit of those controls. And so just having that understanding, you know, mapping it to your specific device, is going to give you the most effective results.
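A minimal sketch of that triage, assuming you have already collected the last-patch date for each dependency and the IEC 62304 software safety class of the unit that will use it; the thresholds and the example component are illustrative assumptions, not rules from the episode.

```python
from datetime import date

def acceptable_for_class(last_patch: date, has_published_eol: bool,
                         safety_class: str, today: date) -> bool:
    """Rough triage: the higher the IEC 62304 class, the more support we demand.
    The thresholds here are illustrative, not regulatory values."""
    months_stale = (today.year - last_patch.year) * 12 + (today.month - last_patch.month)
    if safety_class == "C":          # failure could cause serious injury or death
        return months_stale <= 6 and has_published_eol
    if safety_class == "B":
        return months_stale <= 12
    return months_stale <= 24        # Class A: no injury possible

# Hypothetical component, unpatched for ~2 years, destined for a Class C software unit.
print(acceptable_for_class(date(2023, 5, 1), has_published_eol=False,
                           safety_class="C", today=date(2025, 6, 1)))   # -> False
```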
So number one is building on a shaky foundation: the components you put into your product, where especially for the high-risk software units or modules you have to take more care. That's the number one thing. What else happens at that stage? Especially, people talk about freezing the design for verification and validation, that's the moment you say this is what's going to represent the manufactured product, and I want to verify and validate this, just as you're moving into that stage. What else do you see about decisions, just from lack of awareness or understanding or depth of consideration, that medical device companies are making that leads to problems?
One of them, and Trevor, you can touch on it too, is that it's not just the software. We've been talking about software, but even a simple microcontroller can greatly impact cybersecurity. We had a client not too long ago that made a design decision, several years before they started working with us, to use a specific microcontroller that didn't support secure boot. Secure boot is a function that the FDA requires for a device to reduce the risk to an acceptable level, so that the code can't be tampered with, among other things. So that manufacturer had to strip back a lot of the functionality to get their device on the market.
And this is a publicly traded company. So they had to change their story about what they were telling their shareholders and their board of directors, everybody else, because initially they were going to get this great product with feature X, Y, and Z, all this stuff on the market. Now it's just feature X. So what happened, right? And they had to come up with a narrative to support that. And it ultimately set them back quite some time. So I think when you're making these decisions, not just from a software perspective, but also a hardware perspective, it's important to look at cybersecurity and consult with somebody that's an expert in this area because like, we could have easily in a ten-minute discussion told them not to use that microcontroller, right? But they probably weren't even thinking about it.
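To make the secure-boot point concrete: what the feature ultimately provides is a boot chain in which each stage cryptographically verifies the next image before running it. The sketch below is only an illustration of that verification step, assuming the Python `cryptography` package and a hypothetical firmware image and vendor key; on a real device the check lives in the microcontroller's boot ROM, which is exactly why a part without hardware support for it is so hard to retrofit.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.hazmat.primitives.serialization import load_pem_public_key

def firmware_is_trusted(image: bytes, signature: bytes, vendor_pubkey_pem: bytes) -> bool:
    """Return True only if the firmware image was signed by the manufacturer's key.
    Illustrative only: real secure boot performs this check in the MCU's boot ROM."""
    public_key = load_pem_public_key(vendor_pubkey_pem)
    try:
        public_key.verify(signature, image, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False

# Hypothetical usage: refuse to run or flash anything that fails verification.
# if not firmware_is_trusted(image_bytes, sig_bytes, vendor_pubkey_pem):
#     refuse_to_boot()
```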
That really hurts. It hurts because of, as you mentioned before, the human impact, meaning a delay in getting something that's valued and needed to help patients, and the financial impact. If it's a startup company instead of a publicly traded company with more resourcing, I presume, a startup company can sometimes run past a funding milestone and through the runway of the company. And so that can really impair the prospects of the company, and in some situations, in bad environments, that could be make or break for the company itself, also putting the innovation and the patient impact at risk.
So I think my big takeaway, and I've been thinking about what startups need to do now that's different than ever before, is to try to think through and work through their plans in certain areas. We talked about commercialization: there really needs to be more voice of customer, and there needs to be a detailed go-to-market and commercialization plan for who the customers will be, how you will sell to them, and how you will work in that environment.
When we look at things that we need to do now on the development side, more than ever before, it's always been a good idea and best practice to bring in, say, an electrical safety compliance engineering expert from outside and have them working with you hand in hand throughout the program. Conducting design reviews, pre-scan testing, input into requirements and risk analysis. These are all things that used to be done internally by the development team, and then we'd go out to a test house and test our way through the standard. It doesn't work that way anymore, because there's too big a chance or risk that you're going to miss something that will require revising the product and delaying. So I now add cybersecurity to that list of getting an expert to help you hand in hand.
What's great about it, from my point of view, and I understand that you do this at Blue Goat, is that an expert realizes it's not going to be a lot of revenue or a significant engagement to provide that insight, yet the value you can bring with ten minutes can be invaluable. Yeah, it can save you like $500,000 in cost overruns and delays and everything else from a ten-minute conversation, right?
Yeah, so you and possibly others are willing to engage on that steering in a way that's cost-effective and agile and able to deal with a lot of unknowns. You know, typically if you're going to do a cybersecurity analysis, you'd like to know, as Trevor was saying, what's in the software. It's nice when you can look at decisions that have been made and decisions that are about to be made, and look at the health as you move through time. So I'm curious to understand, in your business, you showcase the success that you will lock in and help a company succeed with FDA on cybersecurity. To achieve that, ideally, for the company to streamline and reduce expenses and cut the time of development, you'll be working with them from the earliest stages. What do you do with companies? How do you work with them? Whether it's informal, a quick consult on the phone, or a more formal, I'm going to call it a design review or a technical review of cybersecurity, but I'm curious what you call it, what you do, what all that means.
Now, I love something that you were talking about earlier around the different types of risk: we have the safety risk, we have the security risk, and if we're missing things, we're going to run into overruns, which opens the door to some more risk here, which is that business and financial risk. And I think that's really what we're mitigating during some of these early conversations, when we're sitting down with these manufacturers and saying, "Let's sit down and talk about this." Cybersecurity risk is what we're trying to address and cover, but what we're really doing is derisking this from the business perspective.
So when we're sitting down, we're taking a look at, you know, these early decisions: what hardware components are you using? What level of encryption are you using? What data is transferred to where? How are you handling it, and in what format? Are you using a standardized protocol? Are you running this over HL7 or FHIR? Are you trying to do something proprietary and then, you know, convert it over later? So really taking a look at what your product is doing and how you're trying to make it do it, going down into, you know, the requirements level.
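For the data-exchange question, here is a minimal sketch (not from the episode) of what "running it over FHIR" can look like in practice: posting a standard Observation resource to a FHIR server over HTTPS. The endpoint, token, and values are hypothetical; the point is that adopting the standardized resource model and TLS from the start avoids inventing a proprietary format and converting it later.

```python
import requests  # assumes the requests package; endpoint and token are hypothetical

# A standard FHIR R4 Observation resource (heart rate, LOINC 8867-4).
observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {"coding": [{"system": "http://loinc.org", "code": "8867-4",
                         "display": "Heart rate"}]},
    "valueQuantity": {"value": 72, "unit": "beats/minute"},
}

resp = requests.post(
    "https://fhir.example-hospital.org/Observation",   # hypothetical FHIR endpoint
    json=observation,
    headers={"Authorization": "Bearer <token>"},        # placeholder credential
    timeout=10,   # requests verifies the server's TLS certificate by default
)
resp.raise_for_status()
```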
What I always like to talk about when you're looking at a product's requirements is that a functional requirement is what it does, and a non-functional requirement is how it does it. The MedTech innovators and the manufacturers come to us and they know what they want to do. They know what this product has to do. The how it gets done is usually where these questions come up. So we're really trying to help answer that how. How are you going to deliver this therapy safely? How are you going to transfer this data securely? How are you going to avoid all of these pitfalls?
Taking a look at the product, taking a look at the entry points, the architecture, the hardware. How strong is the hardware? Is this something where you can just pop off a couple screws and open it up, or do you need a crowbar? These types of things are all important. And I think it helps to take this down to a single task or a single activity, because it can obviously be super broad, and there are tons of different ways we can help out in consulting. But I think the single activity that's going to be the best at this phase is threat modeling.
And so a threat model is we're taking a look at the product. We're saying, “What is everything that can go wrong? What is everything bad someone could do to this device?” And then, what can we do to design it a little bit more safely? And you can start with this threat modeling the second you have the idea for the product. You don't need to have a completed product. You don't need to be, you know, hands-on keyboard or have a hardware prototype. This is going to let you develop a more effective prototype.
Well, people do threat modeling all the time. They probably don't even realize it. I think in cybersecurity, and with medical devices, we overly complicate it. But, like, I do threat modeling in my condo. I know where my doors are, what kind of locks they have. I know the windows, what kind of locks they have. I know the entry points. I know where the cameras are. So I know where somebody could break in here, and I have the appropriate defenses, right? It's kind of the same thing. We go park in a parking lot, and it's the middle of the night, and we have a nice car. Maybe we should park under a light that's actually working, right? Or maybe we shouldn't park so far away by the garbage can where somebody can easily break into the car. I think we naturally do threat modeling, but we overly complicate it sometimes with cybersecurity.
Yeah, I think that's a great point about threat modeling just being something that you do. That's something I do. I'm not sure it's something everybody does, but I'm… Well, I'm hypervigilant about these things for some reason. Maybe it's because I used to be in the military, I'm not sure. So yeah. You know, maybe even if it's not to the same extent, we can take the apartment example. I can look around my apartment right now and go, "What are going to be the entry points?" There's a window there, a window there. I'm on the third floor, so I'm probably not that worried about the windows, and I leave them open all the time. But I have a door right here. I don't have a lock on this painting. I don't have a lock on this wall, because I wouldn't expect someone to try to break through the wall. That's not a very big risk scenario. That's not a real threat that you would expect.
Would someone… It depends on what you have to offer. It depends on what you may have in there that's of value, right? That's the other part of it. Depends on how valuable the painting is. Trevor, I think Christian is saying if it was valuable enough, people would come with that. But… Point taken. Point taken. So you have a lock on your door. Let's go one level more specific, right? Which is, I like what you talk about. So threat modeling is a first step in understanding the situation that you have, how you're planning to do it, and thinking about the risks or the threats from cybersecurity.
I draw an analogy to different elements of risk analysis that have been formalized for quite some time in medical devices. And maybe the best one to talk about as a bridge, you know, leading to more understanding of cyber, would be around use risk analysis and usability, because we have to understand how the device is used. You were talking about this a little bit. For a use risk analysis, we look at the tasks that are done by the different types of users, and we look at the places where they can go wrong and we can have harm.
So, go a little deeper. If I want to sit down with a piece of paper and a pen on a single page and draft a threat model for a software-containing product that I want to develop, what does that look like? The simplest version but specific to a product. What is the threat model? You mentioned connectivity. You also mentioned what you built into your product itself. What are the nuts and bolts or like a one, two, three on how to put a threat model down in the earliest stage while you're still thinking about your product?
So, I love this, and I think that tying it back to a risk analysis from a safety perspective is a great way to think of it. This is where we start to see a different process, though. For threat modeling, there's the MITRE playbook for threat modeling medical devices, which is seen as the gold standard for this process, and it's going to ask us four questions: What are we working on? What can go wrong? What are we going to do about it? And did we do a good enough job? So really breaking it down into those four questions. What are we working on? Can we really understand everything about this device? So, you know, what are your entry points? What are you using? And then, mapped back to each of those, what can go wrong?
So when we're doing all these assessments, it should be from a worst-case scenario. This is factoring in, you know, how likely, and I'll put a little asterisk next to that so I can revisit it, and then what the impact of exploitation is going to be. Now, the reason that I hesitated on "likely" is this is where we branch away from a safety risk assessment. Risk assessments per ISO 14971 are done on the likelihood or the probability of something going wrong. Cybersecurity risk assessments are done based on the criteria that must be true for X to happen.
So going back to the lock on the door example, you would say someone could not access it remotely, and so you need physical access. Someone else needs to interact with it: I need to unlock the door if someone's going to open that door. So having a user interacting with it is going to add more complexity. Are there factors outside of the attacker's control? Well, not really. The door's locked or it's not. It's not like the door is sometimes locked and sometimes not, and they don't know the timing. So these are the types of questions that we're asking to get the criteria instead of a likelihood. It's actually explicitly disallowed to use probability for cybersecurity risk assessments, so that we can have a bit more of an exact answer.
So when we're doing the assessment, we're following a similar process, but we're looking at it through that lens. Going back to those questions, once we understand, you know, what can go wrong, it's a matter of thinking about the controls. If we say, "Well, a worst-case scenario here is, you know, we have a PACS system and someone rips out all of these medical records." Great. Well, what can we do about it? Can we encrypt all the medical records at rest? Can we encrypt them in transit? How can we protect access to that PACS system? Does it need to be on the internet? Can we hide it internally? And that's going to start these conversations on where to go. So it's very much a top-down exercise. You start with just the device. What are the concern points on the device? What are the concern points with those concern points?
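As an illustration of how one row of such a threat model might be written down, here is a minimal sketch following the four MITRE playbook questions and recording exploitability criteria rather than a probability; the device, threat, and controls shown are hypothetical and only echo the PACS example above.

```python
from dataclasses import dataclass

@dataclass
class Threat:
    what_we_are_working_on: str      # the asset or entry point under analysis
    what_can_go_wrong: str           # worst-case outcome, not the most likely one
    exploit_criteria: list[str]      # conditions that must be true (no probability)
    what_we_do_about_it: list[str]   # design controls
    did_we_do_enough: str = "open"   # open / mitigated / accepted after review

# Hypothetical entry for a PACS-style image archive.
records_theft = Threat(
    what_we_are_working_on="PACS archive, hospital network interface",
    what_can_go_wrong="Bulk exfiltration of patient imaging records",
    exploit_criteria=[
        "Attacker can reach the archive from the hospital LAN",
        "Records are stored or transmitted unencrypted",
        "No authentication is required to query the archive",
    ],
    what_we_do_about_it=[
        "Encrypt records at rest and in transit",
        "Require authenticated, role-based access",
        "Keep the archive off the public internet",
    ],
)
print(records_theft)
```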
Thanks, that's very helpful. That gives me a lot of signposts to follow in that area. Christian, you talk to folks like me who are not experts in cybersecurity and you boil it down, and I know that you've talked about misconceptions as well as the 101 Trevor's been covering: "Hey, this is what you need to think of. What do you put in your product, and then what can go wrong," and so forth. What are the misconceptions about cybersecurity that we should all be aware of?
I'll quickly hit the top five. One of them we hear quite often is people think their device does not need cybersecurity. They don't think it is a quote cyber device. Technically anything with software and any way to connect to it is a cyber device, and has to be cybersecurity tested, has to have the risk assessment, everything else. So, this includes Bluetooth Low Energy, NFC, even a USB port because somebody can plug a USB drive into that device that has malicious software on it and impact a patient monitoring system, for example. So, that's the first one we hear quite a bit.
The next one, this is still a major misconception that drives me a little bit crazy, because I hear investors argue with me all the time about it, especially investors I know, for some reason they still don't get it. They like to say, "Oh, cybersecurity is just about data protection with medical devices." I'm like, "Kind of, but the primary driver is patient safety." Because if you think about it, you can hack into a surgical robot that's performing surgery on somebody's spine and paralyze that person. If you have a defibrillator and somebody wirelessly connects to it, they're shocking your heart over and over and over while they're also stealing your personal health information or protected health information. Which one is the priority? Probably the patient safety. I can't recover from dying, at least we can't yet, but I can certainly recover from my medical records being stolen.
The third one, which we just had a call earlier today about, is this misconception that software developers understand cybersecurity. They will all tell you they are experts in cybersecurity, but the reality is, from my experience, I would say one out of a hundred actually know about cybersecurity. And it's not their fault. It's a very different skill set. Software developers, their job is to build software that's functional, that works, has a great user interface, and doesn't have too many bugs. Our job as cybersecurity professionals and hackers is to get the software to do things it was never intended to do. To use abuse cases against it, not use cases; to send it malformed data, to make it do things that the software developer had no requirements to defend against, basically. So they're very different skill sets, and you really need both skill sets to bring a product securely to market.
The other one we hear… Christian, before you go forward on that, excuse me, but I'm just going to say, well, that's provocative and maybe to some people, not me, but to some people that's inflammatory. And I'll give an analogy. An R&D engineer needs to take all of the considerations into account and develop so that you meet all of the constraints. So if I take your example, software engineers are responsible for developing something that addresses cybersecurity. Many of them have an awareness and are working to deliver something that will meet the requirements of cybersecurity.
I think there's a good reason why many software engineers therefore feel that they have some level of expertise. But one in a hundred? Okay. So you have a pure focus and expertise in cybersecurity. I guess what you're saying is this: unless you have that degree of focus, there's too much to know, and the situation is changing too rapidly, for anyone to hope to stay on top of it as well as handle the development aspects of software engineering. Please explain your statement, because I want to understand.
So let me clarify a couple things here. I'm not trying to say that software developers, all of them, don't understand cybersecurity. There are a couple of contributing factors. One of them is that most of them do not understand cybersecurity, and part of that goes back to what you said, Chris: often no one gave them any cybersecurity requirements. So how are they supposed to develop the software when they don't even have a cybersecurity requirement? If they had the requirements, and we can argue where those requirements should come from, then they may have been able to develop more secure software. So that's rarely dictated to them.
In a job I had, I managed a team of 25 software developers once, and they didn't really understand cybersecurity, but they also weren't given the requirements sometimes. Even when we tried to give them the requirements, they still didn't understand, because like Trevor said, it's not a functional requirement. It's a non-functional requirement. And functional requirements are fairly easy to understand. A non-functional one is like: design this function so that if somebody sends it a bunch of weird data, it won't crash. That's a very different requirement than: make it take this data and send it over to this parser, or something.
Let me agree with you in this way. Right. So in the absence of requirements, developers need to make assumptions and make good choices. Any developer, however aware and understanding, who's making local requirement assumptions is missing the big picture, because with cybersecurity there's probably a whole range of good ways to do something. But if you pick one approach, one way of doing cybersecurity, it means all the decisions made at the individual requirement or development-unit level need to be aligned with that. And you could have people making decisions that would be good decisions if a set of other criteria were met throughout the product, but if they're not, if it's not taken holistically, then you're going to run into trouble.
So the fact that you brought up requirements as a part of that seems to be another really powerful thing: we need to inject cybersecurity requirements when we develop our requirements. It sounds obvious now that you've said it, but it hasn't been something that's at the forefront of development teams' minds. It's there. I'll tell you, with my clients, it's there and it's being developed, but it maybe doesn't have the equal seat at the table that some other considerations take. So thanks, Christian, for explaining that. You might want to make a few more points or…
I'll make one more point about this, because I don't want to upset software developers too much. We work with them all the time, and there are some great software developers. The other thing, and I'm sure this happens quite often, is a scenario I remember from when I managed that team of 25 software developers. We had this product almost complete, like 80% complete. We were still going to do security testing, do a bunch of testing. The CEO came in one day and said, "We sold a bunch of them. They have to be shipped out the door next week." I'm like, "Well, dude, we're not done with testing. This is going to be full of bugs, full of security issues." He's like, "I don't care. Stop development. Box it up and ship it out the door." So I'm sure this happens a lot from a budget and timeline and deliverable and promise perspective, and that is outside of the software developer's control. No matter how good your gates are, your CI/CD pipeline, if someone forces you to deliver before you've actually gone through those gates, it's going to cause issues as well.
I see. Yeah, that makes sense. I appreciate that deeper conversation, because it elaborated on requirements and on making sure that we pay really good attention, not just get an informal review like, "Hey, are we on the right track here?" but make sure we understand what requirements should be baked in from the beginning to help guide those decisions and the detailed implementation of the software that you described. Yeah, and this ties into the fourth misconception.
Cybersecurity needs to be included in the total product lifecycle, not just a point-in-time study. It's an iterative process. It needs to be designed into the product through the requirements. It needs to go all the way through the lifecycle until someone disposes of the product. It has to be throughout that entire lifecycle. There's still a misconception that, you know, in Q4 of 2026, we're just going to do cybersecurity. Then we're just going to run it like a stability study or a biocompatibility study. But that doesn't work. It's not a point-in-time study. It has to be done iteratively throughout the development.
Because the environment changes, right? I mean, we hear about in our consumer software packages, “Oh, there's a security concern. You need to update your software or you're at risk.” Whether it's your iPhone or your laptop or what have you. Is that the main reason? What are, what are some other reasons why it needs to be a lifecycle process?
A couple of reasons. One is it's going to be much more secure if it was designed with cybersecurity requirements. The other one, and we've had clients have this challenge, is they didn't think about the end-of-life like what happens when a healthcare delivery organization is done with our product and they're going to throw it away or sell it or do whatever. So they did not encrypt their hard drives or their device. And those hard drives were full of client data. People were buying these products on the secondary market and extracting all that sensitive data out of there. So they didn't think about it through the entire lifecycle. But with these types of products, you have to be thinking about it. This is a highly regulated industry. We're talking about patient safety, patient data as well, and we really need to think about it throughout the entire lifecycle.
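A minimal sketch of what encrypting records at rest can look like in application code, using the `cryptography` package's Fernet recipe; the record content is hypothetical, and on a real device you would more likely lean on full-disk or hardware-backed encryption with properly managed keys rather than this simplified illustration.

```python
from cryptography.fernet import Fernet

# Key management is the hard part on a real device (secure element, TPM, etc.);
# generating the key inline here is only for illustration.
key = Fernet.generate_key()
f = Fernet(key)

record = b'{"patient_id": "12345", "study": "CT abdomen"}'   # hypothetical PHI
token = f.encrypt(record)                                     # what actually lands on disk

# Without the key, a drive pulled from a decommissioned device yields only ciphertext.
assert f.decrypt(token) == record
```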
I think it even ties back to your level of business risk. If you're treating cybersecurity as a point-in-time study, there's a lot that goes into it, and every global regulator wants to see cybersecurity addressed through the total product lifecycle. None of them are okay with it being a one-and-done process. And since it is spread out throughout the full lifecycle, as you're going through the pre-market submission, documentation, testing, all of that, there should be evidence describing how you've addressed cybersecurity all the way from when you sourced your requirements.
Whether or not that is present is not always going to be an easy answer, and more often than not, you're not going to see that level of testing mapped all the way back. So you have to stop and go, "Okay, we could start the testing now. Is this going to be sufficient for the FDA? What are we going to uncover?" We'll start testing three months out from submission, when it's the first time someone's touched their code, and we come back with 3,500 vulnerabilities on day one, and we say, "Well, you know, this is a conversation we need to have. Your product was in the…" Please don't tell me that. It happens all the time. We're not the bearer of good news. It has happened. Okay. Yeah. I guess don't shoot the messenger. You guys come with solutions, too, so that's a plus.
I want to come back to FDA in a second, but what's the fifth misconception? The fifth most common misconception is what I like to call "cybersecurity equals cybersecurity." That's not true. Traditional IT cybersecurity is very different from MedTech and regulated cybersecurity. As Trevor mentioned, we look at exploitability from a risk matrix profile, not probability. That's one thing. We're doing a lot of hardware testing that's very specific. It's very regulated. It involves patient safety, not just data protection. And those three things right there, most traditional cybersecurity firms know very little about.
Yeah, I think hardware especially is a shockingly uncommon skill for penetration testers. And we've been talking about the understanding that developers have of cybersecurity; I think when you get outside of product security, it becomes even more dramatic. Anytime someone asks me, "How do I get into penetration testing? Should I go learn how to code?" I say no. You know, I'm a very good penetration tester, but I'm not a good developer. I don't know how to develop a platform, but I do know networking inside and out. I know how computer networks work really, really well. And that is the fundamental skill for penetration testers and for cybersecurity, where it's just not as much the case for development. There, obviously, writing code and developing a product are the fundamental skills. And so I think that's where you start seeing the divergence really quickly. But once we're in product security on medical devices, it comes back around to how we're securing this code, more so than how we're securing an information system.
Yeah, you mentioned FDA briefly a moment ago. We talked about ways that you can build cybersecurity considerations into design choices and requirements from the very beginning, and about finding things when you do testing before you submit, which can be very disruptive, and trying to prevent that. What about when you have a submission that you present to FDA and then you get a response back from the agency where you don't see eye to eye on the cybersecurity of the product? I want to ask you, what are the things that have surprised you? Because you take great pride in your 100% success rate, I imagine sometimes you get questions that come back that you have to answer to deliver on it.
You know, it is interesting. We do get a lot of clients coming to us in the middle of interactive review, but we don't get too many questions from the FDA. And part of that is we are probably even a little more strict on risk than they are. So when we're working with our clients and we say, "Hey, here are the risks in your device that we've identified. This is what we recommend you get rid of," it's going to be pretty close to everything. "This is probably what's going to be acceptable as per FDA definitions," and that's going to be a lot less risk removal. So part of what we do is we try to go the extra mile, above and beyond, so that the FDA doesn't have room to ask any questions.
But some things that they do come back with, and we'll see this especially when clients are coming to us for help once they're already in review, are the types of vulnerabilities that the FDA is concerned about that no one in the cybersecurity industry cares about. So I'll pick an easy example here: self-signed certificates. Everyone in cybersecurity, every pen tester, kind of rolls their eyes when they hear that. The certificate is what pretty much allows for authentication of a website or web page, whether on a local network or on the internet. If the FDA sees that you're signing your own certificate, so you are saying your own encryption is secure and trusted, someone could manipulate the certificate and then modify the information on the website, and you wouldn't be able to know. Normally, you have a central authority managing this.
So, having said that, the odds of someone breaking in, replacing certificates, and giving you a fake website to try to steal your information, it doesn't happen very often. And so cybersecurity professionals don't usually give it too much focus. The FDA, however, is very particular about things like this. Anything relating to data privacy and encryption, they lean into to an extra degree. And so we'll see this vulnerability come back, and I've even seen, you know, some larger medical companies send us their risk scoring based on vulnerability severity, and I'll say, "Hey, I see you have self-signed certificates as a low here. And while I agree with you, you've got to bump that up to a high, because you're not going to be successful with regulators if you leave it there."
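To show the difference in practice, here is a minimal sketch, assuming the Python `requests` package and hypothetical host names: standard TLS verification succeeds against a CA-issued certificate, but the same call fails against a self-signed one unless you deliberately disable verification or explicitly pin that certificate, which is the gap regulators worry about.

```python
import requests
from requests.exceptions import SSLError

try:
    # Default behavior: the server certificate must chain to a trusted CA.
    requests.get("https://self-signed.device.local", timeout=10)   # hypothetical host
except SSLError:
    # A self-signed certificate fails that check. The usual shortcuts are
    # verify=False (which silently removes the authenticity guarantee) or
    # explicitly pinning the device's own CA bundle, as sketched below:
    requests.get("https://self-signed.device.local", timeout=10,
                 verify="device-ca.pem")   # hypothetical CA bundle shipped with the device
```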
Well, one thing I would say is that I really believe empathy is a superpower, empathy in the context of 360 degrees within our team, really caring about each other and understanding where you're coming from so we can work better together for the business, even while we are really aggressive about achieving milestones and making the best product and always asking how we can improve. But that also applies to outside partners or stakeholders like FDA.
And so, Trevor, I definitely see your point, but I've found, and it sounds like you practice this, that it's about understanding the real concerns that FDA has. While you may not agree, these are reasonable people; their job is to balance public health, and this is something that they've zeroed in on. So it's good to know that, through many submissions with FDA, your experience is understanding what they care about and making sure that you can protect your clients by addressing those concerns, as well as doing a, homegrown isn't the right word, but an internally developed assessment, even with an expert like Blue Goat. If you rely only on your internal consistency and you don't look around and take advantage of what you can learn from respecting what FDA has to say, then maybe you're missing something.
So I think it's good that you have that level of insight and experience, just practically, to be able to get things through. It just depends on your own point of view whether there's a structural issue. You know, some people will battle FDA, rightfully so, on certain things, but oftentimes it really just boils down to: it's a reasonable position to have, and reasonable people can disagree, but let's keep moving forward together. So I just wanted to share that, because with what you do with your clients, you have to understand their business. You have to understand the considerations and how to be appropriate with cybersecurity, but be true to the real goal of patient safety and delivering something that's secure. And you have to be harder on the companies than they will be on themselves, and you feel that you need to be harder on the companies sometimes than the FDA would be, in certain areas where you gauge the risk to be higher. So it's interesting how we have to work and integrate feedback from all these different stakeholders. I respect what you guys are doing.
Well, we're coming up on time here. I like to go around the room and get some last-minute words of wisdom. So, I typically start with Trevor, and I'll start with Trevor again, then I'll go over to you, Chris. You know, I think we've covered so many amazing points here, but I really do want to emphasize what we just covered, which is that these are reasonable people who have reasonable points, and they may come to disagree. Do I disagree with certain assessments that the FDA comes back with? Sure, and I'm sure they disagree with some assessments that we would come back with. But overall, they always make a good effort at being reasonable and helping us get to the problem. Their goal is that you're putting out a safe and effective product. That's it. That is a noble goal. That's something that we want to support. And so I think that even though these situations may come up, disagreements between reasonable people, when this happens we try to get to the source of a very real problem and get to a really effective solution. So I think it's all a good effort that we have here.
All right, awesome. What about you, Chris? Yeah, a couple of things. First, in response to your point, Trevor, I heard a quote attributed to Dr. Thomas Fogarty: don't take the problem inside yourself, but keep the problem in front of you and work on it, and work on it together to solve it. I think that's kind of what you just expressed, and I think that's profound. It's the way we try to work together in this industry.
Related to the detailed conversation we had, I had a couple of takeaways: that we should get cybersecurity review at the earliest stages to help guide our design decisions on architecture and development, and that we should make sure cybersecurity is reflected in the requirements, so that all of the engineers who are developing the product have something to go by to guide the decisions that are made at the very detailed level every day when they're making choices about how to build the product. And I just want to also express gratitude for what you guys are doing for the industry. This conversation was a lot of fun.
Awesome. And I'll end with one thing. We were talking about the FDA, and I think, Chris, you used the term about us giving the innovators kind of a hard time with cybersecurity from a business risk perspective. When Blue Goat Cyber works on a product, we want to make sure, to the best of our abilities and the best of our due diligence, that that product is safe on the market, that somebody can't hack into it. So we're naturally going to do probably more than any regulator wants or expects, because it is super important to us, and we're putting sort of our stamp of approval on that product as well. I love it, and what we all do in our industry is put the patients first. So I appreciate that. Thanks, Chris, for being a guest, and thanks everyone for tuning in. I hope you found this episode of The Med Device Cyber Podcast valuable and hope to see you on the next one.