In this episode of The Med Device Cyber Podcast, the hosts delve into the intricacies of the four security architecture views mandated by the FDA for medical devices. They meticulously break down each view: the Global System View, Updatability and Patchability View, Multi-Patient Harm View, and Security Use Case Views. The discussion emphasizes the importance of accurately defining the device's scope, which often extends beyond the physical device to include companion apps, cloud services, and update infrastructure. Listeners will gain insights into securing the entire product lifecycle, from initial development to decommissioning, with a keen focus on preventing multi-patient harm and ensuring robust security across all device functionalities and data flows. The hosts also highlight common pitfalls manufacturers face when developing these views, offering valuable advice for product security teams, regulatory leads, and engineers navigating FDA premarket guidance and product security challenges.
Key Takeaways
1. The FDA defines four critical security architecture views: the Global System View, the Updatability and Patchability View, the Multi-Patient Harm View, and the Security Use Case Views.
2. The Global System View requires a comprehensive understanding of the device's scope, including physical hardware, software components, cloud services, companion apps, and the update infrastructure.
3. The Updatability and Patchability View focuses on securing the end-to-end update process, from the creation of the update package to its secure installation on the device, including the security of the development environment.
4. The Multi-Patient Harm View necessitates assessing scenarios where a compromise of one device or user could lead to harm across multiple devices or patients, emphasizing risk- and impact-based approaches.
5. The Security Use Case Views mandate addressing security for every specific functionality, data flow, process, and state of the device, often aligning with the device's functional requirements.
6. A common mistake is incorrectly defining the device's scope, neglecting elements like update infrastructure or interoperable components, or failing to provide sufficient detail and rationale for the architecture design.
7. Proactively incorporating security requirements into functional requirements during product design can prevent significant rework and address FDA expectations more effectively.
Hello and welcome back to the Med Device Cyber Podcast. We are going to be talking about a very interesting topic today, looking at device security architecture. So we are going to go into how we define the scope of a device, how we are considering security on all these entry points and internal components, and then looking at what the FDA is really concerned about as far as any specific use cases and how we are addressing security there.
I am joined by our co-host Christian Espinosa. How are you doing today, Christian? I am pretty good. A little bit jet-lagged. I traveled about 28 hours this weekend and I haven't slept much, but it is all good. I had the same flight, but I got in on Friday, so I got to recover this weekend. You got in yesterday, right? I got in Sunday night, so yeah, yeah, it is a little bit rough.
Cool. Yeah, I'm excited to talk about this topic. And I know the FDA defines the security architecture views; there are four of those, which are commonly misunderstood. So, how are you doing? Did you recover from your jet lag before we jump into the topic? Yeah, I have my secret patented recipe for avoiding jet lag, and it is don't sleep for 48 hours during the flight process. And then you are so tired when you land, it doesn't matter what time it is, you will just wake up. So, and then a healthy dose of caffeine throughout the day to make sure that I don't break the pattern.
Yeah, maybe I should try that. I just had a nitro cold brew and finished all of it. So, there you go. Yeah. So, with these architecture views, there are four main ones that the FDA wants to look at, and we will get into each one. First is the Global System View. What is the device?
Next is the Updatability and Patchability View. How are you providing software updates to the device? Then we are looking at the Multi-Patient Harm View. So, what situations can lead to multiple individuals, multiple products, multiple systems getting harmed with one attack? And then the Security Use Case Views, which I think is the most difficult one to really fully wrap your head around. That is where the FDA wants to see all the different states of the device, different functionalities, different process flows, and how you are addressing security for each one of those different areas.
Typically, when we are talking about architecture views, the term comes from software documentation, and software documentation is usually more mature than cybersecurity documentation. And I feel like this is where a lot of engineers can get confused about the distinction. If you say architecture view for software, it is purely: what is the device? How can we outline a data flow diagram, an architecture diagram? The architecture diagram covers all the components and the boundary of the device: where do we consider something to be out of scope versus in scope?
And so I think the misconception that leads to a lot of confusion when building out this cybersecurity documentation is that the terms are used interchangeably. It is not confined to architecture documentation; these interchangeable terms run throughout cybersecurity and software. But personally, I think that is a big area that leads to deficiencies coming up or other problems with this documentation. Well, I think the clarity needs to be around the fact that the FDA specifically defines security architecture views, those four views you mentioned, which is very different than, like you said, a typical architecture diagram for a software product or device.
Right, yeah. Oftentimes security architecture views will just get abbreviated to architecture views when handling the documentation, and architecture diagram and architecture views are close enough that there can definitely be some misconception and a little bit of confusion there. But yeah, I agree, the more clearly we draw that distinction, the better.
Now where it does sort of blend in is when we are looking at that first security view, the Global System View. That is going to be fairly similar to an architecture view under a traditional software scope. We are looking at what is the total scope of the device and what is each component within the device, and that helps us build into some of those further discussions: what are the individual use cases, how could multiple patients be harmed, what are the interoperability components. So I think the software documentation does lead in as a very valuable first step to the cybersecurity documentation.
Yeah, so you are saying the FDA security architecture view that is the most, I guess, related to typical architecture views is the Global System View. So, there are four of them and that is the one we are talking about now. The Global System View, which does include the overall architecture but includes more than just that. It is internal components, any connections to a cloud service, a mobile app, and also the update infrastructure. It is like the whole view of the system. Right.
Exactly. And this leads into something that could be its own conversation entirely: defining the scope of your device. It is a very common situation now to have sort of a three-piece suit with a medical device. The medical device connects to a companion app, and that companion app connects to a cloud service in AWS or Google Cloud, whatever it may be.
And so where are you drawing the line? What is the scope of the device? Is the phone that the mobile app is sitting on part of that scope? Is that going to be something included in the Global System View? Is it just referenced as an interoperable component? But I think that a lot of manufacturers will just draw the line around the device when it needs to include, like you mentioned, the update infrastructure, these cloud components, any additional accessory apps.
We also need to talk about what is going to connect to the device. So, we will say this app can sit on a mobile phone. Do we have to address the mobile phone itself as far as security? That is out of scope. That is going to be handled by Apple or Samsung or whoever is creating the phone. But how are we interacting safely with it? How are we interacting with a printer that the device can connect to or to a workstation that it can connect to? So, we need to reference these as part of our Global System View. Even if we don't necessarily have control of them, we need to figure out what we do have control of.
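A Global System View scope discussion like the one above can be pictured as a simple scope inventory. A minimal sketch follows; every component name, in/out-of-scope call, and rationale string here is hypothetical, just to show the shape such an inventory might take.

```python
from dataclasses import dataclass

# Hypothetical sketch of a Global System View scope inventory. The point is
# that every component appears with an explicit in/out-of-scope decision and
# a rationale, including components the manufacturer does not control.

@dataclass
class Component:
    name: str
    in_scope: bool   # inside the manufacturer-controlled trust boundary?
    rationale: str   # why it is (or is not) part of the device scope

global_system_view = [
    Component("device firmware", True, "core device software"),
    Component("companion mobile app", True, "manufacturer-developed accessory"),
    Component("cloud telemetry service", True, "manufacturer-operated backend"),
    Component("update distribution server", True, "delivers firmware updates"),
    Component("patient's smartphone OS", False, "handled by Apple/Google; only the interaction is documented"),
    Component("hospital printer", False, "interoperable component; only the interface is secured"),
]

# Even out-of-scope components must still appear in the view with their interfaces.
out_of_scope = [c.name for c in global_system_view if not c.in_scope]
```

The useful property of writing it down this way is that "we forgot the update infrastructure" becomes a visible gap in a list rather than an omission nobody notices.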
And with the Global System View, you talked about the three-piece suit: the device, the mobile app, the cloud, which is very typical. I think you referred to the mobile app as a companion app. The update infrastructure is something that I feel is part of the Global System View, but it is often completely overlooked. So, do you want to expand upon that a little bit? And I can give you my opinion as well.
Totally. And this ties right into the next view we have to cover, Updatability and Patchability. The FDA wants to see total product life cycle security considerations. This is start to finish: as soon as you have the idea for a product, as you are developing and designing it, you should design security into it, moving throughout the full life cycle all the way down to decommissioning.
At some point between those two ends, we have to talk about what happens if we need to make a change to the device. The FDA wants to see that devices are able to be updated and patched in the field in case of a vulnerability, to prevent the need for a total recall or some other drastic, difficult process. If we can make a simple software update, it removes the need for a lot of complicated recalls, collections, and security advisories.
So, even still, we need to figure out how security is addressed there. What if there is a malicious insider that is tampering with the binary going out to the device? What if someone can hack into the update server and change what gets sent down to the device? What if someone can intercept the connection, see exactly what is being changed, and then make their own changes? There are a lot of things that can go wrong with an update process.
Even if you don't have remote infrastructure, if service technicians are doing it on site with a USB stick, what if they step away, leave that USB stick unattended, and someone changes what goes onto the device? All of these different problems need to be considered when we are looking at that total product life cycle. And so, however the update infrastructure works, it needs to be part of the Global System View, along with how we are getting this binary or update package onto the device from wherever it is coming from. That is something that goes into the Global System View. It is part of that global, overall device.
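One way to picture defending against the tampered-binary scenarios just described is to authenticate every update package before installing it. This is a minimal sketch using a standard-library HMAC tag with a hypothetical shared key; real devices typically use asymmetric signatures (e.g., Ed25519) so that the key stored on the device can verify updates but cannot forge them.

```python
import hashlib
import hmac

# Illustrative sketch only: authenticate an update package with an HMAC tag
# before applying it. The key and firmware bytes below are invented.
DEVICE_KEY = b"example-shared-secret-provisioned-at-manufacture"

def sign_update(package: bytes, key: bytes = DEVICE_KEY) -> bytes:
    """Producer side: compute the authentication tag for the update package."""
    return hmac.new(key, package, hashlib.sha256).digest()

def verify_update(package: bytes, tag: bytes, key: bytes = DEVICE_KEY) -> bool:
    """Device side: constant-time check before installing the update."""
    expected = hmac.new(key, package, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

firmware = b"\x7fELF...new-firmware-image"
tag = sign_update(firmware)

assert verify_update(firmware, tag)               # untampered: accepted
assert not verify_update(firmware + b"\x00", tag) # modified in transit: rejected
```

The point of the sketch is that the USB-stick and intercepted-connection attacks both fail the same check: any change to the package invalidates the tag, so the device refuses to install it.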
Yeah. And then we dive deeper into that process in the Updatability and Patchability View, which is where you talk about how we do it securely. How do we make sure the firmware update is verified with a hash or whatever, so we make sure it has integrity? That is further expanded upon in the Updatability and Patchability View. Updatability, I don't even know if that is a real word, actually. I think the FDA made that up. And patchability, I don't know about that one either, do you?
Yeah, anytime we are going through any of our submission packages, I am making sure that everything looks good, that all of our grammar and spelling is correct. Our grammar checkers freak out every time they see updatability and patchability. So, I think the FDA did make those words up, but they have become part of the regular lexicon now. Yeah.
And once we get into that view, like you said, it is focused on how we are addressing security for that single process. It goes to show how important the update process is that the FDA wants an entire separate view specifically for it, instead of lumping it into the Security Use Case Views, which cover the other functionalities and data flows. Yeah.
Hey, before we jump into the Updatability and Patchability View, on the Global System View, what are a couple of common mistakes that you have seen manufacturers make when trying to create it on their own? The most common one, I would say, is not defining the scope of the device correctly. And this goes back to what we said. They will forget the update infrastructure. They won't address a service technician putting in a USB stick. They won't address any of these potential interoperable components.
That is probably the most common one. And when we are going through a Global System View, another issue we have to cover is not only what does it look like, but why does it look like that? What is each of these components doing? What is the purpose of having all of these interfaces? What is the purpose of these internal components?
And so I think another big area where manufacturers miss the mark is getting to the right level of detail and really telling a story. This is the reason the device looks this way. This is what each of these components is responsible for. This is why we have it in this specific configuration. This is why anything on the edge of that trust boundary, anything on the outside that you can interact with, is there, why we need to have it in place, and what we need it to work with.
So explaining why the Global System View is the way it is is probably the other big area where we see a lot of deficiencies. Yeah. And I was just talking with someone yesterday, and they had a hard time wrapping their head around that. They said they have software in the cloud, but they didn't think it was a, quote, device. But we have SaMD, Software as a Medical Device, so technically, for a Global System View, the software is part of that device, that scope, that global view, as you mentioned. A lot of people think of a device as physical, but from the FDA's perspective, it could be software as well.
Exactly. And the Global System View could be only that little cloud component. I know we talked about the typical setups that we see, but we have seen cases where just a mobile app is the medical device. That is purely running on a phone. You are looking at how the app interacts with the phone and what the app looks like from a software perspective. So I think that brings up another interesting point. The Global System View is, of course, addressing hardware, the physical device, but you don't always have a physical device. When we are looking at a cyber device, part of the criteria is that it has to have software. So we need to address the hardware and the software as part of these views.
Awesome. So view number one we discussed is the Global System View; these are the security architecture views defined by the FDA, and there are four of them. The second one, which we have already dived into, or dove into, a little bit, is the Updatability and Patchability View. So let us expand on that one further, because this is a common way attackers get into an environment, as we have seen with SolarWinds and other supply chain compromises. And I think that is why the FDA wants us to focus on this, to make sure our update infrastructure is secure.
Exactly. Oftentimes manufacturers will focus on security for their device, focus on security for the companion app, and maybe the update infrastructure gets pushed to the back burner a little bit. It is really important to have security covered for everything in that update process. And additionally, the device itself should make sure that these updates are safe. So, how are you designing security into the device itself to receive these updates? How are you designing security into the infrastructure pushing out these updates?
There are a lot of different considerations that the FDA wants to see addressed. They want to see what the overall process looks like. So, that Updatability and Patchability View should be end-to-end from where the update package or the change is created, usually in the manufacturer's version control, or if there is a contract manufacturer that is handling the engineering. How are they making these changes? How are they controlling the security of the update? How are they then pushing it out to the device? Is that connection secure? Is the internal development environment secure? Is the device verifying the integrity of this update package through a CRC or something around that?
There are a lot of different things that need to get covered depending on what your update process looks like. And it is not really a one-size-fits-all solution. Sometimes updates need to be done on the device physically by a service technician. Sometimes it is done by actually changing it on the chip level through a JTAG port. Sometimes it is just through a cloud update, and it happens in the background with nobody even needing to interact with it. So, however the update procedure is handled, you need to address security with it and make sure that you are covering all of your bases depending on your specific implementation.
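On the CRC point, it is worth separating corruption detection from tamper detection. A rough sketch, with a made-up package, of why a CRC alone is not enough:

```python
import hashlib
import zlib

# Sketch: a CRC catches accidental corruption in transit, but an attacker who
# modifies the package can simply recompute the CRC, so protection against
# tampering needs a cryptographic hash, and in practice a signature over it.

package = b"firmware v2.1 payload"
crc_expected = zlib.crc32(package)
sha_expected = hashlib.sha256(package).hexdigest()

def check_crc(data: bytes) -> bool:
    """Detects transmission errors; offers no protection against an attacker."""
    return zlib.crc32(data) == crc_expected

def check_sha(data: bytes) -> bool:
    """Detects any modification, provided the expected digest itself is trusted
    (e.g., delivered inside a signed manifest)."""
    return hashlib.sha256(data).hexdigest() == sha_expected

assert check_crc(package) and check_sha(package)

tampered = b"firmware v2.1 pwnload"
assert not check_crc(tampered) and not check_sha(tampered)
# The weakness: an attacker shipping `tampered` just recomputes zlib.crc32(tampered)
# and ships that alongside, so the CRC value cannot be the root of trust.
```

This is the hosts' point about verifying a malicious update: if the attacker controls both the package and the stored check value, integrity checking alone proves nothing, which is why the check value has to come from somewhere the attacker cannot reach.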
Yeah. And this is an area where I feel like there is not enough scrutiny, even from the FDA. The biggest concern with this area, and we have seen this with some of our clients, is the lack of a secure environment to develop the update and push the update to the device. We have had clients whose update environment was connected to their corporate network, which made it inherently insecure. So I don't think the FDA applies enough scrutiny to the environment that pushes the updates to the device.
I think they focus more on, like you said, a CRC or some sort of hash to make sure that the update has integrity. But if the update is compromised at its source, the integrity check doesn't matter, because you are basically verifying a malicious update. I mean, what are your thoughts around that, or am I going too deep down the rabbit hole on this topic? No, it is important to think about. If we are looking at the manufacturer's environment itself, it is still part of that total product life cycle. It is the end-to-end security considerations. And so if the manufacturer's environment is inherently insecure, or unsecure, I feel like we go back and forth on this every time.
Well, I prefer unsecure, but you called me out on it in one episode and said it is not actually a word. But since we are talking about updatability and patchability, maybe we should go back to unsecure. I think we can. If the FDA can make up words, so can we. We will go with unsecure. So, if your development environment is inherently unsecure, then it is very easy for risk to get introduced into the system.
We see a lot of different compromises across all sorts of industries. We have seen major casino hacks, healthcare infrastructure hacks, finance systems, even critical infrastructure like oil pipelines. And more often than not, this comes from social engineering against an adjacent system: maybe interacting with the help desk or a sales representative, taking their credentials, and finding that the network, that environment, is inherently just poorly designed.
There are a lot of opportunities for an attacker to move through the system. So how can you verify that your development environment, what it is connected to, and who else is interacting with it is secure, is safe, and is essentially hardened against cybersecurity attacks? It does go down to a fairly deep level, since if an attacker is able to modify that development environment without the proper checks or restrictions in place to prevent that access, they might be able to do something pretty nasty. Yeah, especially if it is an OTA, or over-the-air, update which is just automatically pushed out there.
Cool. So, we covered the Global System View and the Updatability and Patchability View, and the next view I think we should cover is the Multi-Patient Harm View. The name makes it fairly clear what it means, but let us dive a little bit deeper into it. I think this is a pretty easy one to get confused by, since it has a very loose definition.
What we are looking at with the Multi-Patient Harm View are the scenarios that can allow the compromise of one device or one user to stem into multiple devices or multiple users, depending on your device. This can happen in a hundred different ways, so it really does depend on what your product is. We can take the example of a network-connected device: something that is going to integrate into the Active Directory in a healthcare delivery organization, something that interoperates with printers and workstations, something that a clinician can interact with from their own separate workstation.
There is a lot of risk there of an attacker extracting Active Directory credentials, using the device as a jump point to get into the rest of the network, and potentially introducing ransomware. So that would be our general approach through a Multi-Patient Harm View. Under this lens, we can take a different example: a purely cloud-based solution. It isn't interacting with anything else in a hospital environment, someone just visits it through their web browser, but we have the PHI and credentials for thousands of different users on it.
And what if that access control isn't properly set up? We can jump from one user to another and take their information, or maybe even move up to an administrative role and take everyone's information. There can be a lot of risk there. And even moving down to a device that isn't connected to any network and only has a Bluetooth connection to a mobile app, what if we have hard-coded credentials on that device? We are able to strip out the hard-coded credentials and then use them to compromise any other devices, if those credentials are the same across all devices.
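The hard-coded-credentials pitfall suggests a mitigation worth sketching: derive a unique secret per device from a factory master key and the device serial, so extracting one device's key reveals nothing about the rest of the fleet. Everything here is hypothetical, and a standard-library HMAC stands in for a proper key-derivation function.

```python
import hashlib
import hmac

# Hypothetical sketch: derive a unique credential per device instead of
# shipping one shared hard-coded secret across the whole fleet. The master
# key never leaves the factory; only the derived per-device key is
# provisioned onto each unit at manufacture.
FACTORY_MASTER_KEY = b"example-master-key-held-only-in-the-factory-hsm"

def per_device_key(serial: str) -> bytes:
    """Deterministically derive this device's secret from its serial number."""
    return hmac.new(FACTORY_MASTER_KEY, serial.encode(), hashlib.sha256).digest()

key_a = per_device_key("SN-000001")
key_b = per_device_key("SN-000002")

assert key_a != key_b                          # one extracted key compromises one device
assert per_device_key("SN-000001") == key_a    # backend can re-derive any device's key
```

With a scheme like this, the 40,000-device scenario discussed below collapses to a single-device scenario: stripping the credentials out of one unit no longer opens every other unit in the field.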
So, these are just a few examples. The rabbit hole can go very deep on how to define a multi-patient harm situation, but that is why it is so important to really think through the different scenarios that can come up based on your specific device. Yeah, I think the focus is more like: if this device is compromised, how many patients can be harmed? That is the multi-patient part, right?
So, if it is a standalone device that helps with, say, bronchial decongestion and it is not connected to anything, a compromise of that device at a patient's house may only affect that one patient. But, as you alluded to, if it is an IVD system that is on the cloud and there are a thousand HDOs using that same system, a compromise could affect a thousand-plus patients, right? So we have to look at the scope and the scale of the compromise if that device is compromised, and what controls we have in place to prevent that multi-patient harm. Maybe, if it is on the cloud, we have separate instances of the device, one for each HDO as an example, so that you can't go from one to another, versus all of them on the same system.
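The per-HDO isolation idea can be sketched as a data store that binds the tenant at authentication time and applies the tenant filter unconditionally, so even a compromised account in one HDO cannot read another HDO's records. The records, field names, and tenant IDs below are invented for illustration.

```python
# Illustrative sketch: force every data access through a tenant-scoped query.
# The in-memory "database" stands in for whatever real storage is used.
RECORDS = [
    {"tenant": "hdo-alpha", "patient": "p1", "result": "negative"},
    {"tenant": "hdo-alpha", "patient": "p2", "result": "positive"},
    {"tenant": "hdo-beta",  "patient": "p9", "result": "negative"},
]

class TenantScopedStore:
    def __init__(self, tenant_id: str):
        self._tenant = tenant_id  # bound once, at authentication time

    def query(self, **filters):
        # The tenant filter is applied unconditionally; callers cannot omit
        # or override it, so cross-tenant reads are structurally impossible.
        return [
            r for r in RECORDS
            if r["tenant"] == self._tenant
            and all(r.get(k) == v for k, v in filters.items())
        ]

alpha = TenantScopedStore("hdo-alpha")
assert len(alpha.query()) == 2
assert alpha.query(tenant="hdo-beta") == []  # even asking for another tenant yields nothing
```

Whether isolation is enforced this way in one shared system or by running a separate instance per HDO, the multi-patient harm analysis is the same question: can a compromise in one tenant reach the others?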
Right, yeah, there are definitely a lot of different ways to address any specific problem. And you bring up a great point that all of this should be considered from a risk-based and impact-based approach. It is one thing to compromise a device, but what does that actually mean? In the case you mentioned, a bronchial decongestion device, a compromise is likely not going to lead to any direct harm in a meaningful way, since usually that is a device used for convenience or for the alleviation of symptoms, not a critical care machine.
Where if we are looking at a critical care machine that gets compromised either directly or indirectly, that can lead to the immediate harm of a patient, even potentially the loss of patient life. And so we really do have to consider what is the actual worst-case scenario when we are looking at this device. And this stems into risk assessments against a medical device in general. But when we are looking at that Multi-Patient Harm scenario, it is especially relevant.
Yeah, and that is a good scenario you mentioned. I remember a client we had that had a hard-coded password on all of their devices, like you alluded to. So if somebody compromised that password, basically every single device deployed, and there were like 40,000 of them, could be compromised. So obviously the Multi-Patient Harm View on that one is pretty extensive. Right? Yeah. And that is why it matters: there are so many different ways it can happen. For sure.
So we covered the Global System View, the Updatability and Patchability View, and the Multi-Patient Harm View. The last view, according to the FDA, is the Security Use Case Views. And this one is the most commonly misunderstood, I think, as you said earlier. It sort of acts as a catch-all for what does not fit into the other buckets.
When the FDA talks about Security Use Case Views, there is an appendix at the bottom of the FDA guidance with a long list of examples, probably 80 or 90 different items, of things you may want to consider as part of your Security Use Case Views. What we are really looking at are any specific functionalities, data flows, processes, and states of the device, and how security is addressed for each.
So how are we seeing different views for security on power up versus power down, in operation versus at rest? If you are adding a new patient to a database, if you are removing a patient from a database. This is such a broad area. It is such a wide net that you are casting, depending on your device. It is essentially: for anything the device can do, how are you addressing security? A use case is, for example, I power it up or I shut it down. So how is security addressed in that scenario?
Exactly. And as I'm sure you can imagine, every device does something different, so there is no one-size-fits-all solution for this. Some devices have thousands of use cases, and some devices only have a few. So yes, this is a very difficult one, but you just logically think through every function or use case of the device, like you said: someone interacts with it, it transmits data, it is getting an update, it is delivering a therapy. How is all of that done securely?
Basically, we are looking at how everything on the device is done securely, and that includes user interaction. If a user is interacting with a touchscreen, how is that touchscreen secured so the user cannot enter a certain pattern to get to the administrator mode, as an example? Exactly. I think sort of a little cheat code that manufacturers can use is that as you are designing your product, you have your functional requirements. What does the device need to do?
And you can use those as a good starting point to figure out what your Security Use Cases need to cover. Every single one of these states, functionalities, and data flows is going to be covered in your functional requirements for the device. So you take those requirements and say, for each functional requirement, how can we build in security requirements? That is essentially going to allow you to build these Security Use Case Views out as you are going through the process, which I think is an easy way to prevent a lot of regression work, going back and trying to retrofit all these use case views and retrofit security to begin with.
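That cheat code of growing security requirements out of functional requirements can be sketched as a simple traceability mapping. The requirement IDs and text below are hypothetical; the useful part is the gap check at the end, which flags any device functionality that has no security requirement attached.

```python
# Sketch: trace each functional requirement to one or more security
# requirements so the Security Use Case Views fall out of the design
# instead of being retrofitted. All requirement text is invented.
functional_requirements = {
    "FR-01": "Device powers up and runs a self-test",
    "FR-02": "Clinician adds a patient record via the touchscreen",
    "FR-03": "Device transmits therapy logs to the cloud service",
    "FR-04": "Device receives over-the-air firmware updates",
}

security_requirements = {
    "FR-01": ["SR-01: verify boot image signature before executing"],
    "FR-02": ["SR-02: authenticate the clinician before record changes",
              "SR-03: log the change with user identity and timestamp"],
    "FR-03": ["SR-04: encrypt logs in transit with mutually authenticated TLS"],
    "FR-04": ["SR-05: verify update signature and version before install"],
}

# Any functionality with no security requirement is a use case where
# security is unaddressed, exactly the gap a reviewer would flag.
gaps = [fr for fr in functional_requirements if not security_requirements.get(fr)]
assert gaps == []
```

Run at design time, a check like this turns "did we address security for every use case?" from a retrospective documentation scramble into a property you can verify continuously.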
If you identify a functionality where security isn't covered, that may lead the FDA to say, "Well, we don't think that you are addressing security seriously enough, and so we are not going to accept the device in its current state." Exactly. So we covered the four security architecture views: the Global System View, the Updatability and Patchability View, the Multi-Patient Harm View, and the Security Use Case Views.
And I think we covered those in enough detail. Are there any parting words before we wrap up on security architecture views, or any advice for anybody trying to create these views? I think the main thing is to understand what your device is and what your device does, which sounds a lot simpler than it is. Really understand the boundary of your device. Where does it begin? Where does it end? What is everything you can control involved in your product? And then, for each of those components, how are you addressing security based on what the component does? If you are thinking about it from that lens as you are designing the device, these Security Architecture Views and Security Use Case Views are essentially going to build themselves out, and it is not going to be this big panicked rush at the end to try to figure it all out. I agree. Awesome. Well, I think we will wrap up this episode here, so thanks, everyone, for tuning in to the Med Device Cyber Podcast. We hope to see you on the next one.