Cortez Frazier Jr. from Fossa joins the podcast to unpack the world of Software Bill of Materials (SBOMs), shedding light on common misconceptions, risks, and benefits for product security teams, regulatory leads, and engineers in the medical device industry. This episode delves into the evolution of SBOMs from simple inventory lists to essential tools for proactive cybersecurity, particularly following significant supply chain attacks like SolarWinds. The discussion highlights the critical role of machine-readable SBOM formats such as SPDX and CycloneDX in efficient vulnerability management. Cortez and the hosts explore various prioritization methods for vulnerabilities, including CVEs, CISA's Known Exploited Vulnerabilities list, and the Exploit Prediction Scoring System (EPSS), emphasizing the need to move beyond basic critical and high severity ratings to assess true exploitability. The episode also touches on the unique challenges of SBOM management in the medical device sector, considering regulations like IEC 62304 and the complexities of remediating vulnerabilities in devices already deployed in the field.
Key Takeaways
1. SBOMs are essential for identifying open-source and commercial components in medical devices, aiding in proactive security and risk management.
2. Prioritize vulnerabilities using methods like CISA's Known Exploited Vulnerabilities list and the Exploit Prediction Scoring System (EPSS) to focus on truly exploitable threats.
3. Transparency in sharing SBOMs does not inherently compromise intellectual property or create a roadmap for attackers.
4. Addressing license compliance is a critical aspect of SBOM management, as certain copyleft licenses can mandate open-sourcing proprietary code if not handled correctly.
5. The FDA currently requires SBOMs for medical devices, and the industry is moving towards more operationalized SBOM ingestion for ongoing vulnerability lookups.
6. Proactive use of SBOMs, including integrating them into development workflows and risk management processes, is crucial for maintaining a strong security posture and meeting regulatory expectations.
This episode covers SBOM Management. It's part of The Med Device Cyber Podcast, hosted by Blue Goat Cyber, focused on practical medical device cybersecurity guidance for MedTech teams.
It's most useful for medical device manufacturers, cybersecurity engineers, regulatory affairs professionals, and MedTech founders preparing for FDA review.
Episode 54 of The Med Device Cyber Podcast covers Untangling Software Composition Analysis for MedTech Teams.
Welcome back to The Med Device Cyber Podcast. I am joined here by our co-host, Christian, and then our guest, Cortez from Fossa. How's your day going, Cortez?
My day is going fantastic. I really appreciate you, Trevor and Christian, for having me on. Obviously, Trevor, you came and did a fantastic webinar for us, and so I'm super excited to be able to kind of return the favor a little bit. FYI, apparently there's a whole bunch of winter storms coming up here very, very quickly. As someone who lives in the South, I typically don't get to see snow that often, so I'm excited to maybe get an opportunity to build a snowman. We'll see.
Where are you based out of?
I am based out of Atlanta, Georgia. Very rarely do we get snow, but we got it a couple of weeks ago, which is pretty exciting.
Yeah, I'm in the Phoenix area. I don't think it's ever snowed here in the valley. I think it snowed in like 1921 once, probably. Awesome. Well, thanks for being a guest, Cortez, and maybe tell us a little bit about what you do and what Fossa does.
Yeah, I would love to kind of give a bit of a background and happy to go in as much detail as you'd like. So, obviously my name is Cortez. Thank you for the introduction. I'm a Principal Product Manager at Fossa. Fossa got its roots as a traditional, I'd say, SCA company, Software Composition Analysis. We started in the license compliance space, and then because we were already doing such high-quality component analysis, it was a very natural leap to then get into the security side of that.
Also, very similarly, having already generated an accurate list of components and component relationships and license information vulnerabilities, that then is also a very nice transition and natural step into Software Bill of Materials, which I love when the regulation calls things like an inventory of bespoke assets that we have to try to understand. So that's Fossa at a high level. For my personal background, prior to working at Fossa, I worked as a Product Manager in a few other companies. Mostly Puppet, if you're familiar with that. They were kind of a DevOps and automation suite of tooling.
And then prior to that is actually where I got my start in the cybersecurity space, working for what was GE Power at the time, now GE Vernova. I was a cybersecurity architect there, responsible for about 1,800 developers and about 600 applications. That's where I really started to get very intimate with some of the problems that my customers now deal with. And after doing that as a practitioner for a few years, I decided I would, for lack of a better term, go to the dark side and see if I can help make some of these products a bit better, rather than just complaining about them. So that's a bit of my background, but I look forward to diving much deeper into what Fossa does, and how I think the SBOM and medical device security space in general is starting to grow.
Cool. I think we'll zoom out for a second because this term SBOM, not everybody probably understands SBOM, but it's interesting because I've been involved with MedTech cybersecurity since like 2014, and even back then there was the SBOM, and people were doing SBOM, but it seems like now it's just becoming more of a, I guess, mandate that people do it. So can we just unpack like what exactly is an SBOM and what are some of the concerns with SBOMs? Because I know a lot of our clients and prospects don't want to make the SBOM public. They have some concerns about it. So what can we just, you know, I guess dissect that a little bit, unpack it?
Yeah, let me take an initial leap, and then, Trevor, feel free to add any additional comments that you have from your end. So from my perspective, you're 100% right, Christian, that an SBOM stands for Software Bill of Materials. Bills of materials have been a thing for a long time. You've got different companies that have used this with various assets. I actually feel like Cybersecurity 101 is you cannot defend or protect something that you don't know exists. And so getting an accurate and up-to-date inventory is always priority number one.
And so my read on how the industry has reacted is for a long time, a lot of people were maintaining these lists of both open-source and commercial components that they were using in very archaic ways. Typically Excel sheets, which I love Excel, it runs the world, no problems with that, but it is really difficult for people to ingest Excel sheets at scale and then be able to do any type of ongoing analytics about that. In particular, as we know, new vulnerabilities appear every day for existing open-source or closed-source packages.
And so my read is that, unfortunately, when the SolarWinds event happened, that kind of put the world on notice that, hey, oftentimes we're relying on these third-party software, what they would call the software supply chain if you will. And that is actually like a big area of risk a lot of people are not actively paying attention to, or maybe they were paying attention to it, but oftentimes it was this initial security review that would happen, and you do a third-party risk assessment, you go through all the architecture diagrams, you'd have them show vulnerability reports and SOC 2 reports and all that good stuff, but that doesn't help you from an ongoing maintenance perspective.
And so the executive order, I think, 14028, came out, really kind of kicked off this first initiative that, hey, we have to start paying attention to some of these third-party risks that we were kind of consuming, and there were no really good ways to do it at scale at that time. And so that is where SPDX and CycloneDX, I think, really started to gain traction. And these, for those unaware, are two what they call machine-readable formats for SBOMs. So taking what we were doing in Excel sheets for a really, really long time, putting that in a structured format in the form of SPDX or CycloneDX, which then allows for, whether you're distributing that to someone or receiving an SBOM, to actually ingest this in a machine-readable way and then put that into other things that you're doing from a vulnerability risk management standpoint. So that's kind of my read on how SBOMs kind of gained traction and popularity. Would love to hear any of your thoughts there as well, Trevor.
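To make the "machine-readable" point concrete, here is a minimal Python sketch of what ingesting a CycloneDX SBOM looks like in practice. The JSON below is an illustrative subset of the CycloneDX schema, not a complete SBOM, and the component list is just an example.

```python
import json

# Illustrative subset of a CycloneDX JSON SBOM (the real spec has many more
# fields); the components listed are example entries, not a real device's SBOM.
sbom_json = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {"type": "library", "name": "log4j-core", "version": "2.14.1",
     "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1"},
    {"type": "library", "name": "openssl", "version": "3.0.7",
     "purl": "pkg:generic/openssl@3.0.7"}
  ]
}
"""

def list_components(sbom_text):
    """Return (name, version) pairs from a CycloneDX-style JSON SBOM."""
    sbom = json.loads(sbom_text)
    return [(c["name"], c["version"]) for c in sbom.get("components", [])]

print(list_components(sbom_json))
# → [('log4j-core', '2.14.1'), ('openssl', '3.0.7')]
```

The same parsing step is what lets downstream tooling run ongoing vulnerability lookups against the inventory, which an Excel sheet cannot do at scale.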
Well, I just wanted to say, I thought the, I think it was the Shellshock attack, was a good impetus for SBOMs as well, because a lot of medical device manufacturers didn't even realize they were relying on Bash, for instance, and a lot of other organizations. And what do you think, Trevor?
Yeah, I think that, I mean, there are a dozen if not, you know, hundreds and thousands of different vulnerabilities you can point to in an SBOM where I don't think that manufacturers are always aware how many problems there are. I'd say by far the most common vulnerable component that pops up with the companies we're working with is Log4j, like from whenever that was, six years ago when the first Log4j problem started coming out. Manufacturers are still using these out-of-date versions, and they just didn't know it was a problem. Or they've been developing their product for so long, they just never bothered to deal with all these legacy components that were built in initially, and then they eventually became vulnerable. They're just, it's a very constantly evolving landscape, kind of like you mentioned, Cortez, SBOMs contain a list of everything, and everything is going to get a vulnerability at one point or another.
I think one distinction that is becoming a little bit more clear is what is significant out of the SBOM, what vulnerabilities really matter, since realistically, it's impossible to have perfect security. I know one of our testers at Blue Goat did his master's thesis on proving a piece of code to be 100% secure, and it was like 70 pages of proof for three lines of code or something ridiculous like that. It's impossible to have perfect security, so understanding what are, what are the big targets, how can we get to that, you know, 99% mark, I think is something key. And I know one initiative is the CISA Known Exploited Vulnerabilities list, which is showing the CVEs that have actually been exploited, CVEs being like a known vulnerability in a certain component, and then a subset of those are the Known Exploited Vulnerabilities, so ones that are actively exploited in the wild, ones that threat actors are using to try to gain access to systems. Like Log4j, I think there are numerous vulnerabilities in Log4j in that list, and then just about every firewall, network appliance you can think of is in that list. So understanding what are the big bullet items, and I'm curious to hear your thoughts on how to prioritize, you know, when you have a sea of problems, how do you figure out what's the focus?
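The KEV-based prioritization described above can be sketched in a few lines. The findings and the KEV set here are stand-ins: CVE-2021-44228 is the real Log4j (Log4Shell) identifier mentioned in the episode, while the second CVE id and the component names are made up, and in practice you would load the actual KEV catalog JSON published by CISA.

```python
# Hypothetical scan findings from an SBOM, plus a tiny stand-in for the CISA
# KEV catalog (in practice, download the real catalog as JSON from CISA and
# collect its CVE ids into a set).
sbom_findings = [
    {"component": "log4j-core 2.14.1", "cve": "CVE-2021-44228", "cvss": 10.0},
    {"component": "example-lib 1.2.0", "cve": "CVE-2023-99999", "cvss": 9.1},  # made-up CVE id
]
kev_catalog = {"CVE-2021-44228"}

def prioritize_by_kev(findings, kev):
    """Split findings into known-exploited vulnerabilities vs. everything else."""
    exploited = [f for f in findings if f["cve"] in kev]
    rest = [f for f in findings if f["cve"] not in kev]
    return exploited, rest

exploited, rest = prioritize_by_kev(sbom_findings, kev_catalog)
print([f["cve"] for f in exploited])  # → ['CVE-2021-44228']
```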
Absolutely, I have many, many, many thoughts on this particular topic, and so you'll have to cut me off if I go too deep. But I like to think about exploitability in a few orders of magnitude, and I think the lowest fidelity is just a CVE in and of itself. And that's, you know, no shade in any way, shape or form to MITRE or any of the security researchers doing all this awesome work, but determining that a potential vulnerability exists is one thing; determining that it can actually be exploited is another. And so you have a CVE that's generated, meaning I have an open-source or closed-source dependency that I'm using, a vulnerability has been researched, and someone has posted that vulnerability publicly. That is kind of the lowest fidelity in my opinion, but at least starts to give you a signal that you may need to watch out for this.
From there, you then have what you're alluding to, Trevor, which is things that are on the CISA KEV list, and there are a few other like Exploit DB and others where we say there's either an active threat campaign out for this particular vulnerability or there's been a proven exploit in the wild that you can just go download from Exploit DB and try out yourself if you would like. And so that is now an order of magnitude likelihood of exploitability that I would be really concerned about. From there, you then start to get into things that are more difficult to do from a pure SBOM perspective, but you can do if you can start to get access to source code, which are going to be things like reachability analysis.
You may have heard this term before; it's not just is this particular package in use, which is like one form of reachability, but also is this package in use and am I using the vulnerable part of that particular package? And so that could be a vulnerable function or symbol or, you know, call or network configuration, all that kind of good stuff. And then lastly, the holy grail in my opinion, is where you can say: yes, I'm using a vulnerable package; yes, an active threat campaign is out for it; yes, I am also using a vulnerable function or symbol that's associated with it, and I'm using it in a vulnerable way. And that last piece, that last mile, is oftentimes forgotten about, and that's where a lot of developers get really mad at us security professionals for, you know, making them research and evaluate. And that's where your 70-page theses typically come from, to prove that that is actually not the case.
And so having that framing, for a second, there's then like, oh, well, how do you start to get some of these additional levels of fidelity for you to be able to make some of these decisions? I think there are some fairly naive ways that I think are fine because you don't really have too many options. I think that's what you'll see things like, oh, well, let's focus on, you know, direct dependencies because those are the ones that we can most directly influence from like a remediation standpoint. We'll focus on, you know, criticals and highs because those feel like the most urgent, and we will do that only in our, you know, business-critical applications. They're internet-facing, they, you know, rely on some super critical system for us, they help us make money, some of those type of things. And I think that is actually perfectly fine. It's not the highest level of fidelity, but I think it's a great starting point, and if I was someone drowning in CVEs, that's absolutely where I would start.
I think there are some other prioritization methods that are really, really interesting to start to consider. One of them that I'm a really big fan of is EPSS. If you're not familiar with it, it's the Exploit Prediction Scoring System. I like to think about CVSS, the scoring that drives CVE severity from 0 to 10, 10 being most critical, zero being, you know, least critical. That is an estimate of the severity of a particular vulnerability, where EPSS attempts to quantify the probability of a particular vulnerability being exploited, and it changes pretty rapidly in time. And so there's actually a lot of really awesome research about how you can manage EPSS thresholds, or the score if you will, to then be able to have the most coverage from an overall exploitability standpoint with the least amount of effort, meaning the least amount of time wasted on false positive vulnerabilities. That equates to an efficiency equation which, from the research that I've seen, tends to be more accurate at actually dealing with true positive vulnerabilities as compared to purely looking at criticals and highs.
And so far, if I was trying to get to like a second order of fidelity, I would do that based off an EPSS threshold, still focus on those direct dependencies, still focus on those applications that have some type of criticality, right, internet-facing, highly relied upon. I'll pause there for a second because I know I just threw a lot of information at you. There are a few others that I'd probably think about, but I want to make sure we have an opportunity to digest it.
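The second-order triage Cortez describes, an EPSS threshold combined with a focus on direct dependencies, can be sketched as a small filter. The threshold value, field names, and CVE ids below are all hypothetical; real EPSS scores come from the FIRST.org feed.

```python
def triage(findings, epss_threshold=0.1, direct_only=True):
    """Rank remediation work by EPSS probability rather than CVSS severity
    alone, optionally restricted to direct dependencies. The 0.1 threshold
    is an arbitrary example, not a recommended value."""
    candidates = [
        f for f in findings
        if f["epss"] >= epss_threshold and (f["direct"] or not direct_only)
    ]
    return sorted(candidates, key=lambda f: f["epss"], reverse=True)

# Hypothetical findings: a critical-severity CVE with a tiny exploit
# probability, a high-severity CVE that is very likely to be exploited,
# and a finding in a transitive dependency.
findings = [
    {"cve": "CVE-A", "cvss": 9.8, "epss": 0.02, "direct": True},
    {"cve": "CVE-B", "cvss": 7.5, "epss": 0.94, "direct": True},
    {"cve": "CVE-C", "cvss": 8.1, "epss": 0.40, "direct": False},
]
print([f["cve"] for f in triage(findings)])  # → ['CVE-B']
```

Note how the 9.8-critical finding drops out: severity alone would have put it first, but its exploit probability is negligible.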
Yeah, I have a question based on what you're saying, Cortez. I used to work in the DOD, and you know, it's like there's a CVE, there's a risk level associated with it, and then you've got to patch everything, right? It sounds like maybe the industry is now becoming aware of these exploitability factors and other things, but in the Department of Defense we just patched things based on CVEs alone; it wasn't about exploitability at all. And it sounds like, I don't know what the percentage is, but the majority of CVEs don't actually have an exploit that can take advantage of them, so we're not prioritizing correctly. At least the DOD wasn't, and they probably still aren't, and probably most organizations are not.
But is the industry, do you feel as a whole, like shifting more towards these other factors so we can better prioritize like Trevor said, this sea of vulnerabilities or sea of CVEs out there?
That is absolutely what I'm seeing, both in our customer base and in, you know, potential prospects. From an overall standpoint, vulnerability detection is a relatively solved problem, meaning there are all kinds of high-quality SCA tools out there, all kinds of high-quality tooling that will surface vulnerabilities for you. Where the two next horizons are: one, how do I qualify out vulnerabilities that aren't actually exploitable? You have some improper input validation CVE, but that particular application you're using doesn't take any user-provided inputs whatsoever. And so unless you're going to perform a SQL injection on yourself, you probably don't have to worry about it, right?
And it's like those types of scenarios that you're starting to see a lot of people spend a lot of time on, and that's actually where things like VEX, the Vulnerability Exploitability eXchange, and VDR, the Vulnerability Disclosure Report, come in. These are additions to a Software Bill of Materials where you can actually communicate: yes, the CVE exists, but we're not affected by this particular CVE, and here's a justification why. I think even beyond that, though, it's not just qualifying out, but also how can we start to do auto-remediations? Trevor alluded to a point that I think is super critical, which is oftentimes when, you know, a big celebrity event like Log4j happens, it's not so much the impact of the vulnerability that's massive, it's that you can't remediate it, because upgrading that package might require a complete refactor of your application depending on how legacy it happens to be.
And so a lot of customers, and what we're really championing, is how do we get you doing more updates and patching, as you called it, Christian, updating your packages before the celebrity CVEs come out, so that you're not putting yourself in a position where your only way to remediate is some type of out-of-band remediation: some type of firewall protection, isolating the asset, those types of things.
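A "not affected" communication like the one described can be captured in a small structure. This is an illustrative shape only, not the exact CycloneDX VEX or OpenVEX schema, and the CVE id and product name below are hypothetical.

```python
import json

def vex_not_affected(cve_id, product, justification):
    """Build a minimal VEX-style statement: the CVE exists in a component we
    ship, but we are not affected, with a machine-readable justification.
    Illustrative shape, not a real VEX schema."""
    return {
        "vulnerability": cve_id,
        "product": product,
        "status": "not_affected",
        "justification": justification,
    }

# Hypothetical CVE id and product; the justification mirrors the episode's
# example of an application that takes no user-provided input.
stmt = vex_not_affected(
    "CVE-2024-00000",
    "infusion-pump-controller 3.2",
    "vulnerable_code_not_in_execute_path",
)
print(json.dumps(stmt, indent=2))
```

Distributing statements like this alongside the SBOM is what lets downstream consumers suppress findings without each of them re-doing the exploitability analysis.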
I think one thing, especially in the medical space, with medical devices, there are, you know, Class 1, 2, and 3, with Class 1 having the least potential impact and Class 3 having the most potential impact if something goes wrong. And all of these are governed under IEC 62304, which covers software life cycle processes for medical device software. And these talk about that exact thing that you were mentioning, Cortez: what is your policy, what is your procedure for ingesting a third-party component, how are you maintaining it and tracking it? In medical devices in general, security is a sliding scale. The less complex the device, the fewer considerations need to be placed in it. With these more critical devices, the thought that needs to go in before you're just taking any component to do a function is pretty significant.
There's a heavy lift that needs to be done when you're vetting a third-party component and making sure that they stay maintained continuously. It's great if you find a safe component, you think it's going to be maintained for a while, so you're not concerned about it from like an onboarding perspective. But if you just leave it and forget about it for two years, naturally it's going to come up with some problems that are going to need to be addressed. If you're letting it get to that point too, shifting up two major release versions is often going to be an impossible task, and it's going to require a refactor of the code, like you said. So trying to stay on top of it, making little changes every time there's a minor version is going to be a lot easier of a shift, as opposed to making these major overhauls anytime there's a major significant problem.
Absolutely, and the medical device space in particular is even more challenging. I would put the medical device space and maybe automotive and a few other embedded system spaces as particularly challenging because once you ship a device to a customer, the effort to bulk remediate a vulnerability is immense. You're talking about complete recall levels, or having to go site by site in order to actually ship these particular updates. And particularly because a lot of these devices are not fully internet-connected, meaning that they may have a little bit of internet connection in order to manage, you know, say a SaaS application that goes along with it or something of that nature, but often times there's no formal OTA process for a lot of these medical devices.
And so there's actually another framework, a bit too rigid in my opinion, but I think it's a great starting point, called SSVC, the Stakeholder-Specific Vulnerability Categorization. It's essentially a decision tree that allows you to walk through decision points like: is this vulnerability exploitable, is this particular asset highly critical, is there, you know, other proof? And you kind of walk through to then make a determination on whether you need to act immediately on this vulnerability or not. And I think that having really good policies and frameworks in place for that decision-making is almost even more important than remediation if you're a medical device manufacturer, given the impact it'll have if you do have to make a decision that critical.
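The decision-tree flavor of SSVC can be illustrated with a deliberately simplified sketch. The real framework uses specific decision points (exploitation status, automatability, technical impact, mission impact); the three booleans and the outcome labels below are stand-ins, not SSVC's actual vocabulary.

```python
def ssvc_decision(exploited, asset_critical, internet_facing):
    """A simplified, SSVC-flavored decision tree. The inputs and outcome
    labels are illustrative stand-ins for the real framework's decision
    points and action categories."""
    if exploited and asset_critical:
        return "act_immediately"
    if exploited or (asset_critical and internet_facing):
        return "schedule_out_of_cycle"
    return "track_in_next_release"

print(ssvc_decision(exploited=True, asset_critical=True, internet_facing=False))
# → act_immediately
```

The value of encoding the tree, even crudely, is that the go/no-go call on a field remediation becomes a documented policy decision rather than an ad hoc judgment under pressure.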
Yes, something I know that in the medical device space, there's a big push towards transparency, and Trevor and I have had this discussion many times, and we've had it with manufacturers, like, you know, if I, if I go buy a car, I think I have a right to know like the Bill of Materials, like we're talking about, like, who makes the brake, who makes the carburetor, you know, who makes the spark plugs. Then I can make an informed decision about the car, and I know where things are coming from. And it's the same thing with if I buy a medical device, I should have known where all the components of the software come from. A lot of our manufacturers, or our clients and a lot of manufacturers we deal with have like this resistance about providing a Bill of Materials, the SBOM to one of their clients. And can you maybe touch upon like why they, why is there that resistance? Because I, I, I think it's kind of unfounded, but I just wanted to hear your, your both of your opinions on that.
Yeah, I'll take an initial step at this one; I'd love to hear others' thoughts. So, I hear this one a lot actually, and not even just in the medical device space, just about every industry, and I think it comes from a bit of an older mindset of: well, we cannot give people access to our IP in any way, shape or form, and everything that we've either developed or put our hands on is intellectual property as far as we're concerned. And I think that's a fair, you know, perspective. I think in practice, if you analyze what an SBOM truly is, oftentimes it is a list of components. You'll have some metadata about the SBOM itself, who generated it, what tool generated it, the name of the project; for each component, you have a component name, a component version, maybe a unique identifier if you're lucky in the form of a package URL or a CPE, and some license information, and that's it. There are a few other fields obviously, I'm oversimplifying here, but for the most part, that is the only information that is provided.
And so, from my perspective and from my client's perspective, your intellectual property is not your usage of, you know, ANSI colors and node.js. Your intellectual property is the way that you're using ANSI colors to then process some form of data over here. And that's a terrible example because ANSI colors has nothing to do with data processing. But really what I'm trying to get at is the way that you're using both closed-source and open-source components is much more important, and that is pure source code at that point in time.
I can assure you that your developers are doing what all developers are doing, which we're all Googling and hopping on Stack Overflow and maybe now using Azure and Cursor and a few of these other tooling to kind of get us there. And none of that is proprietary whatsoever. Now, I will say that there is a, and in the kind of second piece of this that I want to address is, oftentimes I'll get pushback to say that, okay, well, IP is one thing, but are we not giving potential threat actors a roadmap for how they can go and, you know, exploit us because they know all of our open-source components and they know what vulnerabilities are there.
And that is also, in my opinion, a bit of a misunderstanding about how threat actors tend to operate, meaning very rarely are they doing what I call vulnerability sniping, where you're looking for just this one hyper-specific CVE to take advantage of. It's typically much more of a broad approach where you're looking to take advantage of a thousand different exploits, and you want to see which of those thousand different exploits are going to work on this particular environment, particularly if you're trying to do privilege escalation, which is the typical entry point, so that you can then, you know, exfiltrate some data or install some type of script and take full control over a system. None of that is valuable or can really be perceived from a Software Bill of Materials in and of itself. And so, per usual in security, I don't want to say that there's zero risk, because that would probably be unwise, but I do think that the risk is extremely small, I would say less than 10%, and by no means outweighs the benefit of having that transparency for the rest of the industry. So that's kind of my take on it. I would love to hear more about yours.
Yeah, I wanted to just say one thing, and then I'll let Trevor take it. We've had a client in the past go as far as telling us that if they published the SBOM, it's a playbook for the attackers. So you can go address that topic, Trevor.
You know, I think if the SBOM is in fact the playbook for attackers, you have bigger problems. If your security posture is so bad that your SBOM is giving away that information, you're going to get hacked either way. I used to do a lot of bug bounty hunting, and I would take a similar approach to what Cortez was saying. I would do massive widespread attacks. I'd pick like three common CVEs that were turning up everywhere, and then I would hit them against any company with a bug bounty program, and 99.9% of them wouldn't hit. And then that 0.1% would hit. If they had that SBOM open, it doesn't matter. I'm not going to look at that SBOM to try to do a targeted attack. It's indiscriminate, it's widespread, it's grab anything you can grab onto and see what happens.
And so that's going to be the approach that threat actors take. If you have a vulnerability in your SBOM, something that lets you get from the outside into a device, to a system, to a car, whatever it is, it's not going to take that SBOM to let a threat actor know, they're going to find out either way. And so, now, even if that SBOM is the decision factor that leads the attacker down the path to get into the system, your SBOM shouldn't have these vulnerabilities. Part of that transparency is so that if you can look at it and go, hey, wait a minute, guys, your SBOM has 80 CVEs, and you have no rationale for why they're there, what's going on? That's the purpose of having an SBOM out there is to have that transparency, make sure that buyers are making an informed decision. So your SBOM shouldn't be a problem to begin with. If you're taking care of proper cyber hygiene, you're keeping your packages up to date, you have a secure policy in place for vetting third-party components and packages, this should never come up as a problem.
I totally agree. Sorry, go ahead, Christian.
Oh, I wanted to touch on one thing, because we've neglected to talk about it a little bit here, and it's a topic that is a little bit confusing for people. So we talked about the SBOM, and we talked about closed-source and open-source software. Open-source, you know, you can read the source code; it's on the internet somewhere. Closed-source is intellectual property, typically. One of the things that we've encountered quite a bit is SOUP, right? Software of Unknown Provenance, or Unknown Pedigree, whichever term you want to use. And I consider SOUP like a subset of an SBOM: we've got this code that we don't really know where it came from, that we can't tie to anything, but it somehow got into our code. I just want to hear your perspective on that, and how people should be addressing the SOUP, I guess.
Yeah, that one's a really interesting one, and this is actually where generating SBOMs in the embedded space in particular is really, really challenging, to be quite honest. Because for most medical device manufacturers, in particular if you're deploying a fully contained binary or something of that nature, rolling your own custom Linux distribution, all that kind of good stuff, you actually aren't managing packages with a package manager in the same way that you would with, say, NPM or Maven, which makes it really clean and easy to do. And so that's where you'll get a lot of this SOUP, and I've definitely heard the acronym before.
Because oftentimes developers may literally be copying and pasting whole packages off of the internet into your codebase, or, even more complex, you have 10 different teams managing one final device, and one of those 10 teams could definitely start to introduce that. And so the best way to manage it, in my personal opinion, does take a little bit of effort. If you think about some of these embedded build systems, like Yocto or Buildroot, or some of these other ecosystems, they tend to have pretty decent manifests where you have to declare what these packages are. Whether they're open-source or closed-source, you have a name, you have some type of version, you're making a declaration there. You can actually then parse these manifest files, which is slightly different than, say, a pom.xml or package.json, which I think is a little bit more straightforward, to start to build that inventory of assets.
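To make the manifest-parsing idea concrete, here is a minimal sketch that turns a Yocto-style image manifest into a component inventory. It assumes the common three-column format ("name arch version" per line); real build systems vary, so treat this as illustrative only, not a complete SBOM generator.

```python
# Sketch: build a component inventory from a Yocto-style image manifest.
# Assumes one "name arch version" triple per line; anything else is skipped.

def parse_image_manifest(text: str) -> list[dict]:
    """Parse lines like 'busybox core2-64 1.36.1' into component records."""
    components = []
    for line in text.splitlines():
        parts = line.split()
        if len(parts) != 3:
            continue  # skip blank or unexpected lines
        name, arch, version = parts
        components.append({"name": name, "arch": arch, "version": version})
    return components

if __name__ == "__main__":
    sample = """busybox core2-64 1.36.1
openssl core2-64 3.0.12
zlib core2-64 1.3"""
    for comp in parse_image_manifest(sample):
        print(f"{comp['name']} {comp['version']}")
```

Even a crude inventory like this gives you something to grep when the next headline vulnerability lands, which is the point Cortez makes next.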
Now, can you relate vulnerabilities to them? Probably not. Can you associate licenses with them? Also probably not. But I think that maintaining that inventory is still highly important, because between Hacker News and Twitter and all these other vulnerability sources, when something extreme does happen, you want to make sure that you understand what that package is, to then be able to start to associate some of that stuff. So it's a really tricky problem indeed.
I have one kind of interesting war story from a pentest against a medical device. It was an API connection between a binary on your desktop and a cloud component. And when testing that API connection, I found out that I was able to crack the key used to sign bearer tokens. I went, wow, that was the first time I'd ever been able to find that problem. It's not very common to be able to crack a signing key, because usually they're super complex. And this one was long, like 32 characters, a really big, complex, randomly generated key, but it had been cracked before. And so when I was testing it, I saw, oh, this is a known vulnerable key.
I went to the manufacturer and told them the problem. The engineers immediately went after me for it, they were not happy about this, as engineers usually get. They were just like, there's no way this could have gotten into the code. And I said, well, I don't know what to tell you, it did. I moved on, but I was really curious and wanted to figure out how that happened. So I looked into it, and when looking at how to write code for signing a bearer token, I found a lot of examples on Stack Overflow, and one verbatim, with this vulnerable key right there, from like 2011. So, of course, there's no way to confirm, but I think we can all take a guess what happened there.
Oh, 100%. And that's actually where some of these AI coding assistants are also going to be interesting, right? Because they tend to be getting a lot of their information from the same sources, from the same Stack Overflow posts. I don't think there's been any big incidents yet, but I suspect something will happen over the course of the next decade where a lot of companies are going to be impacted by the same snippet of code that was generated across, you know, Cursor, Copilot, and all these others. And I love these tools, I'm not an anti-AI person by any means, but I do see risk in how much information is being synthesized from just a very few sources. And there's a whole other philosophical conversation there; I'll have to do another podcast with you about it.
Yeah, for sure. One thing that comes up a little bit, and maybe you can talk about your experience with this, Cortez, is licensing issues. We haven't really addressed licensing, but if I've got third-party components in my embedded system or my medical device, and there is a licensing issue, in some cases you may have to divulge your source code, depending on the license. I know that's the theory, but have you actually seen that happen? I'm just curious what both your thoughts are on that.
Yes, I will definitely talk to this one, because this is obviously where Fossa got its roots. There are really many layers to this that I want to discuss. The first is actually circling back to a point that you made earlier, which is how ridiculous it is, in my opinion, to have concerns about IP theft or vulnerability blueprints just off of a list of components, because the reality is, we've actually been doing this for a long time in the form of license attribution. You've had to publicly post your list of components and the licenses that they're using, and so a lot of this information has actually already been out there.
Now, there are scenarios where you are going to be required to open-source your code if you're using what they call copyleft licenses. You have strong copyleft and weak copyleft: strong copyleft meaning that you have to open-source absolutely everything, weak copyleft meaning that there are some really specific circumstances under which you would be required to do so. On the copyleft side, that's oftentimes LGPL, GPL, AGPL, a lot of the GPL variants, and there are a few other licenses that you have to be worried about there as well.
And then on the permissive side, you have like MIT and Apache where you can almost, you know, have free rein as long as you're reproducing that license attribution. We've actually seen a lot of this in the wild. Actually, there was a recent legal case. I'll have to find it for you, and we can include it with some of the assets as we distribute, where an open-source developer was using an LGPL, I think, licensed package. If you are using an LGPL licensed package and you're using it statically linked, specifically statically linked, then you're required to open-source your source code.
Now, you do have options which can make things easier. You can rip out that package and stop using it, which is oftentimes a very expensive action, or you can just choose to provide the source. This particular developer was brought to court and they had to provide their source code because of that. And so that was actually a win, I think, for the open-source community to make sure that people are actually providing back and giving back to developers. But there are a lot of really interesting scenarios that we've seen, and I would say in the embedded space, so medical device manufacturers, automotive manufacturers, all of those kind of things, that is where it's most concerning because a lot of the GPL variants in particular, those obligations kick off whenever you're distributing a product.
SaaS products tend not to have too many license obligations, but if you're distributing, say, a binary, dynamically versus statically linking, or if you're modifying a package in any shape or form, these are the types of actions that start to trigger that kind of copyleft requirement. And I'll get off my high horse here in a second, but I 100% view this as a security problem. Oftentimes this gets pushed down to the license compliance team, but if you're a medical device manufacturer and you've already shipped thousands of devices to hospitals, and it's later determined that you have some LGPL variant and you're using it in a statically linked way, you have no ability to rip out that package. You will be required to open-source your code at that point. And now, to bring the topic full circle, you have a blueprint for an attack path. With just the SBOM, you're not worried about having to open-source your source code; this is a completely different animal, and attackers will take advantage. So yes, the license obligations piece is an incredible risk, and one that I think most people underestimate when they're delivering products to customers.
Yeah, really. I'm kind of curious just on, you know, kind of get your thoughts on like another topic that I've been kind of struggling with or seeing a bit more out in the wild is, you know, what are people truly doing with SBOMs, right? I think we kind of reached this point where everyone started requiring SBOMs, the FDA in particular, I've had a few conversations with the FDA. I don't think they're yet at the point where they have a true operational approach to actually ingesting those and looking up vulnerabilities on an ongoing basis. But I do think that that's naturally where the industry is going to be growing to and kind of maturing to. I'd be curious to see, you know, within your customer base if you're starting to see some of the operationalizing, oh, it's a tough word there, of SBOMs and actually using them beyond purely generating them.
I think in the medical space, a lot of our clients come to us kind of begrudgingly. Their regulatory consultant told them, hey, what are you doing for your cybersecurity? And the client said, what cybersecurity? And then that's where we come in. So at that point, the SBOM is a box to tick, and I don't know how much they're doing after the fact with it. Now, having said that, we do get a lot of clients who are very passionate and very proactive about security. They're coming in early, they're trying to design security into their product and ensure that it's really safe. And so they're doing a lot of, I would say, not automated remediation, but setting up triggers based on certain events happening, with those events typically being problems in the SBOM.
So they're using a tool like Fossa to take a look at what's really going on in their product so that they can make informed decisions on how to make it safer. I think that's the direction we need to see everyone going. It's not a box to tick. Well, it is a box to tick, but it shouldn't be just a box to tick. It's got to be a lot more than that. And so when we see these proactive manufacturers setting things up so that a problem in their SBOM automatically creates a Jira ticket that goes to their development team to figure out what's going on, they're taking a very proactive approach to it. They're staying on top of these problems. And someone in that mindset, someone really, really enforcing security at the policy level, at the operational level, they're not going to get hacked.
Hackers are looking for, as we were talking about earlier, targets of opportunity; they attack indiscriminately. They look for the lowest-hanging fruit. They aren't going to spend 700 hours researching an oxygen pump to figure out how to hack into it. They're looking for the oxygen pump with a nasty vulnerability bleeding out into the open. So these manufacturers using the SBOM proactively to really drive security are getting into a much better position. And when it comes to the regulatory side of things, they're ready to go when the FDA has these questions. What does your process look like for this? What's your risk management approach? How do you handle your SBOM? How do you handle continued support for SBOM components? They have these answers ready to go for the FDA, and the FDA is happy about that.
I absolutely love that. And one thing I would comment on: what I've seen is, oftentimes you provide an SBOM to someone, whether that's the FDA, whether that's a customer of yours, whether you're a supplier and you have to provide it to an OEM because that OEM is actually a supplier for someone else, and it kind of continues to work its way up the chain. Oftentimes those SBOMs are then imported into a tool like Fossa, and a whole bunch of vulnerabilities are discovered. But as we were saying in the earlier conversation, there's no context there at that point in time.
So you don't actually know if those vulnerabilities are exploitable in that particular context. And so I think that's actually another way that SBOMs can provide a bit of what I call harassment reduction. Because then that OEM doesn't have to reach back out to you and say, oh my goodness, look how terrible your code is because of all these vulnerabilities. You immediately have that evidence to be able to say, no, you don't need to worry about 98% of these because they're not exploitable, and for those other 2% we have some inline mitigation or something else in place to address them.
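One common way to ship that exploitability context alongside an SBOM is a VEX (Vulnerability Exploitability eXchange) document. Here is a minimal sketch in the CycloneDX VEX style; the component reference and detail text are made up for illustration.

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "vulnerabilities": [
    {
      "id": "CVE-2021-3711",
      "analysis": {
        "state": "not_affected",
        "justification": "code_not_reachable",
        "detail": "The vulnerable SM2 decryption path is never invoked by this device."
      },
      "affects": [
        { "ref": "urn:cdx:example-serial/1#openssl@1.1.1k" }
      ]
    }
  ]
}
```

A downstream tool that ingests both the SBOM and the VEX can suppress the 98% automatically instead of generating a round of "look how terrible your code is" emails.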
Yeah, I think we're coming up on time here. So I'll throw it over to you two for any parting words of wisdom about SBOMs, which seems like we've been talking about SBOMs quite a bit. Yeah, so I'll start with you, Cortez, any parting words of wisdom for people out there?
Yeah, I, I do have some parting words of wisdom, which is, one, I like Trevor's point, which is it's a checkbox today, but I think we as an industry have a real opportunity to improve everyone's security posture by moving it beyond a checkbox. And I, I would really encourage everyone to start thinking about how you can generate SBOMs as kind of your first step, but even beyond that, how can you start importing those SBOMs so you can then look across your supply chain and just make high-quality decisions even beyond vulnerability management? We see a lot of end-of-life, end-of-support, end-of-maintenance status recommendations, which will give you signals into when that next celebrity vulnerability really kicks off. Are you prepared to actually triage that? Are you prepared to actually remediate that?
And the last words of wisdom I will leave is, I don't see it today, but I highly suspect that even hospitals themselves and purchasers of medical devices are going to start requiring SBOMs as they get burned themselves. The FDA is driving it today, but the more that medical device manufacturers put themselves in a place to be prepared for when their customers start requesting SBOMs, which all other parts of the industry are doing, the more you put yourself in a position where you don't have to do a scramble activity. I'd be remiss not to mention that there is a tool named Fossa that would be happy to help you with all those things, if I get an opportunity. But even beyond that, I live and breathe not just SBOMs but the SCA space in general, and I'm happy to have any more conversations about it.
I think we covered a lot of really good points today. If anyone has a takeaway from this, it's that an SBOM is not going to compromise your IP, it's not going to compromise your infrastructure, it's not going to compromise your network. I know we talked a lot about SBOMs being a path for attackers, and we've also heard in the past clients complaining, well, isn't this essentially giving away the secret sauce of how we make our product? No. If you're gluing together a whole bunch of components and that's all you have, sure, it might be a little bit of an indication. But usually, the secret of what you're doing, what makes you have such a great product, is what you're building yourself. These components are supplementary.
So I think that just making sure that manufacturers, customers, everyone really in the healthcare space and in all industries in the security space are becoming aware of all the good sides of SBOMs and trying to push out these misconceptions about what they can do wrong, since for the most part, it's just not true. SBOMs are a helpful tool, it's great to, you know, let everyone know what they're buying, know what goes into a different device or component, network, whatever it is. And yeah, just making sure that everyone, everyone isn't getting too scared of putting their SBOM out there. Awesome. Well, thanks so much, Cortez, for being a guest on our podcast, and thanks everyone for tuning in.