In this episode of The Med Device Cyber Podcast, hosts Trevor Slatterie and Christian Espinosa tackle the often-controversial topic of bridging the gap between medical device developers and cybersecurity experts. They explore scenarios where development teams become defensive after vulnerability assessments, particularly when conducted close to FDA submission deadlines. The discussion highlights the inherent tension between developers focused on functionality and UI, and cybersecurity professionals dedicated to discovering vulnerabilities. The hosts emphasize the critical role of emotional intelligence in navigating these interactions, stressing that penetration testers' primary goal is to help secure products, not to attack developers' work.

They delve into the challenges of achieving truly secure development, acknowledging that while it's possible for developers to understand both development and security, the rapid evolution of both fields makes it unrealistic for one individual to master both. The conversation touches on the lack of widespread adoption of secure software development pipelines, despite the availability of tools and methodologies like OWASP guidelines and static/dynamic application security testing. A significant portion of the episode is dedicated to the impact of unrealistic timelines and budget constraints, which often lead to security being deprioritized. The hosts also draw an interesting analogy between cybersecurity and dental visits, portraying both as necessary evils that are more cost-effective and less painful when approached preventatively.

This episode is essential listening for product security teams, regulatory leads, and engineers seeking to foster better collaboration and implement more robust security practices within medical device development.
Key Takeaways
01. Effective communication and emotional intelligence are crucial for cybersecurity experts when presenting vulnerabilities to development teams to avoid defensiveness.
02. Integrating security practices early in the Software Development Life Cycle (SDLC), including threat modeling and rigorous security requirements, is essential for building secure medical devices.
03. Unrealistic business timelines and budget constraints frequently lead to the deprioritization of cybersecurity, highlighting a significant challenge in the medical device industry.
04. While full mastery of both development and cybersecurity is difficult, developers can significantly reduce vulnerabilities by implementing basic secure coding practices and leveraging specialized cybersecurity expertise for complex issues.
05. Preventative cybersecurity measures, akin to regular dental check-ups, are ultimately more cost-effective and less painful than reactive incident response and remedial fixes.
06. Most major data breaches are caused by misconfigurations and human error, rather than complex coding exploits, underscoring the importance of basic security hygiene and awareness.
07. Tools like Static Application Security Testing (SAST) are effective at identifying common, low-hanging fruit vulnerabilities, but penetration testing remains critical for uncovering deeper, more subtle flaws like those resulting from copied code with compromised keys.
08. Organizations should consult OWASP guides and other resources to establish secure coding practices and integrate security into their CI/CD pipelines from the outset, rather than attempting to retrofit security into existing, established systems.
09. The regulatory landscape, including mandates from bodies like the FDA and regulations such as the EU MDR, is a primary driver for cybersecurity adoption in the medical device sector, pushing organizations to address security concerns they might otherwise overlook.
Frequently Asked Questions
Quick answers drawn from this episode.
In this episode of The Med Device Cyber Podcast, hosts Trevor Slatterie and Christian Espinosa tackle the often-controversial topic of bridging the gap between medical device developers and cybersecurity experts.
Effective communication and emotional intelligence are crucial for cybersecurity experts when presenting vulnerabilities to development teams to avoid defensiveness. Integrating security practices early in the Software Development Life Cycle (SDLC), including threat modeling and rigorous security requirements, is essential for building secure medical...
This episode covers Threat Modeling and Penetration Testing. It's part of The Med Device Cyber Podcast, hosted by Blue Goat Cyber, focused on practical medical device cybersecurity guidance for MedTech teams.
The discussion highlights the inherent tension between developers focused on functionality and UI, and cybersecurity professionals dedicated to discovering vulnerabilities. It's most useful for medical device manufacturers, cybersecurity engineers, regulatory affairs professionals, and MedTech founders preparing for FDA review.
Effective communication and emotional intelligence are crucial for cybersecurity experts when presenting vulnerabilities to development teams to avoid defensiveness.
Listeners also asked
Quick answers pulled from related episodes.
What does Episode 4, "Navigating the Regulatory Landscape of Medical Device Cybersecurity," cover?
Episode 4 of The Med Device Cyber Podcast covers Navigating the Regulatory Landscape of Medical Device Cybersecurity.
Welcome back to The Med Device Cyber Podcast. I'm Trevor Slatterie, here with co-host Christian Espinosa, and today we're talking about a little bit of a controversial topic: bridging the gap between developers and cybersecurity. How are you doing today, Christian? I'm not doing that great. I have a cold, and it's like one hour I'll feel really good, next hour I'll be coughing, and my nose is running. So, you know, it depends, but I'm glad to be here today, talking about this controversial topic because I think it's a fundamental problem that we are not making much progress on. You know, someone should write a book on that. It seems like a good topic to cover. Yeah, I did write a book on that. Are you talking about my book? I am talking about The Smartest Person in the Room. Yeah, it's about the egos of high-IQ individuals and how they want to protect their identity at all costs. They want to be smarter than everybody because that's their identity, which doesn't make for good collaboration or communication typically. No, and that can be a problem with developers and cybersecurity experts. If someone's really good at making something or really good at breaking something, they're going to be pretty proud of that fact, and it can lead to clashes here and there. Yeah, and since we're talking about developers, I think it's interesting because we've had this scenario come up quite a bit. A medical device manufacturer waits until, say, 60 days before their FDA submission. They come to us, and we test everything and find like a thousand vulnerabilities. Then we tell them the vulnerabilities, and their software developers are like, "There's no way you could do this," right? They get defensive and argue that there's no way we could have done what we did. We have to have some emotional intelligence, which my book is about, to navigate that scenario. But do you think that's more the norm or just kind of a rare scenario where the developers get defensive and say there's no way you could do this? What do you think, based on our interactions? It's pretty much a coin flip every time you have this interaction, whether or not they're going to go, "Oh yeah, that's really good to know. Cool. Let's fix it," or if they'll be a little bit defensive. It makes sense. That's their product. They built it. They're proud of it, and you're saying, "Here are a thousand things you did wrong." That can be a little bit upsetting for some people. I guess that's true. I mean, I was talking to Melissa the other day, and I pointed out a couple of things that she didn't improve on. She's like, "All you're doing is complaining about all the stuff I did wrong, not the stuff I did right." I imagine a developer feels the same way. Well, there was your first mistake: ever pointing out anything wrong. Yeah, I'm constantly trying to make improvements. Sometimes I don't learn from my mistakes until the hammer hits me over the head, unfortunately. It's like I have to make the same mistake a few times. I try not to, but I'm not as smart as I think I am sometimes. Yeah, and I think with developers making a product, those two different responses do require navigating the situation a little bit differently. This isn't to say that every time we talk to developers, they immediately come to attack us because they think we're attacking them, which we aren't. We're just trying to help them build a secure product. We're trying to secure their devices, and we're there to help. A lot of the time, people recognize that from the get-go. We're there to help.
We're not trying to cause problems. We're not trying to make them feel bad or attack them. We're just saying, "Hey, this is what you can work on to design a safer product." And they're excited. They're enthusiastic. They like to see this come out, and they can't wait to get the fixes done. That's the interaction that we always love to see. And I think that those are obviously really easy interactions, but even when someone is saying, "Well, how are you finding all these problems? I don't think this is fair. Like, I can't believe you find all this," there's a right way to navigate it. We can point out, "Hey, this isn't us trying to poke apart your product. This is the standard that we have to follow. This is what you guys have to follow. We're just helping you follow that." That's all we're doing here. Yeah, and we've had a couple of interesting scenarios come up since we're talking a little bit about emotional intelligence and navigating this. Didn't we have a client that we pointed something out to them? I think it was a way we could expose some sensitive data, and they didn't really fix it, but they tried to trick us, didn't they? They tried to say for the one record, "if you look at this one, it's fixed," but if you looked at a different record, it wasn't fixed. I don't know how we navigated that or what happened. Can you explain that a little bit? Yeah, situations like that can come up where, and in a similar vein, we'll have people just disagree with findings. They'll say, "We don't think this is actually a problem." And we say, "Well, you know, according to ISO 27001, it is a problem. This is a key cybersecurity consideration that you missed." But sometimes fixes, I think there can be two situations where that issue comes up with the partial fix. Either developers don't exactly understand what the problem is, and when that's the case, that's our fault. Our job as penetration testers is primarily to deliver a good report. It doesn't matter how good you are at hacking into something if you can't convey that in an easy-to-understand way. That's the number one job of our testers: deliverable. I agree. And from my experience, most people don't like writing reports. No, which is the challenge, right? Yeah, that's actually all that matters with a pen test. It doesn't matter how great your hack was; it's how well it was conveyed in the report and how well it was conveyed how to fix this thing. Yeah, no, exactly. If you go to the developer like, "Let me tell you about how cool this hack was," they'll go, "Oh, how do I fix it?" And if you go, "I don't know. Good luck," then that's useless. You're providing no value to them. So that can be an issue that comes up. If the developer doesn't understand what the problem is and they don't understand how to properly fix it, then that's on us for not explaining it well enough. Now, the opposite situation can come up where they understand perfectly well what's going on, and they just choose not to, or they make an insufficient fix even if they know what the proper one is, or it can be a technical constraint where they just can't fix it all the way. When that comes up, we have to find the right balance of, if there's a technical constraint, "Okay, how can we get it good enough so there's still a little risk, but it's heavily reduced?" If they're doing it wrong or if they're not actively just trying to get it out of the way, then we just have to keep working through it until it's fixed. Ultimately, risk acceptance is decided by the client. 
We can't accept risk for them. We can only explain the risk to them. So if they say, "We don't care about this," then that's their decision; they can just leave it. But we just have to inform them of what risk they are taking on because of that. Right, do you feel this challenge is solvable? Because, from my perspective, I'd have to look at it through the lens of a software developer. Their job is to develop software, this product that's functional, that has a nice UI, and that's mainly their job. And cybersecurity's job is to figure out all the ways to break this product. So it's a very different lens we're looking at the code through. Do you think it's possible for a software developer to actually understand both how to develop the software but then also how to develop it securely? Because they have to have both perspectives. Do you think that's feasible, or do we always have to have a cybersecurity person or team working in conjunction with the software developers? I do think it's possible, but I don't think that it's really that realistic. Developing really great software and really great products is extremely hard. That's why all these top developers out in San Francisco are paid, you know, a million bucks a year to write code. They're creating really cool, impactful products, and they're doing it in a way that they're fixing problems with solutions that other people can't come up with. That takes a lot of skill, that takes a lot of practice, that takes, you know, 20 years of experience writing code. In the same way that super talented hackers, they can get into anything, and they spend years refining their craft. All they do is live and breathe cybersecurity. And so I think to be truly great at both of those, there just aren't enough hours in the day. The fields are evolving too fast. Every time you turn around, there's a new exploit or a new hacking technique out there. There are new development tools, processes, and best practices, it seems, every other day. So being able to keep up with all of that is really, really hard, and I don't think that most people have the bandwidth to do everything. So that's why it's good to have a specialty and kind of focus on one thing. Yeah. So there are some ways to improve that, like a secure CI/CD pipeline or secure software development pipeline where you have gates. And the gate would be, I do this unit of code, and then I run it through this tool, and if it's not secure, I go back and fix the code before it goes to the next iteration or the next, you know, the system as a whole. Do you feel like OWASP has a guide on how to implement a secure software development pipeline? Do you feel like most people have a secure software development pipeline? No, I'd say it is the tiny minority that do. And there's a lot that goes into a real secure software development life cycle. There are a lot of steps that have to be considered. There's a lot of review and testing, and I think that developers with standard development practice, like maintaining version control, most developers know that. I'd say pretty much any developer knows that. Unit testing, pretty much any developer knows that, even if they hate it. Writing code documentation, everyone knows it, even if they don't do it. And so those are some of the main core concepts of software development, but some security-specific things, like where are your security requirements before you even touch the keyboard to do any code development? What security requirements need to be built into this device? 
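To make the "gate" idea described above a little more concrete, here is a minimal sketch of what one such pipeline stage could look like: run a SAST scan over the code that was just written and refuse to let the build continue if high-severity findings come back. The Bandit scanner, the report fields, and the run_sast_gate name are illustrative assumptions for this sketch, not tooling discussed in the episode; whatever SAST or DAST tool a team has standardized on would slot into the same pattern.

# Minimal CI "security gate" sketch (illustrative only).
# Assumes the Bandit SAST scanner is installed (pip install bandit) and that
# any HIGH-severity finding should break the build.
import json
import subprocess
import sys

def run_sast_gate(source_dir="src", report_path="bandit-report.json"):
    # Run the scanner; Bandit exits non-zero when it finds issues, so parse
    # the JSON report instead of relying on the exit code alone.
    subprocess.run(
        ["bandit", "-r", source_dir, "-f", "json", "-o", report_path],
        check=False,
    )
    with open(report_path) as fh:
        report = json.load(fh)

    high = [r for r in report.get("results", []) if r.get("issue_severity") == "HIGH"]
    for finding in high:
        print(f"{finding['filename']}:{finding['line_number']} {finding['issue_text']}")

    # A non-zero return fails the pipeline stage, sending the code back to the
    # developer before it moves on to the next iteration.
    return 1 if high else 0

if __name__ == "__main__":
    sys.exit(run_sast_gate())

The same pattern applies to dependency scanning, secret scanning, or DAST: each gate is just a step that reads the tool's report and refuses to pass failing code downstream.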
As soon as you have an idea for a product, how can you threat model that device immediately before you've done any development? Figure out what the problems are before you introduce them into the system. These steps have to happen really early in the development life cycle, way earlier than I think most developers are doing this. Well, I don't think anyone knows about those steps, like they should even do threat modeling. Yeah, it's just not as common knowledge. It's not very commonly taught. I always like to use the example: when I was in college, I went to school for cybersecurity. I had a lot of friends that were software engineers. I had to take secure development classes, so I had to learn about secure development practices, developing a secure software development life cycle, CI/CD pipelines, all that stuff. They did not. And they were the ones actually writing code. So it can just partially be an education problem. Developers don't know what they don't know, but it's just an awareness problem too. It's not a very forecasted issue. Nobody really talks about cybersecurity outside of the cybersecurity bubble, and that leads to developers building insecure CI/CD pipelines. Yeah, I don't want to put all the blame on the developers. I think one of the challenges, because I know a lot of developers, and we work with a lot of them, is unrealistic timelines as well. I know when I worked as a director a long time ago, I oversaw a big group of software developers, and we had a roadmap for our product we were developing. Let's say this was released in June, and it's January. One day, the CEO of the company came to me, and this is like in January, he said, "We sold a bunch of units; we have to ship them by the end of February." I'm like, "Well, we haven't done all of our security testing. We didn't do this. We didn't do that." He's like, "I don't care. Just package up whatever got done and ship it out the door." I'm like, "Okay, it's got a lot of bugs in it, and it's got a lot of problems." But I'm sure that same scenario happens all the time everywhere. So, even if the developers had a plan to secure the software, there's a business decision made that's like, "Screw it. Just ship it out the door, right? Or let's just submit it to the FDA, submit it to EUMDR. We don't care. We got to keep the timeline going." Do you think that's realistic? That happens a lot. Oh, all the time. And I think that's a really big problem because, like, you know, we say it all the time, security is not anyone's favorite budget allocation. As soon as they're budgeting, it's a necessary evil, as you like to say. It's a necessary evil. I know when all the big tech layoffs happen, cybersecurity teams seem to get hit the hardest often because as soon as there are budget constraints, everyone wants to get rid of security. It's annoying. They don't like them anyway. So, cybersecurity people are difficult to deal with most of the time. Yeah, they're frustrating, annoying, and they all have huge egos. It's super complicated. And so, you see that it's never the priority. It's always on the back burner. And if there is any reason why it needs to go, it's gone. And so, budget constraints, cybersecurity is gone. Timelines, cybersecurity is gone. It's always the first thing to go, which, you know, it's a huge security problem, but it's also a regulatory problem, which is what I think cybersecurity is at its core with medical devices. It's a regulatory problem more so than anything else. 
And so, if you're not— but it's only a regulatory problem because there's a mandate to force cybersecurity. I feel like if there wasn't a mandate, which is a compliance driver, nobody would care about cybersecurity. Totally. Yeah. If you don't have to do it, you don't want to do it. You know, cybersecurity is very expensive. Getting all this done takes a lot of time. It frustrates the developers because they have to go back and fix all these problems. And so, if there wasn't any mandate around it, if there were no laws around cybersecurity, if there were no punishments if you get PHI breached or something, nobody would do it. Nobody would care about it. And so regulations are what drives cybersecurity being an industry, really. Yeah, 100%. And it's like the necessary evil, like you said. And I know, ironically, I don't remember what episode we were talking about, comparing going to the dentist to cybersecurity, because it's a necessary evil. I hate the dentist, and I tell you, never go. But now I have like part of my filling, a filling in my tooth, that's falling out. So it keeps cutting my inside of my mouth. So I'm going to have to go to the dentist, but I don't want to do it. Huh? I think I remember five or six months ago, bugging you to go to the dentist after this conversation. I know. And maybe I got so worried about it, I was grinding my teeth at night or something, and it created the problem, you know. But now I have to go, but I don't have a dentist. I have to find a freaking dentist, which is another challenge in itself. Just like choosing a cybersecurity vendor, I have to find a dentist that I can trust. Well, I know a great one in Scottsdale, so I'll send you his number. Okay. That's a good example. So, you know, having to deal with this problem, you're fixing a problem. You have something that is frustrating, annoying. It's already cut your mouth and continually cutting my mouth. Yes. Yeah. It's continually cutting your mouth, and you know, it's probably going to be more expensive to fix than it would have been to prevent. And so I feel like that's a perfect analogy for how cybersecurity works. Preventative cybersecurity is going to be cheaper than incident response fines and dealing with your problem. If you fix it, and probably less painful, and probably that's why I want to go. I know it's going to hurt. Yeah. If you -- I'm not a wuss, but last time they messed up with anesthesia, and I almost passed out. It hurt so bad. I was like gripping the chair so hard, I thought I was going to rip the chair off, and tears were coming down my eyes. I'm just trying to gut it out, but I had to like raise my hand and say, "I can't handle any more." You know, it was a traumatic experience. Yeah, that sounds like a good reason to not want to go to the dentist. That's probably how people feel about cybersecurity, though. Yeah. Yeah. Yeah, if they have a bad auditor, a bad consultant, or penetration tester who doesn't know how to deliver a report, then they go, "Great, we just wasted all this money. We still have problems." And why did we do that in the first place? Yeah. It was a painful experience for them, kind of like the dentist for me. Yep. So, if we're trying to bridge the gap, which is the, you know, the topic between software developers and cybersecurity, I know there's this push, there's been this push forever with DevSecOps, you know, the secure software development life cycle. 
And I know when at first it was dev developers and then operations, like they need to integrate together. So, we actually develop software that people can use properly. And now we put, you know, you see these Venn diagrams, security in there. So DevSecOps, but I still feel like, and we've been talking about this for like 20 years, we haven't made much progress. And I think a lot of it has to deal with, maybe we have made some, but like you said, it costs more to add cybersecurity, and some manager or the leadership in a company made a decision just to ship it whenever it was done. So I think that's one problem. But I think the other problem you alluded to in college, we don't really teach software developers how to develop secure code, even though there's a data breach every day because often it's unsecured code or insecure code. And often it's a misconfiguration. It's like a combination of those two. And I would say it's probably 70% unsecured code and 30% misconfiguration. I don't know the exact stats, but it seems like if the majority of breaches are caused by unsecured code or insecure code, some people like to say, then how come we don't start teaching this stuff in school, in coding school, to software developers? Or is it like we talked about earlier, is this even something that is feasible? I think it is feasible, and it's feasible to an extent. And we can go back to the original point where to be truly excellent at cybersecurity or development, I think that needs to be your sole focus. You can be decent at one and great at the other. If you know general core cybersecurity concepts, data validation, how to prevent, you know, race conditions, eliminating timing attacks, basic configuration stuff, you're going to get rid of 80% of problems, maybe even 90%. And then what's leftover, that's where you have someone doing targeted specific consulting. You have a penetration tester where they live and breathe this stuff. But if you've already covered 90% of the problems, they're only picking apart a few things that you get a report back with three or four items. That's an easy fix compared to if you get that report back with 20 items. And those, you know, most of reports, we see the same vulnerabilities in the same products all the time. Anytime we're dealing with a web app, I can, before even taking a look at it, before even knowing what the app does, I can predict three findings that are going to be on it because they're always on it. They're always just the low-hanging fruits. An actually really interesting statistic in big data breaches, in like organizational compromises, only 3% of breaches are because of a coding problem in an application that lets you get all the way through the app into an internal network. It's only 3%. It's close to 80% are human error due to a misconfiguration or due to like a phishing attack. And when I say misconfiguration, it can also include a network appliance, so like a VPN gateway or a firewall. Those can have misconfigurations or vulnerabilities in them pretty commonly. But only 3% of the time it's an actual exploit in a web application. So it's not super common to really have massive compromise from that because attackers are going for those low-hanging fruits. There's a misconfiguration. So you think the statistics are different than I said? It's the reverse. It's more misconfiguration than coding errors. Well, the reason for that is coding errors are harder to find, really. And misconfigurations are super, super easy to find. 
You can run a tool like Shodan, which just scans across the internet, and it says, "These eight servers are misconfigured. Go get one of them." If you're looking for a targeted attack, if someone like a nation-state actor is trying to take down one website, they're going to find problems. They're going to do it. They're going to get through. But it's not the easy solution. And so usually the use case is like a nation-state attack or terrorism or a targeted attack instead of just trying to get money through ransomware. And the terrorism angle or like the very targeted attack, it would be more prevalent in medical space as opposed to like finance or something like that. Yeah, I would think with like GitHub and Bitbucket and the integrations with like SAST tools, static application security testing tools, and DAST tools, dynamic tools, that it would be easier to set up a secure software development pipeline. What are your thoughts on that? You know, honestly, it's not that hard to set it up. It takes time. It takes planning. And part of why it's not so prevalent is developers will have been in their flow. They've been developing code this way for 10 years, and they've had their pipeline for 10 years. Changing that up, people don't like change. People don't want to change their systems, especially when it works. So if you're standing up a new development pipeline or a new project, a new company, whatever it is, it's really not that hard to do it right the first time. You build that pipeline in, you have everyone stick to it, and it's going to fix again, that 90% of problems, as long as you just stick to that pipeline. It's integrating it into an existing system that's often complex and difficult. Yeah, I guess that's true. People don't like to change that much. But we have the tools to make it a lot easier. Static application security tools have gotten better, but they're also like largely prone to false positives, which is another challenge within itself, right? Yeah, it can definitely be an issue, but, you know, luckily they aren't very prone to false negatives. Static application security testing tools are looking for just patterns that lead to vulnerabilities. They're not going to catch everything like they won't catch logic flaws where the actual operation of the device causes a problem, or they won't catch necessarily something like a race condition where a race condition is if you do a certain action and immediately follow it with another action. It might try to do two things at once, or it'll swap the order, and that can lead to a problem. It's not going to catch things like that, but it's going to say you missed all these hard-coded credentials. It's really good at finding those. It's going to say you can have a memory problem here. You're not validating your data input here. It looks for stuff like that, the low-hanging fruit that's super common, easily exploited, and prolific. And so these static testing tools do a really good job at getting that wide coverage. Sure, they might say a lot of things like, "Oh, you're not checking. You're not freeing this memory here when you do it in the next line, and so it's just finding something fake." But it's not going to miss it if you aren't actually freeing that memory. It's not going to skip right over it. It should catch it most of the time. So I've known software developers, we've had a few clients where the developer didn't know how to write a specific function. They searched the internet, found that function, put it in their code. 
How are those things detected? Because I've seen it before where they left like the comments, it's exactly from the internet, even like passwords and things. How do we, what's the way to find those things, and how do we solve that challenge? That's a pretty good question. I'm sure that that could be a pretty cool AI solution. I'm not aware of any tools that are going to like scan your code and say, "Hey, I found it in Stack Overflow over here. You got to write this yourself." But that would be, like, if we run it through a Software Bill of Materials, that kind of does some of that, but like for the SOUP perspective, the software of unknown provenance, but I mean, would that detect it? That would just look at the components. So if your component has a known vulnerability, it'll find it, but it isn't, it's not really looking to see if developers are just copy-pasting code. I know we had one example where there was a key for authenticating to an API server, and that functionality was published on Stack Overflow, how to write out the code. And they left like the signing key in that code. It obviously got breached forever ago, like 10 years ago, but the code as a general concept for creating that signing functionality still worked. And so the team just took that code, stuck it right into their codebase with the key still in it. And so when we were testing it, we captured the key, and that got flagged as a known breached key. And we go, "Okay, we tried to decrypt it. Sure enough, we were able to decrypt all communications and authenticate ourselves as anything to this product." And so, if they had just switched the key for something else, it would have been totally fine. But they just ripped code out of Stack Overflow, stuck it straight into the product without reading what the code did. In that scenario, SAST wouldn't have caught that. The SBOM wouldn't have caught that. The penetration test is what caught that. Exactly. So the SAST is going to say, like, "You know, oh, you're hard-coding a credential here," but it's not going to tell you that's a bad credential. And so then you take that credential and you stick it into an environment variable. Boom. SAST doesn't catch it anymore. The SBOM isn't going to catch it 'cause it's technically in the code that you wrote, but it's not in a third-party component. What's going to test that is a penetration tester catching that key, taking a look at that key, and going, "Oh, this is a breached key. I can decrypt this because I already know what the key says." And so you just match it against that, and then sure enough, you have the decrypted information, and it's the keys to the kingdom at that point. For sure. Well, we're coming up on time here. So what's some advice or words of wisdom to a software developer that has all these challenges? You know, they have like these timelines. They may not know cybersecurity. You know, what are some pieces of advice you think would be useful for them? I think it's about striking the balance. So cybersecurity, we always say it, there's a big awareness problem. People aren't aware of the requirements. People aren't aware of what can go wrong. That's not an easy solution, for sure. That requires a lot more effort than I think we can provide here. But for any developers listening, what they can do is find the balance. What can you do to understand 80% of the vulnerabilities that'll pop up? Input validation. Don't hardcode your credentials. Just general best practices for cybersecurity.
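As a concrete picture of those two basics, here is a short hedged sketch; the API_SIGNING_KEY variable name and the validation rule are made-up examples for illustration, not details from the device discussed above.

# Sketch of the two "low-hanging fruit" habits mentioned above (illustrative).
import os
import re

def load_signing_key():
    # Pull the key from the runtime environment (or a secrets manager) instead
    # of hard-coding it in source, and fail loudly if it is missing rather than
    # falling back to a default baked into a copy-pasted snippet.
    key = os.environ.get("API_SIGNING_KEY")
    if not key:
        raise RuntimeError("API_SIGNING_KEY is not set; refusing to start")
    return key.encode()

# Allow-list validation: accept only input that matches the expected shape
# instead of trying to strip "bad" characters after the fact.
DEVICE_ID_PATTERN = re.compile(r"[A-Za-z0-9_-]{1,64}")

def validate_device_id(raw):
    if not DEVICE_ID_PATTERN.fullmatch(raw):
        raise ValueError(f"invalid device id: {raw!r}")
    return raw

And as the copied Stack Overflow story above shows, moving a compromised key into an environment variable only hides it from SAST; the key itself still has to be rotated.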
You don't need to go all the way down the rabbit hole. Just get good enough so that you can get rid of 90% of problems. And that 10% is when you go to specialized cybersecurity help. Yeah, and I know like owasp.org has some guides on how to develop secure code based on the language you're writing the code in, some cheat sheets. So I would recommend going there as well. And, um, yeah, I guess we'll wrap up here, and I'm going to go see the dentist. I don't know. I'm going to put it off for a while. But I feel like I have a second, what were we talking about in the previous podcast? A second-order attack, which is my tooth, which is then attacking my inside of my mouth. So it's making it bleed. So I got to, I got to do something about it. Yeah, you got to get that fixed. Go to the dentist. Okay. All right. Well, thanks everyone for tuning in. I hope you found value in this episode, and we hope to see you on the next one.