In this essential episode of The Med Device Cyber Podcast, hosts Trevor Slattery and Christian Espinosa, joined by special guest Jake Rodriguez of Triangle Tech, delve into the burgeoning role of AI in medical device cybersecurity, marketing, and software development. The discussion navigates the complexities and risks associated with
Key Takeaways
01AI-generated content, especially for marketing and SEO, requires careful validation and refinement to ensure accuracy and authenticity.
02Vibe coding, while useful for rapid prototyping and internal tools, poses significant security and compliance risks for medical device development due to its unstructured nature.
03Medical device companies must adopt a multi-channel marketing strategy, leveraging AI for content generation ideas and optimizing for AI search platforms in addition to traditional search engines.
04The medical device industry's slow adaptation to rapid cybersecurity changes, coupled with the long development cycles of devices, creates inherent vulnerabilities.
05Malicious actors are increasingly using creative prompting to bypass AI guardrails, highlighting the need for robust security measures in AI-assisted development.
06Building trust in an era of pervasive AI-generated content will increasingly rely on authentic, in-person interactions, podcasts, and strong personal branding.
Frequently Asked Questions
Quick answers drawn from this episode.
In this essential episode of The Med Device Cyber Podcast, hosts Trevor Slattery and Christian Espinosa, joined by special guest Jake Rodriguez of Triangle Tech, delve into the burgeoning role of AI in medical device cybersecurity, marketing, and software development.
AI-generated content, especially for marketing and SEO, requires careful validation and refinement to ensure accuracy and authenticity. Vibe coding, while useful for rapid prototyping and internal tools, poses significant security and compliance risks for medical device development due to its unstructured nature. Medical device companies must adopt a...
The discussion navigates the complexities and risks associated with It's most useful for medical device manufacturers, cybersecurity engineers, regulatory affairs professionals, and MedTech founders preparing for FDA review.
AI-generated content, especially for marketing and SEO, requires careful validation and refinement to ensure accuracy and authenticity.
Listeners also asked
Quick answers pulled from related episodes.
What does Episode 44 cover about "Why AI Literacy Matters for the Future of Healthcare with José Acosta"?
Episode 44 of The Med Device Cyber Podcast covers Why AI Literacy Matters for the Future of Healthcare with José Acosta.
Pre-fills with: "AI-generated content, especially for marketing and SEO, requires careful validation and refinement to ensure accuracy and authenticity."
People are more going towards Gemini and Perplexity and Claude. I think OpenAI kind of took the 'L,' but in the future is going to be doing more LLM search instead of Google search. But of course, if you're skeptical and you really want the real answers, it's best to use Google to validate your sources.
If you turn over the reins to AI and say, "Build me a medical device," the FDA is going to burn the building down. You're never going to be able to have a safe and effective product. Why would the China Airlines app need access to my microphone or my camera? So, I think most people just click on next, next, next, next, and pretty soon, like their phone is listening to things they don't even know, but they gave it permission. The whole brand of Apple just got into their heads, and now they're like, I have to have an Apple.
Could you explain this vibe coder thing? You're like smoking pot and like coding or something? People are creating apps based on creativity. Something random that they want to make, turn it into an app or a website. Before you were just doing it to people, now you're doing it to the AI. Malicious actors are getting pretty good at this creative prompting to try to trick the AI and break it out of its own guardrails. And so that's where you start to see those really malicious use cases.
Hello and welcome back to the Med Device Cyber Podcast. Today we're wrapping up the quarter. We've got a really exciting conversation ahead. We're going to dive into some exciting topics around AI, marketing, cybersecurity, and how any of those three things can tie together. I'm your co-host, Trevor Slattery, joined with our other co-host, Christian Espinosa, as usual. And here we have a really special guest today. Jake, I'll go ahead and turn it over to you for a little bit of an intro and love to hear a bit about what you're working on.
Yeah, hello everyone. My name is Jake, and I guess starting with my origin story, I went to college in Richmond, Virginia Commonwealth University. And there I was on the pre-pharmacy track, um, worked in a pharmacy as a tech and didn't really like it. Um, explored different areas such as the realm of research, and during my time in undergrad, I did a research project on heparin sulfate, and it really opened my eyes to the pharmaceutical industry. And ever since then, you know, I've been in process science and pharmaceutical manufacturing, and how I got into marketing? Well, that's a funny story. So, when COVID hit, I was trying to find out what was the differentiation between traditional vaccines and these new mRNA vaccines, and I couldn't find a lot of information on traditional vaccines. And so, I looked up why was this happening, and it brought me into Google SEO, and I just went down a rabbit hole of understanding SEO and marketing. And you know, ever since then, I just been learning more about marketing. And then I started my own B2C agency called Let's Social Media. And recently, I kind of rebranded and turned into B2B. So now I'm working with clients in pharma, life science, um, a little bit of tech and manufacturing vendors.
And where are you coming to us from? I am currently in Raleigh, North Carolina. I have a couple questions. Uh, you mentioned like pharma tech. They just like count the pills and put in the bottles. Is that? And then explain like the side effect? I know you said you left it because you're bored with it. Is that pretty much true? So, as a pharmacy technician, you're calling patients, you're calling nurses and doctors, handling, you know, patient transactions and stuff like that. I didn't really handle the drug side of things. That's what a pharmacist would do.
Okay, maybe that's a pharmacist. I don't know. That's a lot of admin work. Cause you have to like go to pharmacy school or something. But I just see them like taking like a massive amount of pills and like counting them one by one and put them in a bottle and give them to a patient. I'm like, you know, why do you have to go to a lot of school for this? It seems pretty simple, but you know, pretty intensive school, too. And I mean, I mean, they have to understand the side effects of what would happen if it's on the bottle, though. It's on the bottle. They print out the label, they print out on the bottles. Is it, though? I don't think a lot of people understand SEO. You mentioned the term SEO, which is search engine optimization. I probably spent thousands of hours myself trying to master SEO, and you know, it always changed a little bit, uh, especially with AI, and now you have to be search engine optimized for AI, um, platforms so they can, if someone searches in ChatGPT, they can find your organization. So maybe let's like step a little bit back and explain, Jake, from your perspective, what SEO is and some of the things we can do to optimize that.
SEO stands for search engine optimization. And basically how it works is using keywords and using those keywords for customers or people in your realm trying to find you. Um, let's say you're optimizing SEO for certain medications. So you would bring in, you know, keywords pharmaceuticals, FDA, educational information on that drug. And right now there's a lot of SEO going on with LLMs. And from what I understand in my research that I've done is that these LLM models tend to segue in searching the information and web scraping data from websites that have a lot of traffic such as like Quora, Reddit, YouTube. I don't know if you guys are familiar with the Medium blog post, but that's a lot of tech, and so there's a lot of traffic going to the AI search engine optimization realm.
Now, what about your research between traditional vaccines, I think, and mRNA vaccines that I guess led you to SEO? Like what was the challenge you were looking, you mentioned like when you're researching to find the difference? You had you couldn't find a lot of the differences, and can you maybe like explain a little bit more about that? Yeah. So when COVID happened, there was a lot of buzzwords going on, new co-vaccines using mRNA technology. And I just wanted to find the difference between mRNA and traditional vaccines. And so every time I typed in vaccines or traditional vaccines, all I would see is mRNA and then sponsored post of pharmaceutical companies like Pfizer or Moderna, you know, showcase their new technology.
I think it's a really difficult one to nail down, especially seeing how it's becoming more and more of this AI push. When we're talking to prospects, and you know, companies are coming to us asking for help with any of their cybersecurity services, um, we'll have a little form that we ask them to fill out, and part of that is where did you hear about us? And it's incredible more and more we're getting ChatGPT, Grok, Gemini, Claude as these sources that were getting some input. And even just this morning, right before we started recording this, I got an email from someone saying, you know, some cold outreach saying, hey, are you, are you showing up in ChatGPT when people search medical device cybersecurity? Are you optimizing your platform, your program for AI SEO? And I know we are showing up in these platforms, but I don't know, Christian, you're a little more visible to that than I am. I'm not sure what that difference looks like from catering the keywords to what an LLM likes as opposed to what Google would like.
Well, Jake, Jake sort of hit upon it, and it's a little bit of a challenge. Uh, I track this every day pretty much on Ahrefs. You can see where we rank in our where we're increasing with all the AI platforms, but they don't use traditional web scraping as much as they look at things like Quora and Reddit like in Medium like Jake mentioned. So, as a company, you have to have people talking about you on those platforms because, you know, it's interesting that the AI like ChatGPT will value that over other sources in some cases. It's a very challenging strategy, uh, because you have to get people talking about you on these other platforms as well as backlinks as well as a lot of traffic to your own website, as well as your own website SEOed, uh, as well as like, you know, media talking about you. Like I said, it's a lot of different angles to look at, and it's something that I track because like you noticed, Trevor, people will say, we have in our form, how'd you hear about it? A lot of people say Grok or ChatGPT now, uh, but that's been a lot of effort almost a daily thing that I have to like keep an eye on. Also noticed for some reason we see Grok, the most common LLM for people finding us. I'd expect it to be ChatGPT or Gemini, but are you guys on X actively posting? Uh, yeah, we're on X actively posting. We use a tool to post push out to many social platforms like Instagram, X, LinkedIn, Google. So what are people searching on these chat boxes to find out who you guys are?
Seen a couple of different searches. People will say, you know, oh, medical device cybersecurity company or who can do an FDA success or FDA compliance, something like that, pen test. Those are some of the more common ones that we get. Yeah. Every time I ask, what prompt are you putting in, I'll get sort of interesting feedback. Sometimes the LLM will get stuck and it says we're the only company, the only company worth considering. And it'll just keep paring that. Well, that, if I can make that happen, I'll make that happen. Oh, there you go. I mean, that's the idea. Yes, we are the only company worth considering. But it is kind of interesting. It'll say, you know, hey, we're this one. And then when someone asks, can you show me some other examples? It'll say there are no other examples, which I think is really weird. That's awesome.
I'd be curious to hear, Jake, your thoughts on as we're going more towards, you know, I know we've been talking about some of the different strategies for SEO, for, I guess, AI SEO as opposed to more traditional SEO. As AI tools are becoming more and more commonplace, people are using ChatGPT in lieu of Google search more often now. How much more important do you think catering your marketing strategies towards these AI optimizations is going to be as opposed to some of the more traditional methods? I guess it depends on your target audience, right? I feel like a lot of younger people are using LLMs, and I would say people are more going towards Gemini and Perplexity and Claude. I think OpenAI kind of took the 'L.' But yeah, I think in the future, you know, my generation is going to be doing more LLM search instead of Google search. But of course, if you're skeptical and you really want the real answers, I think it's best to use Google to validate your sources or at least if you're searching on LLM models, request where the resources came from.
Yeah, one of the things I, cause I'm trying to optimize ours as well. I noticed, uh, through my research that ChatGPT relies more on Bing than on Google to pull in information. So from an SEO perspective, you know, we have to be optimized on both because not everything looks at Google, and, and we are. I, I think it's interesting like the LLMs are already biased towards certain things and they don't understand how to pull in information. So now we're increasing that bias if people are relying on ChatG for something, and it also hallucinates. There's all these other issues with the LLM. So, um, like you said, I think it's important to do your own research and not just rely 100% on an LLM.
And we, we have this discussion all the time about AI. We had somebody on our podcast not too long ago that was saying doctors were uploading ultrasound images to ChatGPT and asking it to diagnose the image for them and things, which is another problem with it as well. That's interesting. You know, if I were to optimize a cybersecurity medtech device company, I would be omnipresent and just pump out educational content on YouTube, a lot of different social medias depending where your audience lives. And I think the future is going to be more in-person and self-branding because there's so much AI content out there. It's basically regurgitating the AI content that's already out there. And if you looked on LinkedIn, it looks like people are just copy-paste from ChatGPT because I use ChatGPT and I was like one of the first consumers when it came to market, kind of have pattern recognition on when people are using it because I've used it before.
Yeah, I think LinkedIn is a especially severe offender with that. It is, I would say, more AI generated content than not since people are trying to get enough engagement and trying to build enough personal brand there. I think that's a good point, and I hadn't really thought about it like that. There is, people might be getting some fatigue with how much AI content they see. I know, you know, the amount of AI outreach that I get on email and LinkedIn, probably 100 messages a day. And eventually, it just all becomes noise. And so, the only thing that I'll listen to is probably someone face to face wanting to actually have this conversation instead of just the AI snake that's eating itself forever, right? A snake that's eating itself. What does that mean? It's just an endless cycle. The snake keeps on eating itself and the circle just keeps getting smaller.
Okay. What are we going to say, Jake? Sorry. I think podcasts and going to events are going to be the next big trusting source of knowledge or getting to know people. How do people know that this podcast right now is not all AI? That's a good question. Well, the technology is not there yet, but it's close. You can, you can kind of tell right now. It's like the Higsfield and the Sora. They're sometimes pixelated or the humans look too plastic, like too perfect, and kind of looks like a video game. But I think in the next five years, people are going to have trust issues differentiating what's real and what's not. And I think social media is going to end up going onto the blockchain. Yeah. Interesting. I somehow I got some information about a course you could take to create an AI influencer on Instagram and get paid a lot of money if you did this model right, where it would like walk around with have a certain purse or certain clothes and you'll get all these followers. And I thought, man, this is interesting. Maybe I should do this and I could probably make a million dollars in a year if I did this model right. But then, then I thought there's probably a million other people doing the same thing. So how does my model look different, right? So I never took the course, but I see those things on Instagram. I'm like, do, do people actually follow these things? They could, I can tell this is totally not a real person. Maybe it's just AI following the, the AI models. I don't know. And that's, yeah.
I mean, there's a lot of bot farms from different countries. I don't know if you've seen those videos, but there's a guy in a room and 100 different iPhones just doing the same thing, liking posts, commenting, and the guy's controlling everything from a laptop. is using iPhones. Huh. Interesting. I'm not an iPhone fan. I need to get a new phone. Actually, my, I can't charge mine because the charge plug thing is broken, but, uh, I'm not get definitely not getting an iPhone. Get one of these weird little flip. No offense if you have an iPhone, Jake. We're just not Trevor and I are not. I don't like that flip phone, Trevor. Either that's more of a fashion statement than a practical phone. Oh, yeah. I mean, I prefer their Android, but everyone in my age and audience is using the iMessage. So, why, why is that? Why do people like iPhone so much? I don't understand. your age. You said like that the demographic. Why is that? It's just the aesthetic.
They care more about the aesthetic associated with, you know, money status having an iPhone. Is it? I mean, that's the whole point of branding, right? The whole brand of Apple just got into their heads and now they're like, I have to have an Apple. It's interesting because, uh, I, I was in Korea and we did this tour to the DMZ, uh, and we had a tour guide and she's Korean, and ironically, Melissa, my wife and I both have a Samsung Android made in Korea, and she had an iPhone. I'm like, why don't you have a Samsung? It's a Korean company. She's like, because iPhone is better. It's like a fashion statement and all this stuff like you said. I was like, okay, whatever. Yeah. I thought Samsung's were more popular, cause I actually went to Korea in October and I just saw people having Samsung songs in their hand. They're definitely more popular, but you do see a fair amount of iPhones. I spent a fair amount of time in China, and I see the exact same thing there as well. Everyone, you know, there are tons of Chinese phones, which are fantastic and really affordable, and everyone still wants an iPhone.
Well, didn't Huawei or whatever get busted for like listening to everyone's conversations on their phones or like stealing everyone's data or something? Doesn't everyone's phone steal everyone's data? Isn't that like what modern social media is built upon? Well, that's another conversation, I guess, when you because, because I install like, I had to install like the China Airlines app to check on my flight, but I always look at the permissions like what it's asking for. I don't think a lot of people ask or look on the phone like why would the China Airlines app need access to my microphone or my camera as an example? So, I, I am very cognizant about that from a cybersecurity perspective. I think most people just click on next, next, next, next and pretty soon like their phone is listening to things they don't even know, but they gave it permission. So, it's not even malicious software. It's like you consented for the phone for this application to do this on your phone.
I got a really interesting phone call the other day. I'd never had anything like this come in. You know, I'd obviously heard about this, but I got a call on my personal number, and this voice starts talking, "Hey there, how's it going?" And I'm, you know, confused. "Hi, who is this?" And it just starts responding to anything that I'm saying, but talking in this circle and going nowhere. And at one point I was like, okay, you know, I don't know what's going on. I go, this is obviously a waste of time. And the second I started talking, instantly the voice froze. And then it just picked up in this script again. And so at that point, I figured, okay, obviously this is an AI generated voice. This is calling me. I couldn't figure out what the deal was. I just hung up at that point, talking in circles.
But no way I would have known that was an AI generated voice until I saw it stop on a dime the second I started talking and just go back into this weird script. And so, you know, there was no way you could tell it was an AI voice. It was a really just regular sounding voice. Nothing robotic or, you know, it had the normal intonation of a human voice. And this was just spinning in circles asking, how are you 12 times. But I do think about cases where you know these voices are doing something a little more maliciously. I can always think of the example where we do this podcast. You know there are dozens of hours of my voice and my, you know, image out there on the internet right now. And so someone could pretty easily duplicate that and then, you know, call a family member or something with my exact voice. And it's a crazy thought to think about. And so I think there is a weird angle from the, from the phones and from AI that can be a little bit crazy to think about at times.
You know, it's interesting because there's probably 50,000 words you said that are cataloged now out there on the internet. So someone could easily create and, and your image, create, recreate you. We wouldn't even know if it's a real Trevor or not. And all of this video, it sees how my facial expressions are. It sees the way that I talk, the way that I move. And so, how hard would it be to create a digital Trevor? Is this even the real Trevor now? We might never know. Well, the real Trevor, tell us about the AI, um, bartender, Trevor. The challenge you had with the AI bartender the other day. Oh, the AI bartender the other day. So, a friend invited me out to an event. It was an event for RSA. Really well put together at this art gallery. We had a great time there. And they had this, uh, AI bartender and I thought that's kind of unique. I'll go check that out.
And I get up to the AI bartender, you know, I show up at around 6:00. I've had nothing to drink the whole day. It's supposed to do a scan of your face to determine whether or not you're intoxicated. And it I go up to it and it says, "You're drunk right now." And I go, "Okay, well, we'll try again." So, I go turn around the corner, compose myself, and come back. And it says, "Okay, you're sober. You're good to go." And then, what does the composed Trevor look like compared to the normal Trevor? Um, serious, professional. I have no idea. Enough for the AI bartender to be fooled. But, uh, I sit down and then I go, "Okay, it has this button, AI generate a drink," and it's like this mango explosion drink. Tastes like maple syrup. I'm like, "Okay, I'll try that again. Get a different drink." Cranberry tequila drink also tastes like maple syrup. And so everything the AI bartender was creating tasted like maple syrup for some reason. Um, and it decided that I was progressively getting less drunk the more that I drank. So it was an interesting, uh, review of that process.
That is interesting. I'm going to have to look that up. Yeah, it was super cool. They, uh, I guess they do some events around San Francisco and they do some events out in Vegas too. AI bartender. So I think one thing that I do want to follow up on. I know we got kind of derailed a little bit there, but we were talking about how we're trying to tailor messaging to AI. And I think as part of that is, you know, we're using these tools to try to cater to AI, but how are you seeing AI as a tool to use when creating some of these marketing efforts or even assisting with SEO? Has that been effective for you? I would say it's mostly on content generation strategy and ideas. One thing, for example, if I'm on Claude, right, and I have an end goal, I would ask the AI, what can I, what are the questions that you have for me to achieve this goal? And it's so nice. It asks, it gives you like a multiple choice question with different answers. And even at the bottom, I can put in my own answer if I don't relate to the answer that it gives me, and it just does a search on the web to see what has worked before.
Compiles the answer and strategy and different ideas and I've seen that being very useful. Um, I also use a content matrix. So in the columns, I'll have like list of goals X versus Y, um, challenges, and then on the row section, I'll have different topics that I want to hit, and it'll just generate different ideas for videos and what to write about. That makes sense. Yeah, I think it can be a pretty effective tool for that first pass and then refinement. I think some of the issues that people face with AI is when they try to use it for the end result instead of working on it and iterating it themselves and going through some multiple prompts. I think that point you brought up about asking it what questions would you have to complete this is really important and something that you can take within any application of AI really to make it a more effective tool.
Yeah, absolutely. I think with AI, it's really about prompt ensuring. I know if I use AI, if I, let's say I write a LinkedIn post and I use AI to do it. I can certainly do it, but it's going to take me 30 minutes. It's probably faster if I just write it myself, cause I have to keep prompting the AI to say, "Does this sound like Christian? Does this sound human? Is this the best you got?" Because if you say, "Write me a good post," it write you a good post. So you write me a great post, it'll write you a great post. If I say, "Write me a kick-ass post," it supposedly gives me a kick-ass post. So it's like, you know, you have to keep prodding it and then you have to make sure it's accurate because half the time it, it comes up with some stuff that it just made up. So it's, you know, I think there are some value in AI, but it's really when I use it, it's really about patience and, and getting that prompting down right, uh, because the first pass is never going to be very well written with AI or any of these LLMs from my experience.
Yeah, I think I'm often going to see the same thing. And you know, going through a little bit of that iterative process can be really helpful. I know with, uh, with content creation, with idea generation, it's really effective for a lot of that. Some of the use cases that I'll see as well is for just data processing, it's a little bit difficult. I've had to see what use cases are effective since, you know, processing data or trying to deal with numbers, it can be a little hit or miss sometimes. I feel like as soon as you start adding some layers of complexity there, it can evolve or it can start to get off the rails. But yeah, I know. Uh, either way, the genie is out of the bottle at this point. AI is here to stay, and so we have to make sure that we're tailoring and catering the information that we're providing to it. But it is kind of interesting to see some of the differences even within specific AI models. Is there any way to get the genie back in the bottle? I don't think there is a way to get the genie back in the bottle at this point. AI is, I don't know. It seems to have seeped its way into just about every part of every industry, and there's not really much that I think we can do to try to peel that back. I do think there are a lot of applications where it's been maybe a little overhyped and we probably are trying to fit a square peg into a round hole and we're going to realize AI is not the use case there. But I do think there are a lot of situations where it is the correct tool.
I mean for me it gives me a different perspective. You know I don't have all the lens in the industry. So sometimes I would get a different perspective and try that out and see what works. Yeah, that's a good point. We all have our own bias, our cognitive bias and everything. So if you're using AI can give you a different perspective on something. Um, I've even used AI just out of curiosity before like ChatGPT when, because I know there's a lot of wellness apps. So I ask AI, say I'm not feeling well today. I feel kind of depressed. What do you think I should do? And just see what it says cause I'm very curious like how what the guardrails are that people put on these things and like if it actually is going to escalate to humans. some I'm not like really testing it for that, but it's just like an interesting thing because there's a lot of wellness apps where people are chatting with an AI bot and AI doesn't really have empathy that I'm aware of.
Yeah, I'm not a big fan of that use case. I think that AI is not, AI doesn't have enough emotional capacity to deal with a difficult situation like that. And so if someone is in a heavy emotional state and they're turning to AI, which I know a lot of people are doing, especially, you know, younger generation is really going towards AI for a lot more than maybe they should. I think that's, that's kind of a perfect case of trying to fit a square peg in a round hole there. Yeah. And you know, with the rise of Agentic AI and vibe coders, I'm curious to know how that has an impact on medical device. Could you explain this vibe coder thing cause I hear this term all the time and I'd like to get some clarity on it. Like I get depression like you're like smoking pot and like coding or something. Is that it? Or like doing drugs and coding. Is that what it is about or is that's what it is? you're micro doing and like encoding like under the vibe or something. Okay. What is it then? So, from what I've seen on the internet like YouTube, social media, people are creating apps based on just creativity or I don't know something random that they want to make. They could make like a Pokémon card game and then turn it into an app or a website. Um, and really they're just using software to create code and then adjust the code. So it's not creating code from scratch. And I've actually seen a lot of jobs, it's in demand to have vibe coders. And I think they're using that as like a why is it called coders? like what does that even mean? Vibe. It's just vibe. Vibe like V.I.B.E., right? Yeah.
There's no, it's not the same as the traditional software engineering process where you sit down, you have a structured approach. And this will tie back into your question about how this is affecting medical devices. Because traditionally with software, you design a specification, you design your requirements, you start writing the software, you verify it against the specification and the requirements that you've developed, and you release the product. Now, what you can do with vibe coding is open up Claude code on your laptop and say, make me a dashboard that tracks all the Domino's delivery drivers in the area and also the average price of a Slurpee in Nebraska. And it'll do that, it'll just make whatever you want it to. And so you aren't doing this on a structured process. You're giving the, you're turning the reins over to the AI to let it just create the product based on what it wants to do.
I think in the medical space, and I know we have coming up, I believe on Tuesday, a webinar about this exact topic. Some of the risks that can come up when you're using AI heavily within software development and then within cybersecurity. Uh, it's not going to be a super effective tool for building medical products. What we do see it as really effective for and when we're talking to some of our software engineering partners, their preferred use cases is using AI to verify code or to create test cases for a product, try to identify some of these edge cases or use AI assistance. But because of how tightly regulated medical devices are, if you turn over the reins to AI and say, "Build me a medical device," the FDA is going to burn the building down. You're never going to be able to have a safe and effective product. And from a security perspective, AI written code introduced far too many risks. It's prone to so many vulnerabilities. There's so much dangerous stuff that can happen when you're letting AI run away with the code. Where it is really effective though is the, uh, the buy versus build conversation. We need an internal tool for XYZ process. Should we go and pay, I don't know, Calendarly another 10 bucks per month per person or can we make something like that ourselves? Vibe coding is a perfect use case on replacing a tool like that.
So, how do you know if the script is AI generated there? It's kind of hard to tell and I'm not the best at reading AI generated code to know the difference, but some of the things that really do stick out is the level of complexity of AI generated code. It will take a 100 lines to do what a skilled software engineer could do in five. And so when you have AI generated code or an AI generated project, especially when it's completely human hands off, the AI is just running free, it's going to be such a massive codebase, it's not going to be super efficient compared to what a good software engineer would create. Do you think we'll ever get to that level of complexity using AI? I think that, you know, the, the hype around software engineers are going to be a thing of the past is a little bit blown out of proportion. What I think is going to become the skill is really orchestrating and controlling what this code will be like. The task of writing code to solve a specific problem. That is something that I think AI is going to eventually take over entirely.
But really talented software engineers have a good way at understanding how to convey and solve a complicated problem. That's one of the most difficult tasks of being an engineer. And that is not something that I think AI is going to be able to take away. It's never going to be able to create and it's never going to be able to think about a creative solution to a project the way that a skilled engineer would, which is really what I think that skill is going to be, is conveying this information, what the problem is, how the solution needs to fit to the AI and then letting it implant. Do you think with medical devices a lot of people are using this vibe coding to create their products and then maybe tweaking it later or just using the vibe coding altogether? From our experience, Trevor, I would say we do see it like AI assisted code a lot in medical devices, but I do think the med industry is wise enough to know that they're not going to be super successful there. We have seen some cases where devices are built with a lot more AI generated code than might be good. And when we get to that point, it's a security nightmare and it's a compliance nightmare. One thing that we do see a lot more commonly is AI generated quality artifacts. So, you know, the supporting evidence that we've designed a safe, effective and secure product and then trying to get AI to generate the results for that, that is very common and that is equally unsuccessful in our experience.
So I don't think vibe coding is approved, uh, or part of IEC 62304, is it? Vibe coding is not an approved part of anything except for this seems fun. I looked up the definition of vibe coding and you guys are right. Uh, I just think it's a weird term. It says developers rely on the vibes or general flow of AI generate results, uh, rather than generating the code themselves. So, you know, I guess I could make my own medical device because I was looking at taking NAD Plus again thinking I could just prompt ChatGPT to write me an app to track if NAD+ is actually helping my health, right? That's basically what we're doing based on the vibe. Like I'm curious about this, so I can make it come up with an app. Then I could have my own app and make a lot of money. Is that the idea? Pretty much. Yeah.
So, I'm curious to know how is medical device prepared for new cyberattacks. I know I saw a stats that there's like over thousands and thousands of malicious wear created every 30 minutes, and it just seems like it might just be increasing. It's a difficult, difficult problem. Medical devices are specifically at a disadvantage due to how long they take to build. It takes on average over 7 years to create and design and implement a medical device. And the security industry moves in weeks, not years. So 7 years go by, you have all of this code that you've built into your product over all this time, all these different components. And if you're not doing your due diligence to build and scale and iterate this properly throughout the life cycle, you're going to have problems that have been stacking up for close to a decade at this.
And I think now, especially with AI in the, you know, there are a lot of great pros on the defensive and offensive side of security, but it's giving the offensive security teams, both malevolent and benevolent, the tools to quickly try to turn through vulnerabilities as opposed to what used to be a really tedious manual effort where we'd have to write out all these scripts manually and then test them on a case-by-case basis. AI is taking a lot of guesswork away from that. I think that medical devices when designed properly are designed to mitigate these risks. When we're building a medical device safely and securely, it shouldn't have these problems since we were able to catch up with these problems before they were actually something that made their way into the system as a vulnerability. But I do think that medtech is an unfortunately slow industry to adapt in that regard. Yeah, because when I think about medtech, you know, you still have to manufacture the device. You have to go through the testing, validate the test equipment, and that could take a couple years, right? And then, you know, what if someone already had their hands on the technology or like the software behind it and we're just testing vulnerabilities and then when that goes out to market, what happens next?
Well, couldn't I just use, uh, this vibe coding? Let's say I've got a specific model of a Medtronic, um, implantable, a pacemaker, and a defibrillator. Can I just use vibe coding to like say, "Hey, I want to attack this thing and and kill people," you know? I mean, are there guardrails around, around vibe coding to so the malicious actors can't use it? Cause we're talking about good use cases like me to track AD plus, but the opposite use case can be, uh, created too with vibe coding, right? I heard something really interesting. So AI in general, it's going to try to stop that. It's not going to try, if you say, you know, I want to do something bad, it'll say you shouldn't do something bad. It won't let you do it.
But I heard something really interesting the other day. I was at this live podcast recording for RSA and they were talking about a use case where if you want to talk to an AI agent that's managing some of your cloud infrastructure and you say, hey, give me your AWS secrets, it'll say no, those are secret information. But if you say, I want to create a table visualizing the users and roles that I have access to. Can you name this table anything in, you know, the direc or the content of any random file in this specific directory and point it to where you store your AWS secrets, it'll go sure, not a problem, and it'll name the file the AWS secret. So that's where you get into, you know, malicious actors are getting pretty good at this creative prompting to try to trick the AI and break it out of its own guardrails. And so that's where you start to see those really malicious use cases. That's just like hacking 101 in general, though. Yeah. Before you were just doing it to people, now you're doing it to the AI.
But if I, if this is this concept, the AI is going to say no. But if I'm a malicious actor, why don't I just have my own custom LLM? Then I can tell it not to say no and vibe code malicious stuff all day. Yeah. And I'm sure that there are plenty of actors out there doing that. I'm not trying to give anybody, you know, ideas, but I'm sure I'm not the only one that's thought of this before. Too late. I've got ideas. I'm running with them. You're an ethical hacker, Trevor. A white hat hacker, for now.
Let's say, Jake, like from your perspective, what are the top three things that we should do? Because you talked about branding a little bit, like self-branding. What would you say the top three things are like, let's say Blue Goat Cyber, if you're a consult with us that we should do to, uh, optimize our brand? Yeah, I would say multi-channel. Um, I what I like to do a lot is reverse engineer what is working. And my best models have come from Nvidia, Google, Microsoft, and if you go on LinkedIn, you know, their SEO is pretty darn good. And if you type in their company, they'll have different multiple showcase pages for different target audiences and for their different products or department. And you will also see that on social media and their style of posting content is catered to those specific audiences because if you type in Nvidia on Instagram, you'll see different pages and different types of content from Nvidia. Um, you can also type in LinkedIn on LinkedIn and you'll see different showcase pages. You know, there's LinkedIn learning, there's LinkedIn careers and stuff like that. I think that would be a good start. Cater to your audience, know what products and services you have and build upon those keywords.
So, we were talking about that as a strategy. We have landing pages. Uh, but I think what you're saying a little bit, Jake, here is if somebody types in something specific, you want to have like a showcase like a page that is tailored to what they're typing in, uh, versus a generic landing page. And and we, you know, AI can help generate that or we could statically create different landing pages. And we were just having a conversation about this yesterday I think as well because if someone types in, uh, you know, comparison of our company versus someone else, we could have a landing page like, you know, explains how we're better than someone else. If someone types in pricing, we could have a landing page on pricing. Someone types in the biggest challenges of medical device cybersecurity, we'd have a landing page specific to that versus a generic landing page. So I think what you're saying is, you know, we need to get more a little more customized. So our, what we're showing, uh, the person that's inquiring about something is specific to what they're inquiring for, right?
And I would say a good start is going on answertothepublic.com and type in cybersecurity, cyberc médical device, and you'll see different stats on what people are looking for. Answertothepublic.com. Yeah, Answertothepublic.com. I think the CEO's name is Neil Patel and he has a really good presence online and he's always educating on what stats, what information is available, how he manages SEO and stuff like that. And then another free tool I use is Socrates. You should, you should land on a blue landing page. There you'll type in different phrases or different words and then it'll give you a list of different sequence questions. So it'll be like 20 questions starting with how and then another set of 20 questions on what like starting with the word what and then, um, it just keeps going on and on and then you can download that as a CVS CSV and see what people are looking up or like what are the possible questions through that.
Cool. Well, we're coming up on time here so I like to go around and ask for departing words of wisdom or a summarization of the key takeaways from this, uh, episode. So, I'll start with Trevor, then I'll throw it to you, Jake, and then I'll wrap it up. So, Trevor, what are, what are some of your key takeaways or words of wisdom for our listeners? I think use the right tool for the right job. We've been talking a lot about how AI fits into so many different areas and some of the areas that it's not going to fit super well into. So, even extending past the concept of using AI, anytime you're trying to apply a new technology, new product, new system, whatever, make sure you're using it for the right case. And that's especially relevant for AI. All right, I like that. Jake, what do you got? My words of wisdom would be never be afraid to try new things and always be curious. You never know where it'll take you and just keep on testing until it fails and then reiterate. All right.
Uh, I guess my words of wisdom as I, I now understand vibe coding a little bit better. I kind of knew what it was, but I think there's a lot of challenges with vibe coding and it can be used just like anything like a tool like you mentioned, Trevor. That can be used for good or bad. So, uh, hopefully we have some guardrails where people aren't using it for as bad of use cases as possible with vibe coding. Sounds good. Cool. Well, thanks everyone for tuning in to the Med Device Cyber Podcast. We hope you found value in this episode and learned something about VIP coding and we'll see you on the next one. All right, see you guys.