Is generative AI a security team's greatest new weapon or its biggest new vulnerability? This episode dives headfirst into the debate with two leading experts on opposite sides of the AI dragon. We first published this episode on Cloud Security Podcast, and because of the feedback we received from those diving into all things AI security, we wanted to bring it to listeners who may not have had the chance to hear it yet on this podcast. On one side, discover how to leverage and "tame" AI for your defense. Jackie Bow explains how Anthropic uses its own powerful LLM, Claude, to revolutionize threat detection and response. Learn how AI can be used to:
- Build investigation and triage tools with incredible speed.
- Break free from the "black box" of traditional security tools, offering more visibility and control.
- Creatively "hallucinate" within set boundaries to uncover investigative paths a human might miss.
- Lower the barrier to entry for security professionals, enabling them to build prototypes and tools without deep coding expertise.
On the other side, Kane Narraway provides a masterclass in threat modeling the new landscape of AI systems. He argues that while AI introduces new challenges, many are amplifications of existing SaaS risks. This conversation covers the critical aspects of securing AI, including:
- Why access, integrations, and authorization are the biggest risk factors in enterprise AI.
- How to approach threat modeling for both in-house and third-party AI tools.
- The security challenges of emerging standards like MCP (Model Context Protocol) and the importance of securing the data AI tools can access.
- The critical need for security teams to adopt AI to keep pace with modern engineering departments.
Questions covered:
00:00 Intro: Slaying or Training the AI Dragon at BSidesSF?
02:22 Meet Jackie Bow (Anthropic): Training AI for Security Defense
02:51 Meet Kane Narraway (Canva): Securing AI Systems & Facing Risks
03:49 Was Traditional Security Ops "Hot Garbage"? Setting the Scene
05:57 The Real Risks: What AI Brings to Your Organisation
06:53 AI in Action: Leveraging AI for Threat Detection & Response
07:46 AI Hallucinations: Bug, Feature, or Security Blind Spot?
08:55 Threat Modeling AI: The Core Challenges & Learnings
12:26 Getting Started: Practical AI Threat Detection First Steps
16:42 AI & Cloud: Integrating AI into Your Existing Environments
25:21 AI vs. Traditional: Is Threat Modeling Different Now?
28:34 Your First Step: Where to Begin with AI Threat Modeling?
31:59 Fun Questions & Final Thoughts on the Future of AI Security
Kane Narraway: [00:00:00] If you are working at an AI provider, you have a very different set of risks than your standard company. A lot of those risks are just SaaS risks plus, in their own way, so they just add more layers in that can have risks. They add more areas that can be compromised, and they just increase the risk threshold a little bit.
And so I wouldn't say there's anything super specific. But it just makes things worse in general, right? And so it's more effort that you have to put into securing that tool set.
Ashish Rajan: If you're looking at a threat model, it's hallucinating the right way forward. This is a fascinating conversation that I had as a panel at BSidesSF with Jackie from Anthropic as well as Kane from Canva.
We spoke about what's required in threat modeling AI and GenAI systems specifically in your organization, and how people can use hallucination to their advantage. Now, there is some weight on the whole idea of using AI for good and AI in security.
The two sides we spoke about were a bit of a debate, and it was a panel, so I appreciate your patience as we went through the [00:01:00] conversation. This is probably not a regular episode; we had shared this episode on Cloud Security Podcast as well, but this is something we wanted to bring to the AI Security Podcast audience.
So if you're someone who's interested in learning how companies like Canva are threat modeling GenAI systems and what they're finding as gaps, or the flip side of the debate for why GenAI should be allowed to hallucinate for security operations teams, this is the episode that you've been waiting for.
I hope you enjoy this conversation that I had with Jackie and Kane. I had so much fun. I can't wait to do this again at another conference panel sometime soon. I hope you enjoy this episode, and as always, if you are here for a second or third time, I would really appreciate it if you can take a few seconds to drop that follow or subscribe button.
If you are listening to us on Apple or Spotify, or if you're watching on YouTube, it definitely means a lot if you take a second to drop that subscribe or follow button to help support the work we do over here. Thank you so much for your support.
I hope you enjoyed this episode with Kane, myself and Jackie. I'll talk to you soon. Peace.
Today is about slaying dragons. I've got two esteemed guests with me. [00:02:00] Jackie, if you don't mind taking a few seconds to introduce yourself.
Jackie Bow: Sure. Happy to be here. This is my favorite conference of the year. It's not my hometown, but I've lived here for long enough that it feels like home.
So yeah, I'm Jackie Bow. I've been working in security for just about 15 years now, mostly in detection and response, but I've bounced around. Currently I am the technical lead of the threat detection engineering platform at Anthropic.
Ashish Rajan: Awesome. And Kane?
Kane Narraway: Yeah. Hello everyone. So I lead the enterprise security team at Canva.
So a lot of that is dealing with zero trust, internal endpoints, that kind of stuff. And a big focus for me the last year or two has been on securing AI tooling, LLMs, MCP, all that good stuff.
Ashish Rajan: Awesome. And as you can tell, there's a theme already forming that I wanted to tell you guys about. We have a bit of a debate in terms of, we have one slide, which is leveraging AI.
So imagine, for people who will be listening to the audio, there's a slide up there. Just pretend there's a slide where there's a dragon on top of my head with an [00:03:00] AI label on it, and we've got people who've watched the How to Train Your Dragon movie. So we've got Jackie, who's on the How to Train Your Dragon side, working beside the AI dragon and defending against this big boss dragon, AI.
And we've got the knight in shining armor, Kane, on the other side, trying to defend with a shield from the fire flames of the dragon. So people who hear the audio, definitely check out the video as well, so you get to see that on the BSides setting too.
Kane Narraway: Okay, I feel like I have more of a wizard vibe than a knight.
Ashish Rajan: Oh, okay. Fair. So you have a staff? Is that what? Yeah. Okay. We'll go for that. So it's a wizard guy. It's not a shield; it's a wizard with a staff. To set the scene, we've got the first question. I think I'm gonna start with yourself. You've had security operations experience.
Now, in terms of, I guess a lot of people here are from different backgrounds: how has traditional security operations been done?
Jackie Bow: Yeah.
Ashish Rajan: Before the leveraging AI part, if you can set the scene for people.
Jackie Bow: Of course. I think in the realm of threat detection and response, we have been locked into these monolithic tools, or SIEMs, security [00:04:00] information and event managers. Most of the time these are ones that you purchase wholesale and they're black boxes, or, if you're fortunate enough, you work at a company that has custom built their own.
I think most people have experience with things like Splunk or some of the other large SIEM providers. But actually AI has come into the picture and tainted, I think, a lot of detection and response people's view on AI, because for the past at least 10 years we've been sold this idea of AI-powered, machine-learning detection and response, XDR, and it's all trash.
It's all hot garbage.
Ashish Rajan: Wait, she has had some extra proteins. Really exciting breakfast.
Jackie Bow: I did. Yeah,
Ashish Rajan: That was our breakfast. If she's feisty, that's the reason why.
Jackie Bow: Yes, let's blame, we'll blame the protein. It's definitely the protein. But yeah, we've been sold this idea that, oh, this black box model can do detection and response for you, for the low premium of a high subscription cost to a vendor.
And I think up until this point, defenders are rightfully [00:05:00] skeptical of AI, because they're like, this just gives me more false positives.
Ashish Rajan: That's a good point, because the moment people talk about bringing AI into an organization, the first thing that comes up is hallucination.
Oh yeah, there's a lot that goes into it. Put a pin in that for a second; I'm gonna come back to Kane. Since you've been on the securing side of it, what are some of the risks being introduced by AI systems in organizations that come to top of mind for you?
Kane Narraway: Yeah, I feel like it depends on what angle you're looking at.
If you are working at an AI provider, you have a very different set of risks than your standard company. A lot of those risks are just SaaS risks plus, in their own way, so they just add more layers that can have risks. They add more areas that can be compromised, and they just increase the risk threshold a little bit.
And so I wouldn't say there's anything super specific, but it just makes things worse in general. And so it's more effort that you have to put into securing that tool set.
Ashish Rajan: And I guess securing, and detection and response, go hand in hand. People are trying to figure out, to what you were saying earlier, whether it's [00:06:00] Splunk or any other SIEM, all of us are familiar with false positives; a lot of level one analysts spend a lot of time just triaging incidents: is this even a false positive, or should I just call DEFCON 1 on this? So how can an organization leverage AI, tame that dragon, for detection and response?
Jackie Bow: Yeah, that is a great question. Also, can I pump up my talk tomorrow?
Ashish Rajan: For sure. I was gonna say please do, go and pump it up, but we need to pump up both your talks.
Jackie Bow: Yeah. Tomorrow I'll be presenting with my colleague Peter about some tools that we built actually using Claude, which is Anthropic's LLM, and Claude Code, to build tools, and then we actually do a lot of the investigation and triage using Claude. And so I think, for me, the difference right now in leveraging AI is that instead of a black box where alerts go in, something gets spit out, and you have no idea how it got there, especially with these models that have extended thinking, you can actually see what prompts go in.
You can tweak those prompts, and with the outputs you actually have more control over seeing what's happening. You can [00:07:00] also leverage things like best-of-N: you can have a model with the same prompt triage a detection N number of times, and then out of that choose the best response. So with the power for individual teams to leverage generative LLMs to do this work, there's just so much more visibility, and it's no longer that black box of, why am I getting this response?
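For listeners who want to see the shape of that best-of-N pattern, here is a minimal sketch, assuming the Anthropic Python SDK (pip install anthropic) and an ANTHROPIC_API_KEY in the environment. The prompt wording, model id, and helper names are illustrative, not Anthropic's actual internal tooling.

```python
# Minimal best-of-N triage sketch: run the same triage prompt N times,
# then have the model judge which candidate response is strongest.
# TRIAGE_PROMPT, triage_once, and best_of_n are illustrative names.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
MODEL = "claude-3-5-sonnet-latest"  # placeholder model id

TRIAGE_PROMPT = """You are a SOC triage assistant. Given the alert below,
classify it as true_positive, false_positive, or needs_human.
Explain your reasoning, then end with a line "VERDICT: <label>".

Alert:
{alert}
"""

def triage_once(alert: str) -> str:
    resp = client.messages.create(
        model=MODEL,
        max_tokens=1024,
        messages=[{"role": "user", "content": TRIAGE_PROMPT.format(alert=alert)}],
    )
    return resp.content[0].text

def best_of_n(alert: str, n: int = 5) -> str:
    # Same prompt, n independent samples; variety comes from sampling.
    candidates = [triage_once(alert) for _ in range(n)]
    numbered = "\n\n".join(f"[{i}] {c}" for i, c in enumerate(candidates))
    judge = client.messages.create(
        model=MODEL,
        max_tokens=10,
        messages=[{
            "role": "user",
            "content": "Reply with only the bracketed index of the most "
                       "accurate, best-reasoned triage below:\n\n" + numbered,
        }],
    )
    # A real implementation would parse the judge's reply defensively.
    digits = "".join(ch for ch in judge.content[0].text if ch.isdigit())
    idx = int(digits) if digits.isdigit() and int(digits) < n else 0
    return candidates[idx]
```

The point Jackie is making is visible here: the prompt, every candidate, and the selection step are all inspectable and tweakable, unlike a black-box SIEM verdict.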
Ashish Rajan: How do you balance hallucination then? Because, and maybe it's my bias, and I don't know if anyone else has this, we've been hearing about hallucination being the number one thing people talk about. Yes, you should use AI, but be careful, there'll be hallucination. So how do you balance that?
Jackie Bow: Yeah, so we'll talk about this a little bit tomorrow too, but hallucinations are basically the model being super helpful and coming up with convincing-sounding answers. And in some cases we actually wanna encourage this. We don't want to encourage the model to make up events that have happened, but we actually want the model to break out of playbook-style or rigid human thinking and have creativity, because any of us who are incident responders, [00:08:00] or who work in any open-ended investigation, or even fixing bugs, know that most of the time our most incredible ideas come when we're doing things creatively, or not the same way that we did before.
So actually encouraging models to think for themselves and hallucinate, maybe, investigative actions that you wouldn't have thought of is actually good, but you wanna box them in a little bit, right? You don't want them to come up with, oh, here are all these network logs, and they're just completely not true.
Ashish Rajan: So we want models to hallucinate.
Jackie Bow: Yeah, a bit, within boundaries. Why not let your models have a good time too?
Ashish Rajan: Fair. Wait, so we tell them, let them hallucinate on all these bugs. But to your point, I agree it might bring up some creative things that you may not have thought of. I'm with you on that one.
Yeah. In terms of building a threat model, 'cause Kane's talk is straight after this, by the way, if you wanna join in: how do you even build a threat model for something like an AI system? Or where do you even start? 'Cause I can [00:09:00] imagine doing a threat model for an AI system is not the same, and I'm sure there are a few AppSec people listening or watching this as well.
It's almost, hey, what STRIDE model should I use, or whatever else. How do you even start doing threat modeling for an AI system that you let hallucinate?
Kane Narraway: Yeah, it's an interesting question, and again, spoiler, that's a lot of what my talk goes into after this. So if you're interested, feel free to come along.
But the high level is that I like to focus on sort of two areas; whatever model you use is fine. I like to start with access. So how are you interacting with them? Is it desktops, phones? Where are you accessing them from? And then on the other end, what integrations do you have?
So what is your AI talking to? Is it talking to your Jira servers or your Salesforce or whatever? And those two things are what introduce the most risk, in my opinion, because that's increasing the surface area of the things that can go wrong. And this gets even worse when you start connecting it to customer data [00:10:00] stores and doing public customer support.
'Cause then it's not your employees, it's an unknown third party that can potentially do weird things.
Ashish Rajan: Could you expand on the whole authorization part? Because now it seems like you can't spend a day on the internet without talking about MCP and A2A and whatever else comes with it.
How does that play a role in your threat model, and the authentication and authorization part?
Kane Narraway: Yeah, it's interesting, 'cause especially with MCP, they've got a spec for authorization in the model which a lot of people have had problems with, let's say, and there's definitely a few blog posts on that that are worth reading. But there are ways you can encapsulate it as well, right?
There's a bunch of vendors; I think Cloudflare did one a few weeks ago, I think Merch has one now, where you can host them on a public service. So it's not a thing running on your workstation anymore; it's a public thing that all of your employees are accessing. And rather than having thousands and thousands of agents across all your laptops, you just have this one server that goes and connects to everything, which, from a security point of view, I prefer: threat model one [00:11:00] thing rather than a thousand different versions of open source code that people are running.
Jackie Bow: Just on MCP servers, I totally agree that having an open standard is a great first step, and then having the first pass at authorization or identity for agents and for MCP servers.
And I really love what I'm seeing coming out of Cloudflare; it encourages the maturation of this technology, especially by security practitioners, so we can actually get the standard that is the most secure.
Kane Narraway: And you've gotta start somewhere. That's the thing at the end of the day.
And even taking this beyond, into using it: what we found is that we can build sort of triage bots, using AI to then threat model our AI tools. And so building up a corpus of info that you can ingest, and then have the AI do the triage rather than doing it yourself.
Ashish Rajan: Yes.
Kane Narraway: Especially since every tool is an AI tool now, I don't wanna have to do this hundreds of times for every vendor.
Ashish Rajan: Every vendor I use is "the next-gen AI agent," that's all good. It feels like everything is a next-gen AI agent these days. I think that's an [00:12:00] interesting point about building capability as well. So I guess in your talk, and we've been talking about detection and response for infrastructure that is potentially using AI systems, or actually running AI systems in your case. Where do you even start? Especially if you already have a team; many of us may already have a security operations team. You would like to do threat detection, but sometimes you don't have the resources or the time.
Jackie Bow: So I think one of the most important things that we found is a base technical stack that really allows integration with this tooling. Basically, set up your technical stack so it is engineering-forward, because you can think of models as software engineers that you give tools to use.
And their efficacy depends on how open your stack is. So are you using common programming languages? Are you using either open source tools or well-documented tools? Are you using tools that have very good APIs? Because when you think about giving a model the ability to do work on your behalf, you actually need to give it [00:13:00] hands, or access to things, which, you know, is MCP servers and tools. And so, a good place to start if you're starting from square one, which honestly I think a lot of us dream about, coming into a company and being like, oh, I can just build this from scratch rather than, here's the legacy SIEM, good luck. But if you are in that position, really focus on tooling that is open, that has very well-documented standards.
If you can, use a SIEM that supports an open detection standard like Sigma rules; that's better than using a SIEM that has a proprietary, not well-known format. And for us, we built most of our tooling using Claude Code, which is a coding agent that is really a collaborator. So we use Claude in how we do triage and investigations, but we also use Claude to build our Terraform and our infrastructure.
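To make the open-standard point concrete, a Sigma rule is declarative data you can read, diff, and evaluate yourself. Below is a toy Python sketch of matching one selection against a log event; the rule and event are hypothetical, and real Sigma (from the SigmaHQ project) is YAML with a much richer condition grammar and field mappings.

```python
# Toy illustration of why open detection formats are inspectable:
# a Sigma-style rule reduced to a Python dict, matched against one event.
rule = {
    "title": "IAM policy attached to user",  # hypothetical example rule
    "level": "medium",
    "detection": {
        "selection": {
            "eventSource": "iam.amazonaws.com",
            "eventName": "AttachUserPolicy",
        },
        "condition": "selection",  # real Sigma supports and/or/not expressions
    },
}

def matches(event: dict, rule: dict) -> bool:
    # AND across all fields in the selection; real Sigma also handles lists,
    # wildcards, and modifiers like |contains or |endswith.
    selection = rule["detection"]["selection"]
    return all(event.get(field) == value for field, value in selection.items())

event = {
    "eventSource": "iam.amazonaws.com",
    "eventName": "AttachUserPolicy",
    "awsRegion": "us-east-1",
}
print(matches(event, rule))  # True
```

Because the logic is this transparent, a model or a human reviewer can read, generate, and critique rules directly, which is much harder against a proprietary format.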
Ashish Rajan: I don't know about you, but I personally fall in the camp of security people who don't code.
Jackie Bow: You wanna go [00:14:00] outside?
Ashish Rajan: I'm like, I heard about vibe coding. I've been hearing a lot about vibe coding the entire day.
Jackie Bow: Oh, I vibe code all day.
Ashish Rajan: Yeah, which is why it makes me nervous. Whoa, does that mean all those ideas that I've had before, when I wished I was a programmer? Yes, exactly. So is that how easy it is, even as a security person?
Jackie Bow: Yes. Okay, so I will say some of the best security people are, or were, software engineers, because in order to understand how to circumvent a system, understanding how the system works is great.
But I will say that what I have seen with coding tools, especially Claude Code, and there's tons out there, there's Copilot, Cursor, Windsurf, Lovable: these have lowered the barrier to entry from ideation to prototyping in a way that makes it so if you have these ideas, you can actually go and create a prototype relatively quickly. And we could talk about whether this is a good thing or a bad thing.
I think it's a good thing. I'm on team build-more-shit. Kane?
Ashish Rajan: Where do you stand on this one? [00:15:00]
Kane Narraway: It depends, right? And that's the typical security engineer answer right there; the consultant answer. I can drop the mic and leave now. So I think people are gonna use it whether we want them to or not, at the end of the day.
And I think you've gotta secure it in place the best you can. And so at the moment a lot of that is through education, because there's not a lot of tooling out today that kind of helps this. And there's things like YOLO mode, right? Where you can just ask Cursor to go do its thing. What are you gonna say?
Cross your fingers and hope for the best. And you add "please don't make vulnerabilities, Claude, please" at the end, and that's how you secure it. But I do think there are some things you can do. Like I said, if you are connecting to sensitive integrations, that's where you wanna put your effort, because at the end of the day you're not gonna be able to secure or threat model all of this stuff, right?
And so really just focus down on: where is the risk? What data is it ingesting? Maybe it's fine if you're connecting it to your log sources and it's just telemetry, right? But maybe if you're connecting it to your customer RDS or something, [00:16:00] then you're like, oh, now I need to put a bit more effort into securing this.
Ashish Rajan: I guess, to your point, it's focusing on data, identity, and access, rather than, hey, you can vibe code, or the AI agent can do its own thing, MCP, whatever else comes after.
Kane Narraway: Yeah, exactly. And you might have some guidance on user-provided MCP servers: don't go out to the internet and just download random ones.
It's a lot of typical stuff from package management, really. It's improving over time; it's getting there.
Ashish Rajan: Talking about MCPs for detection as well, 'cause I think Kane raised an interesting point about using the right kind of logs, which is 101 for incident response, detection, all of that.
Obviously, on the Cloud Security Podcast, people have spent years trying to learn AWS and Azure cloud logging, all of that. Now AI systems are being attached to their existing legacy systems as well. Some may have started building applications with AI from day one, AI-native if you wanna call it that, but for people who are trying to incorporate detection and response in legacy systems which are running on cloud, how does AI fit into a [00:17:00] cloud environment?
Jackie Bow: I don't think you can separate cloud from most of the modern uses of AI, because in Claude's case, you can run Claude on Bedrock, which is AWS, or you can run it on Vertex, which is GCP.
And so you can access the models that way, you can access the API; we also have a first-party API. But most of what we build is in the cloud, so it's either in GCP or AWS. And I think one of the great things, you mentioned legacy and people learning AWS: when I'm writing a detection signature, say for some random thing in AWS. AWS and GCP come out with new services all the time, and you're like, what does this log look like?
I can just ask Claude: okay, what are the fields that I should look for? Claude's like, here they are. And then I can prototype a detection signature. Especially doing detection engineering, I can throw up a PR and have a detection written in like five, ten minutes.
Ashish Rajan: Wow. So the entire lifecycle from "we have a new service" to "we now have a preventative [00:18:00] control."
And wait, how do you balance when to retire the control? There's a whole question about, yes, you built one, someone's watering the plant, someone's making sure it grows into this big tree, but at some point, hopefully not chopping down a real tree, in this context, you retire a detection.
Jackie Bow: Yeah, I would say the detection lifecycle is a great question because it's very nuanced, it's very different everywhere you go, and a lot of people have different ideas. But the way that I like to break it down is, you have alerting detections, which are things that immediately need a human to look at; it needs human intervention.
And then you should have a ton of lower-confidence signals. And one of the best things about using AI is I can spin up N number of Claude agents who can just look over all of my non-alerting detections and then surface things that are interesting. And one of the things we were actually surprised by, I don't know if people have used Claude:
Claude has a bit of a personality, and I was running Claude over a bunch of [00:19:00] detections that we had, and Claude wrote this report for me that was like, I'm seeing this alert happen a lot of times, and I worry about the security posture of a program that is still having this as a firing detection. And I was like, oh, okay.
Ashish Rajan: Are you really working? What are you doing?
Jackie Bow: I was like, we're just testing now, Claude.
Ashish Rajan: Yeah, imagine it sends an email to HR: I've mentioned this to Jackie five times, and she's not looking at it.
Jackie Bow: Why isn't she tuning this detection?
Ashish Rajan: fair. So to your point, you're able to wait. So are you using MCP connectivity to AWS or and in terms of like the.
Foundational pillar. Yeah. So what pain was talking about as well, I'm curious. Yeah,
Jackie Bow: Yeah, so you can think of MCP as an open standard for writing these connectors that you can provide to AI agents. But really, under the hood, you can break everything down to tool use. Tool use is the ability to give a model [00:20:00] actions that it wouldn't normally do, or to coax it down a path. And so for us, we use a custom tool that we wrote. We could also use an MCP server, but we just wrote a tool that does querying into our data lakes; or rather, Claude wrote a tool that queries our data lakes.
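Here is a hedged sketch of what "giving the model hands" looks like with the tool-use feature of Anthropic's Messages API: you declare a tool with a JSON schema, and the model decides when to call it. The query_data_lake tool and the SQL-over-data-lake setup are hypothetical stand-ins, not Anthropic's actual internal tool.

```python
# Sketch of tool use: declare a (hypothetical) data lake query tool,
# then check whether the model chose to call it.
import json
import anthropic

client = anthropic.Anthropic()

tools = [{
    "name": "query_data_lake",  # hypothetical tool name
    "description": "Run a read-only SQL query against the security data lake "
                   "and return matching rows as JSON.",
    "input_schema": {
        "type": "object",
        "properties": {"sql": {"type": "string", "description": "SQL to run"}},
        "required": ["sql"],
    },
}]

resp = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder model id
    max_tokens=1024,
    tools=tools,
    messages=[{"role": "user",
               "content": "Were any new IAM users created in the last 24 hours?"}],
)

for block in resp.content:
    if block.type == "tool_use":
        print("Model requested:", block.name, json.dumps(block.input))
        # Your code runs the query, then returns the rows to the model in a
        # follow-up message as a "tool_result" content block so it can answer.
```

An MCP server packages the same idea behind an open protocol; a bespoke tool like this is just the unpackaged version.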
Kane Narraway: I was gonna ask you a question, if that's okay. How much of your stuff is custom to you, like snowflake stuff, versus how much is possible for a wider audience to use?
Jackie Bow: Great question. So what we're building is SIEM-agnostic-ish. If you use a SIEM that treats how it works as a proprietary secret, I'm sorry. But I would say, with the tooling that we're using, none of this is Anthropic secret sauce; nothing is only available to us. We're using models that are currently out, and everything we're building is in cloud. We're using Postgres databases, data lakes, things that you can have in GCP, AWS, or Azure.
Ashish Rajan: 'Cause obviously we've got two camps here, for leveraging AI and securing AI as well. I'm curious: now that we know how to build, we can vibe code, let it [00:21:00] hallucinate, with interesting solutions, and hopefully we can figure out a way to not talk to a developer and still be able to figure out what the hell they're doing.
I guess my question is, in the existing market that we're in with security, AI being this big unknown on the side, and we're able to leverage something like Claude for our own detection, what's the starting point for someone to enter into this? To Kane's point, are we just able to leverage existing cloud logs, existing application logs, put them into a data lake, to what you're saying, and just go: Claude, go hallucinate on it, and hopefully it comes back?
Jackie Bow: I feel like we're stuck on the hallucination.
Ashish Rajan: It just felt really right to say that. But is that where you were going with that?
Jackie Bow: Yeah, basically we found that once you have the logs, which is the first thing that you need to do, it's then giving Claude access and tools to both query your logs and also do some processing. We have some tools that we've created that write standardized reports based on a detection signature.
And [00:22:00] there are a lot of ways that you can experiment and create different tools. I think one of the most exciting things for us is the ability to rapidly prototype and run experiments. So we can try different strategies of triage, different modalities, and we can have an idea of, okay, I wanna have a thousand Claude agents go and look over every log, versus, I want it only to look at the alerting detections and then give me really clear reports. It's just very exciting, because the ability to have an idea, prototype it, go out and test, and then get results? I've never had this kind of power before.
Ashish Rajan: It sounded like a wizard already. But to my wizard over here, on the other hand: I love the passion and energy Jackie has about how AI is amazing. Without giving up too much about your talk, what can you share about some of the things you found doing threat modeling across AI systems, both the SaaS ones and the in-house ones?
If you could share [00:23:00] that as well, because I wanna balance the picture. As much as I'm excited about AI and how we can leverage it to do amazing things, I'm curious to know what you found in the threat modeling that you did.
Kane Narraway: Yeah. So my talk is about enterprise search. If you've used Glean or Atlassian Rovo, Slack has one, every vendor has one now, basically.
And so it was looking at some of those tools and what some of the problems are. And what you find is that when you are doing these things, there are limitations. The biggest limitation is, of course, authorization. All the vendors have had to build on top of the already existing SaaS APIs, right?
Ashish Rajan: Yep.
Kane Narraway: But those SaaS APIs aren't always good, and then the layer on top of them isn't always good. And so what happens when you're building auth on top of auth? Bad things, usually. And so I find the issues are not, like, you'll read about things like prompt injection and you'll realize this is the worst thing possible and we need to look at it.
But really, that only matters if you're building sort of public-facing [00:24:00] platforms, right? The bigger risk with a lot of this stuff is: who has access to what?
Ashish Rajan: Yeah.
Kane Narraway: You're getting thousands of service accounts now to connect all of this stuff together. And so again, it's a lot of the existing stuff that just gets amped up to 11 in that regard.
Ashish Rajan: 'Cause to your point, with the threat modeling space, traditionally we have looked at, oh, what threats am I looking out for? A lot of the conversation around threat modeling AI systems goes, oh, it's a dynamic system, I don't know what Claude would say next, what's put in the chatbot or whatever.
But what you're saying also is, what's the true reality of the internals of this? Inside an organization there's a lot of SaaS which is using AI. I'm just gonna throw a few names out: Salesforce has an AI, Atlassian has an AI, Canva has an AI, everyone has a customer-facing AI, and that's obviously being used by other customers on the other side.
So you, being on the other side of this, you're obviously part of the consuming SaaS space yourself. You have your own SaaS AI that you're looking at, and Glean and everything else you mentioned as well. What's missing in the current approach [00:25:00] for threat modeling? Or is it the same way to approach AI systems as well? 'Cause a lot of people would be thinking, am I learning something completely new here,
Kane Narraway: Yeah.
Ashish Rajan: or am I able to just leverage what I know?
Kane Narraway: There's a few new bugs and things like that. There's things, like I said, like YOLO mode and stuff, which is all brand new. But again, I think if you've threat modeled a lot of SaaS tools in the past, you'll pick this up pretty quickly. I do think what we'll see will be interesting. MCP at the moment is like a layer for our LLMs that kind of sits in front of our already existing APIs. I do wonder how long it will be until we go, we just don't need those original APIs, and we just LLM it. And at that point it's much more scary, because right now MCP is basically taking the wide-sweeping prompts that I'm putting in and turning them into specific API actions that I can usually see.
And so yeah, when I can't see those things in the future, that will be really interesting.
Ashish Rajan: Yeah, for sure. So we can audit them as well? Is that what you mean?
Jackie Bow: Yeah, the thing that Kane is getting at, from the [00:26:00] incident response and investigation side, is that we need to keep logs of what is happening.
'Cause I'm very excited about, look at how much I can do, how much my work is amplified by using LLMs. But you take that for every person at a company, even people who maybe have not historically interacted with infrastructure or technical systems but are able to now. And we still have this really early, forming idea of what identity is when it comes to agentic workflows.
And so the ability to trace back where actions are coming from, I think, is gonna be more and more critical. Especially for me, if I'm looking at an incident of, why did this server go down, and it's, oh, this API call came from Bedrock, it came from an AI; that doesn't actually help me.
I need to know where it actually came from. So tracing actions, I think, is critical.
Ashish Rajan: Are you able to use AI for those, like going into rabbit holes as well?
Jackie Bow: [00:27:00] I would say AI is pretty good at going into rabbit holes, but yeah, I think you need to guide it.
Kane Narraway: It's been pretty classic as well that security teams don't really scale with engineering departments, generally.
Ashish Rajan: Yes.
Kane Narraway: And so I feel like we have to. Even if you are one of the doomers who says, no, I will do everything manually, you'll learn the hard way. If your engineers are doing it, then you are going to fall further and further behind.
Jackie Bow: I think that's such a good point. And the position I have is, we are not going to be able to keep up as defenders if we are not willing to use this technology. If we are only on the side of, oh, MCP servers are vulnerable, or let's only talk about prompt injection, which is something I feel this community sometimes gets a little bit stuck in, the hacking or the breaking,
we won't be able to scale with offensive capabilities and offensive technologies if we are just waiting and blocking ourselves on, we'll wait until it's more secure, we'll wait [00:28:00] until it's better.
Ashish Rajan: Here we are, 16 years later, still talking about cloud adoption, so some things will always be slow, I imagine. I love both perspectives, but I also want people to walk away with a starting point. They heard how passionate you are about Claude Code, and people should definitely go and try it, even if you've never coded before; open up VS Code or whatever your favorite editor is.
What's a good starting point for someone who's inspired after hearing this to start leveraging AI?
Jackie Bow: Yeah, I feel like trying out some of the coding assistants; there's a ton of resources out there. I feel like Anthropic's documentation is pretty great. No bias. Yeah, I'm completely not biased at all.
And there's a lot of YouTube videos and things to talk you through it. And if you have an idea of something that you would've liked to build, prototype it yourself. Stand up your own AWS or GCP account. And I don't recommend you do this on your corporate, like, on-air production system getting livestreamed.
Ashish Rajan: So I'm glad you mentioned it.
Jackie Bow: But yeah, do it in a sandbox environment, or if where you work provides [00:29:00] nice sandboxing, you can have a playground. But I would say definitely don't be afraid just to try things.
Ashish Rajan: So is there, I don't know, like an S3 bucket open to the internet or whatever? Is there a threat that comes to mind that's probably the easiest one to start with for this kind of vibe coding?
Jackie Bow: I think one interesting thing is just throwing logs at an LLM and asking it to come up with patterns, or if you have an idea about a detection signature. But also if you're like, I want to create a system to collect logs, or, one of the interesting ones for me, I want to run some kind of analysis over a bunch of files, and setting up the infrastructure and systems to do that.
I found Claude Code to be really helpful with that.
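If you want the smallest possible version of that "throw logs at an LLM" starting point, here is a sketch, again assuming the Anthropic Python SDK. The file name, truncation cap, and prompt are illustrative, and per Jackie's advice this belongs in a sandbox with exported sample logs, never production data.

```python
# Smallest useful experiment: hand a chunk of logs to the model and ask
# for patterns and anomalies. The file path and prompt are illustrative.
import anthropic

client = anthropic.Anthropic()

with open("sample_auth_logs.txt") as f:   # hypothetical exported log sample
    logs = f.read()[:50_000]              # crude cap to stay within context

resp = client.messages.create(
    model="claude-3-5-sonnet-latest",     # placeholder model id
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": "Here are authentication logs. Summarize the notable "
                   "patterns, then flag anything anomalous and quote the "
                   "specific lines that made you suspicious:\n\n" + logs,
    }],
)
print(resp.content[0].text)
```

From there, the scaling path Jackie describes next (design doc first, then infrastructure) is about turning this one-shot experiment into a repeatable pipeline.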
Ashish Rajan: Yeah. And how do you scale something like that? 'Cause it's one thing making one,
Jackie Bow: Yes.
Ashish Rajan: and now you're like, okay, now I need to do this across 300-plus AWS accounts or GCP accounts.
Jackie Bow: So what we start with is an idea, and [00:30:00] then we talk to Claude, and we come up with a design doc.
And in a good design doc you have components about scalability. I've found it's really collaborative: my colleagues and I will come up with these design docs, we'll iterate on them, then we'll start broad and move into the specificity of an actual technical deployment, and then we'll go into the actual vibe coding with Claude, where we take the design doc in a markdown format, drop it into a repo, and let Claude cook. With guidance.
Ashish Rajan: We're hallucinating, we're cooking. I don't know, we need to update the vocabulary a bit. Okay, on the flip side, for threat modeling, what's a good starting point for AppSec folks or people who've been on the other side, and how can they scale that as well?
Kane Narraway: Yeah. Here's my kind of opposite take, I guess, which is: if you are a cloud security engineer and you are building stuff, you should still learn to code manually, connect APIs, and do your day-to-day manually, so that you really [00:31:00] understand it, because if you use vibe coding to do that, you're stealing that learning away from yourself.
However, say you need to build a UI and you're not a front-end developer, and you just need something to show.
Jackie Bow: Say you need to make a button.
Kane Narraway: Yeah, then go nuts, vibe code that. And I feel like that way you will gain knowledge in your domain and you'll keep it. And so specifically with something like threat modeling: do loads of it, build up a big knowledge base when it comes to that stuff.
Keep doing it so that you are good at it, and then you can tell what the AI is good and bad at in that regard. And that way you can use it as a triage step and say, look, this one's high risk, it's done half the work for me, but here's a bunch of stuff I still need to do. And I needed to do the learning first to do that.
Jackie Bow: Yeah, that's such a good point. Yeah.
Ashish Rajan: I've got a couple more questions coming toward the tail end of the episode. These are the fun questions; we've been very serious so far. We're totally not having fun [00:32:00] at all. So I've got some fun questions to lighten up the mood a bit. No AI was involved with these; I didn't let it hallucinate.
First question: if AI could protect one thing in your life besides your passwords, what would you want it to guard?
Jackie Bow: So my answer, and I think Kane might have feelings about this too, is I would want something to protect my dog. I have a Pomeranian, and sometimes I can't be home with her.
So I would like something that just,
Ashish Rajan: Like a shield that walks around with her?
Jackie Bow: Yeah, and make sure she's okay and that she has enrichment time.
Ashish Rajan: Plays with her as well when she gets bored.
Kane Narraway: Fair.
Ashish Rajan: Yeah, that's a good one.
Kane Narraway: No way, I can't steal that one. I have a little eight-week-old Pomeranian.
Oh, okay. So here's a funny story, right? My friend lives in rural Australia and he has a big homestead, right? And he has these fat wombats that come and steal his strawberries every night. And he's shown me videos of them. So I think we need to make a startup for wombat detection and response or something.
Ashish Rajan: For the non-Australians, wombats are like giant [00:33:00] rats or raccoons, for lack of a better comparison; if people have not seen them, they're everywhere in Australia. Okay, good. Great answer. I've got the second question: what's one totally ridiculous thing you think AI should have security for? I think I would love for it to protect my crypto wallet, but oh, what's a ridiculous thing?
Jackie Bow: Oh man. Okay, this is, I'm just gonna say it, I have no filter at this point. I talk to AI a lot, like a therapist, or for interpersonal things. And so I would like protection for when I'm talking to LLMs about emotions, to keep that private.
Ashish Rajan: Yeah, to remind you that, hey, with everything you put in here, you may hear answers which I may be hallucinating, so don't take my advice seriously, particularly life choices.
Ashish Rajan: Fair. What about you, Kane?
Kane Narraway: It's really hard to follow the wombat answer. I feel like that one stands on its own. So, fair.
Ashish Rajan: So that's really the most ridiculous thing?
Kane Narraway: I think that's pretty ridiculous.
Ashish Rajan: Wait, so the next question is, if your AI [00:34:00] could be your spirit animal and walk around with you. Don't say wombat now.
Jackie Bow: Should we answer at the same time?
Ashish Rajan: Oh, okay. So if people are ready for it, the question here is: if AI security could be your sidekick, what kind of animal form should it take? Which animal would you pick and why? Yeah, same time; I'll let you come to the mic as well.
Jackie Bow: One, two, three: Pomeranian!
Ashish Rajan: Ah, yeah. All right, I'm gonna say a different one: Goldendoodles. Someone needs to stand up for the Goldendoodles out there.
Jackie Bow: Chicken nugget.
Ashish Rajan: But wait, why Pomeranian? Why does your sidekick need to be your best friend in life?
Jackie Bow: Yeah, I just think, since I have a Pomeranian and she's already my sidekick, yeah.
Ashish Rajan: This is just "dogs are our best friends" at this point. That should just end the show right here.
Jackie Bow: We're actually gonna do a slideshow of our dogs now.
Ashish Rajan: We've got AirDrop going on here, so she can show off this dog.
But that was the episode that we wanted to record. Thank you so much to everyone who joined us, and in the overflow rooms as well. [00:35:00] Thank you for engaging with us in the conversation. I don't know if we have been able to sway you on either side of AI security: whether you are still testing the ground, happy to tame AI alongside the passionate Jackie that we have here, or still threat modeling your way with the wizard Kane that we have.
Hopefully you can slay some AI dragons for the rest of the conference as well. But thank you so much for joining us and being part of the podcast live as well. Thank you so much.
MC: Thank you folks. That was a great conversation.
Ashish Rajan: Thanks everyone. Thank you so much for listening and watching this episode of AI Cybersecurity Podcast.
If you want to hear or watch more episodes like this, you can definitely find them on our YouTube channel for AI Cybersecurity Podcast, or on our website, www.aicybersecuritypodcast.com. And if you are interested in cloud, we also have a sister podcast called Cloud Security Podcast, where on a weekly basis we talk to cloud security practitioners and leaders who are trying to solve cloud security challenges at scale across the three most popular cloud providers.
You can find more information about Cloud Security Podcast at www.cloudsecuritypodcast.tv. Thank you again for [00:36:00] supporting us. I'll see you next time. Peace.