Should you build your own AI security tools or buy from a vendor? In this episode, Ashish Rajan and Caleb Sima dive deep into the "Build vs. Buy" debate, sparked by Google DeepMind's release of **CodeMender**, an AI agent that autonomously finds, root-causes, and patches software vulnerabilities. While building an impressive AI prototype is easy, maintaining and scaling it into a production-grade security product is "very, very difficult" and often leads to failure after 18 months of hidden costs and consistency issues. We get into the incentives driving internal "AI sprawl," where security teams build tools just to secure budget and promotions, potentially fueling an AI bubble waiting to pop. We also discuss the "overhyped" state of AI security marketing, why nobody can articulate the specific risks of "agentic AI," and the future where third-party security products use AI to automatically personalize themselves to your environment, eliminating the need for manual tuning.
Questions asked:
00:00 Introduction
01:40 DeepMind's CodeMender: Autonomously Finding & Patching Vulnerabilities
05:00 The "Build vs. Buy" Debate: Can You Just Slap an LLM on It?
06:50 The Prototype Trap: Why Internal AI Tools Fail at Scale
11:15 The "Data Lake" Argument: Can You Replace a SIEM with DIY AI?
14:30 Bank of America vs. Capital One: Are Banks Building AI Products?
18:30 The Failure of Traditional Threat Intel & Building Your Own
23:00 Perverse Incentives: Why Teams Build AI Tools for Promotions & Budget
26:30 The Coming AI Bubble Pop & The Fate of "AI Wrapper" Startups
31:30 AI Sprawl: Repeating the Mistakes of Cloud Adoption
33:15 The Frustration with "Agentic AI" Hype & Buzzwords
38:30 The Future: AI Platforms & Auto-Personalized Security Products
46:20 Secure Coding as a Black Box: The End of DevSecOps?
Caleb Sima: [00:00:00] It's really easy to build an amazing prototype. Maintaining it and scaling it is very, very difficult.
Ashish Rajan: A lot of people believe that we should be able to slap an LLM onto one of these native solutions and just call it a day.
Caleb Sima: If it's not AI related, you don't get budget. So what do people do? They just slap AI on top of everything, and they call it AI. That AI tool is the AI strategy. That's not your strategy. Can you tell me an exact risk that AI agents bring? Nobody can tell me an answer to this.
Ashish Rajan: Hello, welcome to another episode of the AI Security Podcast. We have an interesting topic today to start off with, I think, Caleb.
Caleb Sima: Every episode's an interesting topic.
Ashish Rajan: Yeah, I'm sure we'll put that in, but it is an interesting topic, because AI keeps evolving. Can we ever catch up with AI? It's almost like every episode: the last episode sounds like, oh, now that was interesting.
It's like the Apple announcements: this is the most innovative [00:01:00] Apple product ever.
Caleb Sima: Ever. And we've done it by changing the edges of the iPhone to the exact same edges of the iPhone from two years ago. Yeah.
Ashish Rajan: So, so this is the most interesting episode until the next episode comes out.
Caleb Sima: Okay,
Ashish Rajan: got it.
Got it. So that's what we're going with. But to set some context, we were talking about the CodeMender announcement, and to give the audience some context: DeepMind recently released their AI agent for code security, called CodeMender. They're pitching it as, hey, improve your code security by automatically installing patches for new software vulnerabilities.
Basically, for people who are from the application security world, it's that whole SCA, dependency, open source library, traditional conversation. Before I share my thoughts, I'd love to hear what your thoughts are on this, Caleb.
Caleb Sima: Yeah.
Ashish Rajan: I mean,
Caleb Sima: You know, first I guess we should probably explain a little bit about what it is. So [00:02:00] what DeepMind has released is not just a sort of static analysis that finds SQL injection in your code. It does do this; it does go and say, hey, this looks like SQL injection, let's propose that this needs to be fixed. But it also does two other stages, which is pretty interesting.
First, it does a root cause analysis. It doesn't just say, hey, I see SQL injection here; it may say, hey, I noticed that the root cause is that you're not just ad-hoc-ing all your SQL statements, it's all coming from a single library. And so we should do two things: we should fix your SQL injection problem here, and we should also add input validation for anything coming from that library, right? So it's much more comprehensive than just identifying a single issue. [00:03:00] It looks for root cause issues. And not only does it do that, it will of course offer the ability to patch it: it will create the patch for you, it will implement that patch for you, and it will validate and test whether the patch was correctly implemented.
So it does all of these things. This is no longer the Snyks and Fortifys of the nineties and two-thousands generation. This is now the thing we have all been waiting a long time to see, and it's fascinating. Obviously we see a lot of open source projects and initiatives that are focusing on doing these kinds of things, and we have lots of products that propose patches you can just integrate, but not many that automatically apply it, validate it, and test it.
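To make that pipeline concrete, here is a minimal sketch of the detect, root-cause, patch, validate loop Caleb describes, assuming a generic LLM completion helper. The function names and prompts are hypothetical illustrations, not CodeMender's actual API.

```python
# Hypothetical sketch of the four-stage loop described above; llm() is a
# placeholder for any completion API, not DeepMind's actual interface.
import subprocess

def llm(prompt: str) -> str:
    """Stand-in for a call to whichever LLM provider you use."""
    raise NotImplementedError("wire up a real provider here")

def find_issues(source: str) -> str:
    # Stage 1: flag candidate vulnerabilities, e.g. SQL injection sinks.
    return llm(f"List likely injection flaws in this code:\n{source}")

def root_cause(source: str, findings: str) -> str:
    # Stage 2: look past individual sinks for the shared cause, e.g. one
    # query-building library that every flagged statement flows through.
    return llm(f"Findings:\n{findings}\nWhat single root cause explains "
               f"them in this code?\n{source}")

def propose_patch(source: str, cause: str) -> str:
    # Stage 3: patch the root cause (fix the sink AND add input validation
    # at the library boundary), expressed as a unified diff.
    return llm(f"Write a unified diff fixing this root cause:\n{cause}\n"
               f"Code:\n{source}")

def validate(patch: str) -> bool:
    # Stage 4: only a patch that applies cleanly and passes the test suite
    # is surfaced for human review.
    subprocess.run(["git", "apply", "--check", "-"],
                   input=patch, text=True, check=True)
    return subprocess.run(["pytest", "-q"]).returncode == 0
```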
Ashish Rajan: A hundred percent. The only thing I'll add, to what you said, [00:04:00] is that it's almost like an AI agent for AppSec, for application security, in a way, at least that first layer. Would that be a fair assessment?
Caleb Sima: No, I think it's software, it's software vulnerabilities, not just application-level vulnerabilities. It's buffer overflows, it's app-level issues; all of those are included.
Ashish Rajan: Buffer overflows and stuff as well. I didn't read that part, but I did read that it can do fuzzing, it can do dynamic scanning, and it has a source code debugger.
But what was interesting for me, beyond it being an interesting agent to have, is that it has sprung up the conversation around native solutions from your providers. If your LLM providers start giving you capabilities, say in this case a software security AI agent, which you can plug and play, which can reason about non-trivial issues, and which, from what I [00:05:00] read, can even write patches, it opens up that build versus buy debate that we've had traditionally in security for a long time. In a lot of the conversations that I've had with people in that advisory I run with Tech Riot, a lot of people believe that we should be able to slap an LLM onto one of these native solutions and just call it a day.
And that becomes, yeah, our replacement for a SOC, or application security, or vulnerability management, or whatever reasoning you wanna put in there, because these days LLMs are smart. The cost per token is reducing, well, not completely, but it is heading in that direction where the cost would be considerably lower.
What are your thoughts, considering you actually work with a lot of people who are building all these AI security companies out there? How do you talk about build versus buy in this particular conversation, when everyone's drinking the Kool-Aid from Sam Altman that it's so simple: I have an AI agent builder for you guys, go build your own [00:06:00] agents, you guys got this. What are your thoughts on this particular topic of build versus buy when it comes to the whole AI landscape?
Caleb Sima: You know, okay, so switching off the topic of the code analysis perspective, just in general, I think build versus buy is going to be a thing that happens in every single phase of new technology.
And again, many listeners probably already remember how I make a lot of analogies to cloud. At the beginning of cloud, all the big enterprises were like, well, we're just gonna host our own internal version of cloud, right?
Yeah. They kind of went through this process of doing it and managing it themselves. I think with AI you're going to see the exact same thing, which is that everyone is going to say, oh, well, I can just connect [00:07:00] MCP to my whatever X project or technology, Visual Studio, and then I can just do this myself. And they're not wrong.
You can absolutely do this yourself. Let's take our source code example: I can absolutely just connect an LLM to Visual Studio, or whatever AI agent, and say, look for SQL injection, identify it. And it will find it. And by the way, it will look pretty good. You're gonna look through this and go, oh, this is pretty accurate; 80 to 90% of what you find is gonna be pretty good. And then what's gonna happen is you're gonna start running into: oh, let's try a bigger code base, let's try a more complex code base. And then you start noticing the problems. Oh, well, there's a lot of false positives here. Oh, there's a lot of false negatives here. Oh, well, if I run it once I find these vulns, and if I run it again it finds a different set of vulns, and if I run it again it finds a different set of [00:08:00] vulns. Okay, so there's no consistency. Well, I should be able to run a code vulnerability scanner 50 times on the same code base, assuming the code base doesn't change, and get the same results all 50 times. But that is not what happens when you do this with AI out of the box. Yeah. And I think this is the problem that ends up coming into play: it's really easy to build an amazing prototype. You can build an amazing prototype, an amazing V1, I wouldn't even call it a V1, let's just call it a prototype.
And it's phenomenal, and there are some just crazy things you can do with it, and it's so easy. And then what you start doing is you start building scaffolding around this to make it a real product, right? And then you start changing your data sources; in our example, it's different and bigger code bases. And then you start running into, oh wait a minute, a little tweak here, a little tweak there. And then you start running into, well, if I run it more than once, I don't get the same results. And then you start running into this consistency problem, and then you start [00:09:00] realizing that this is not production quality.
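That run-to-run consistency gap is easy to measure. Here is a minimal sketch of the check Caleb describes, assuming a hypothetical scan() wrapper around whatever LLM-backed scanner you built: run it many times on an unchanged code base and count how many findings are stable versus flaky.

```python
# Hypothetical consistency harness: scan() stands in for your own
# LLM-backed scanner and returns finding IDs like "sqli:db.py:42".
from collections import Counter

def scan(source: str) -> frozenset[str]:
    raise NotImplementedError("call your LLM-backed scanner here")

def consistency_report(source: str, runs: int = 50) -> None:
    results = [scan(source) for _ in range(runs)]
    tally = Counter(f for findings in results for f in findings)
    stable = [f for f, n in tally.items() if n == runs]  # found every run
    flaky = [f for f, n in tally.items() if n < runs]    # found only sometimes
    print(f"{len(stable)} stable findings, {len(flaky)} flaky findings")
    # A deterministic scanner reports zero flaky findings on unchanged
    # code; an out-of-the-box LLM scanner usually does not.
```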
And over all this time, there's the amount of time it takes you to learn about AI, to figure out the tweaking, then figure out the data problem, then figure out the consistency problem, then figure out the scalability. And then, by the way, when you start running into real problems, you start realizing there's a cost; actually, AI is really expensive.
Yeah. And then, right before you do a V1, you're like: this is not production. It will take you 18 months to go through this period of figuring all of this out. And so I think what you're gonna have is a lot of people who are gonna say, we can do this, because every single CEO and executive is gonna be like, go do something with AI.
So every internal team is gonna build their own AI versions of every vendor product, right? And then vendors are gonna see a dip. Sales will get [00:10:00] lost, revenue will go down, because people are all saying, oh, I can just do this with AI. Yeah, and then it will come back around, because people will figure out: just like every other piece of damn software, building it is easy in a prototype.
Maintaining it and scaling it is very, very difficult.
Ashish Rajan: That's where the magic is.
Caleb Sima: And so, look, I think with all of these products you see the same thing. Building software is not easy, and AI is nowhere near, at least in its current capability, being able to do that by itself. Now, in five years, do I think AI can iterate on itself to understand its own consistency problems, its own scalability problems, its own maintenance and feature-update problems? Plausible, right?
It's plausible that in five years maybe that problem might be solvable. But [00:11:00] today, definitely not.
Ashish Rajan: So I'll add another flavor to this. Every time I've had the build versus buy conversation, I try to explain what's involved, to your point. One side of the argument is: hey, I understand that I have an internal data lake that my organization has built, because we've been doing AI and ML for a long time. I can start pumping my security logs into my data lake, insert whatever cloud or non-cloud provider here, they give me security capability as well, but I should be able to just extend it. So I could potentially just replace my SIEM with a data lake and start putting my detections into that data lake, and that becomes my quote-unquote detection engineering. I could also take that a step further and go: I have the ability to understand the SQL injection cases that are relevant for me personally, based on all the pen tests that I've had for the past 10, 15 years. [00:12:00] I've got the documentation; I feed that into a knowledge source, and that becomes my internal knowledge source, or RAG.
You hook that up with a RAG and an LLM, and maybe that becomes my source of at least an understanding of everything I've had so far, with the right context of the application, to be able to identify what's happened in the past. And then my pen testing is that supplementary thing: each year, whoever the pen testing company is comes in, goes through business logic flaws, finds all these true positives for me, and I keep feeding that to my LLM.
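As a rough illustration of that knowledge-source idea, here is a minimal sketch of indexing past pen-test findings for retrieval. The toy embed() function and the sample findings are hypothetical stand-ins for a real embedding API and your own report archive.

```python
import math

def embed(text: str) -> list[float]:
    # Toy stand-in embedding: hash character bigrams into a 64-dim vector.
    # In practice you'd call a real embedding model here.
    vec = [0.0] * 64
    for a, b in zip(text, text[1:]):
        vec[(ord(a) * 31 + ord(b)) % 64] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# One entry per historical finding, e.g. from 10-15 years of reports.
findings = [
    "2019 pentest: IDOR in payments API /v2/transfer, true positive",
    "2021 pentest: SQLi in legacy reporting app, intranet-only",
]
index = [(f, embed(f)) for f in findings]

def retrieve(query: str, k: int = 3) -> list[str]:
    """Return the k most similar past findings to ground an LLM prompt."""
    q = embed(query)
    ranked = sorted(index, key=lambda fv: -cosine(q, fv[1]))
    return [f for f, _ in ranked[:k]]

print(retrieve("SQL injection in reporting service"))
```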
And so I guess that is one side. The other is where a lot of people have gone down the path of, and I agree with what you said, you are almost becoming a data scientist at this point. If you're dealing with data lakes and configurations and RAG: how do I even normalize my data?
How do I build a data pipeline? It's not a cybersecurity conversation at that point. You're talking data architecture, data flow, [00:13:00] very different kinds of skill sets. Unless people start hiring data-specialized people in their orgs, in their teams, build versus buy, in my mind, is a half-baked thought. Because even if you have the pipeline, you may not have the right people who understand how to transform this existing data into a cybersecurity problem. The data person may not have the cybersecurity context, and you don't have the data context.
You have to work with them to even make this into something useful. But even if you build this, and even if we get to a point where the AI model is improving itself, we'd still be at a place where, let's just say, would a financial organization start building an AI product? Because that's where you're heading.
If you start building this, aren't you almost building an AI product inside your organization for security? Which I do not see happening. Why would, [00:14:00] let's just say, I don't know, Bank of America spend the engineering money to build an AI product inside? They're not gonna put down their revenue work.
Like, why are we turning into, I don't know, a cybersecurity company? Or maybe they would.
Caleb Sima: Well, you know, first, okay, so there's a lot here to deconstruct. First of all, let's take any decent-sized enterprise; Bank of America is what you brought up. Number one, you are right that you need a lot more in this AI world. You need a lot more quote-unquote specialists, right? Yeah. People who understand data pipelines, people who understand machine learning, people who understand a lot of these avenues. And by the way, in any bank, Bank of America included, there was lots of investment in these things prior to AI and transformers, right?
So tons of people are already well embedded in this. Now, because of AI, we're obviously seeing an explosion of more and more people getting involved in this space, [00:15:00] and we are absolutely going to have data engineers and data scientists all become almost a part of software engineering, right?
In fact, if you're a software engineer in the next five years and you don't understand data science, and you don't understand data ingestion and pipelining, then you are probably not a very good software engineer, would be my guess. Yes. So this is gonna become a well-adopted part of the culture inside of Bank of America?
Absolutely. They are building their own versions. They need to build pipelines. They need to build this because they have tons and tons of confidential data inside of Bank of America that they want to make use of. So they are going to ingest that data, understand that data, and build products off of that data, whether that comes out as products that consumers use, or internal processes that the bank uses, or, Ashish, to your point, things the security team itself will want to create [00:16:00] off of that data. And for sure, the one thing that is amazing about AI is the ability for us to create software applications quickly; being able to prototype or build something is really awesome. So we will see the cybersecurity team at Bank of America, and I'm sure they've already been doing this, building their own V1s of cybersecurity-type prototypes and products that work off of their current data. The big question is: are they building entire brand-new SIEMs? Are they building entire static code analyzers? Are they building products in order to replace vendors, or are they building more proprietary, unique products focused on the value propositions or problems that are unique to Bank of America? That is a good question. [00:17:00]
Ashish Rajan: Actually, we should call out: I have no idea if that's what Bank of America is doing. In case they do want to come and talk, we are more than happy to welcome them.
Caleb Sima: Purely just saying, yeah, Bank of America as an example. I could tell you from my experience at Capital One, all of that was...
Ashish Rajan: Oh yeah, fair, sure. So there is that, but I agree with you, because if you were to take a step back, build versus buy, to what you said, has been a conversation that's been there for ages, just in tech in general, not just cybersecurity.
And no matter how far back you go, cloud, on-premise, they've all had this conversation with vendors: specifically, why am I buying from a vendor when I can build this internally, because I have an engineering team or whatever? I think it's worthwhile calling out that as we've gone into this space with AI and people building data lakes, there is a breadth of organizations out there. I've spoken to a few scale-ups who have a very small team, and the choice they're making, to what you said, because execs are saying, hey, we need more AI, is: should I hire a new person [00:18:00] for $200-300K, or should I just buy a product, which is probably half the price and mostly there? So I use my existing team, give them more AI capability, and use that as a way to fast-forward, because they probably never worked on building an ML pipeline or never got the funding for it.
There is that side as well, which I want to call out. But focusing just on the ones which have the means and the capability, the budget, and already had the pipeline: even in that context, to what you said, and in a lot of conversations I've had, a lot of people are still only building the augmentation layer.
Like, you can't build threat intel. You can subscribe to threat intel, but you're not gonna build threat intel yourself. It almost sounds silly that you're gonna build your own threat intel service.
Caleb Sima: You know, it's funny, oddly enough, I used to think that too. And then one of our portfolio companies in my fund, we [00:19:00] were like, oh, we need threat intel to build the product. So they went and signed up for the best of the best of threat intel, and then learned that all of it is pointless. It's terrible.
Ashish Rajan: Really? Threat intel is terrible?
Caleb Sima: It's really bad.
Ashish Rajan: Wait, as in the pre-AI version, or threat intel in general?
Caleb Sima: No, no. Like today.
Oh yeah. They ended up having to build their own threat intel. And then they started finding out, because all these AI startups like to talk to each other in the industry, right? Yeah. They started finding out that everyone else is complaining about the same thing.
And I'm only calling out this specific example because it is a true example. You find out threat intel today is not very good at all. And so there are people who are like, oh, we gotta build our own version of threat intel. Which they did, right? They actually created their own threat intel service.
Yeah, [00:20:00] because it was one of the biggest pains. And actually, what's really interesting, the reason you bring up threat intel also happens to coincide with this, is that AI requires knowledge and context. Yeah, right? And so threat intel for cybersecurity companies becomes much more relevant in an AI world.
Because you need this understanding of what the attackers are doing, what's going on in the ecosystem, how the economics are working, all this other stuff. And you need that in the right format, with the right capability and delivery mechanism, to be relevant for security AI systems to make decisions on.
Yeah, and so threat intel actually rises to the top as one of the first, most relevant and required things for AI security companies.
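As a small illustration of "the right format and delivery mechanism," here is a hypothetical sketch of threat intel structured as machine-usable context rather than prose reports; the schema is invented purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class IntelRecord:
    actor: str           # who is attacking
    technique: str       # how, e.g. an ATT&CK technique ID
    targets: list[str]   # what they go after, e.g. "payments", "SaaS tokens"
    freshness_days: int  # how recent the observation is

def as_context(records: list[IntelRecord], max_age_days: int = 30) -> str:
    """Render fresh intel as a compact block an AI system can condition on."""
    fresh = [r for r in records if r.freshness_days <= max_age_days]
    return "\n".join(
        f"- {r.actor} using {r.technique} against {', '.join(r.targets)}"
        for r in fresh
    )

feed = [IntelRecord("FIN-example", "T1566 phishing", ["SaaS tokens"], 7)]
print(as_context(feed))  # prepend to a security agent's prompt
```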
Ashish Rajan: So is it easy to build threat intel?
Caleb Sima: I would say no, but I would say it's easier than it was before AI. Going back to this buy versus build aspect: vendors themselves can now create their own version of threat intel, more custom-tailored for their solutions and what they need, in an easier way than they could have ever done before AI.
Ashish Rajan: Okay, I see what you mean. And maybe, correct me if I'm going down the wrong path, threat intel in general has been more like news: you get everything. It's not like you get only cloud, or only AppSec, or only whatever your problem area is. You get the whole lot, and that's understandable from the licensing model they would've had. They don't want to sell it to you piece by piece. You take the entire thing or you get nothing.
Caleb Sima: And it is a conglomerate of everything, by the way, at whatever level of detail they have decided to format it at,
Ashish Rajan: right? Yeah. So they've normalized it with whatever they think is the right way to do it. [00:22:00] Because people would assume that I can just plug into CVEs and build my own. But coming back to the whole build versus buy, before I go down that threat intel rabbit hole, I guess where I was going with that was: the point being, if people are thinking of building this, yes, the data capability is required in your organization. You may not be Bank of America or Capital One, but even if you are a scale-up trying to do this yourself, say you have, I don't know, the Netflix engineering team, again, this is not an example from Netflix, just saying you're someone who has engineering capability in your team to do this, you would still need to plug this gap of data somehow to be able to build it into a product. And then maintain it, and hopefully have redundancy in case the person who figured this out ever leaves.
Caleb Sima: Well, I mean, that's obviously the problem with any product: who's gonna maintain it, [00:23:00] scale it, and keep it up to date with features? You see this everywhere; I saw this at all my companies. But first of all, in build versus buy, let's talk about the perverse incentive structure, and I'll segue into that conversation in just a minute. Yep.
The perverse incentive structure. We have to address this. Today, the incentive structure is that every company believes they need to be an AI company, because that is where the money is going. Yeah. So if I want money, I need to be AI, and so CEOs and exec staff on down tell the whole company: if it's not AI related, you don't get budget.
That's right. Right? Yep. That's the case. So what do people do? They just slap AI on top of everything and they call it AI, right? AI finance, AI marketing, AI people, AI security. Everything gets AI, [00:24:00] because that's how you get budgeting. And by the way, from the leaders of those teams, that message also conveys: hey, we need to do more with less by using AI.
Yeah. And we will promote, we will recognize, and we will fund the things where you are doing AI-related work. You need to be up to speed and up to date, and if you can't learn about AI, or understand it, or prove that you're doing something with AI, then you're getting fired. Right? Let's just say that is the way, right now, all companies, quote-unquote all companies, are acting.
Ashish Rajan: Yeah.
Caleb Sima: Okay. So now look at the security team. Let's just take the security team. The security team is clearly going to go, well, fuck yeah. So now what I'm going to do is pick any project that I can, and I'm gonna do AI on it. I'm gonna make slides, I'm gonna show prototypes, and then I'm gonna get promoted and I'm gonna get funded.
Right? Yeah. And even on the CISO side, the CISO can now go to the exec staff and say: look at the five internal AI security projects; what we're doing is saving money by replacing [00:25:00] X vendor and Y vendor by building internally, and our team is AI-applied, right? Yeah. So now our teams are AI experts, and these guys are gonna go and plug in MCP, plug it into your SIEM, and they're gonna say, look:
We have an AI detection program, an AI SOC, and we can do all of this; we don't have to use any of these vendors anymore. And by the way, what's gonna happen is the CISO is gonna get more budget. The CEO's gonna go, oh, that's awesome, I'll actually give you more budget and more head count so that you can build more AI things.
And so this is all gonna happen internally, and everyone is rising up that ladder. Yeah. And then, going back to our internal version of it, they're gonna go through and make amazing progress on their prototypes, V1s, V2 prototypes, and it's gonna look great.
They're gonna say no to vendors. They're gonna say, we can just MCP this, we can AI this, blah, blah, blah. [00:26:00] And they're gonna spend their money on AI fundamentals: hey, we're gonna become AI engineers now, we want pipelines, we want LangChain, we're gonna pay for that, but we're not paying for anything else.
And then over a period of a year, they're gonna put this thing out, they're gonna deploy it, they're gonna continue to get budget for it, and they're gonna find out the shit just doesn't work. And by the way, it just doesn't. And in the meantime, a lot of these vendors, and this is my prediction, are gonna see a dip, because people are all gonna say they can AI it.
But then after they've actually tried to produce things... We've seen this even with the very early adopters of this stuff. We have a couple of companies in the portfolio that talked to design partners at the very beginning, and the design partners were like, no, we can just do this ourselves.
Ashish Rajan: I see where you're going.
Caleb Sima: And then they came back like a year later and said, we can't do this. And they are the most [00:27:00] cutting edge; they are the ones actually doing it. And then we get into their environment, right? This is the funny part. We get into their environment, and not only do we see what they're doing and the things that they've built, but we can see it is flat-out failing. It's not even close to the progress that we've made. And the thing is, this is the same story that played out everywhere else, right? Yeah. Web application development, cloud; it is just the same damn pattern, and you're gonna see this exact same thing play out.
And what really kind of worries me, at a higher, maybe macroeconomic, scale, is that that bubble is going to pop. That AI bubble is going to pop, right? Yep. There's a lot of work going into this, and you can't just tell everyone to learn about AI, and build all these things, and invest money, and somehow have that all work out, because you're already seeing that these products aren't there. And even if they do, by the way, Ashish, get it to a point where it can be productized,
Yeah, they're finding out the costs are really high. That's right. So there is this thing coming down the road where that will go down, and the AI money will go down with it, and then the economy will go down with it. But in our internal world, we are seeing that security thing play out. We're seeing that already happening in these organizations.
Ashish Rajan: I've seen the same thing with so many people I spoke to who wanted to build this on their own, and that's one of the reasons I wanted to have this conversation. When I spoke to them a year, a year and a half ago, it was almost like everyone was so confident that they could build this themselves, and rightly so.
It does give you that feeling that you can do this on your own when you spend enough time on it. But just like anything else, this is not the only thing you're working on, right? And there's the incentive. I [00:29:00] love the example you gave of how it trickles down from the top: hey, I need to AI everything to get the funding.
And it goes all the way to the point that leaders start putting quote-unquote AI initiatives on individual contributors: hey, by the way, security engineers, security architects, if you use AI, that's one extra point for your promotion next year. Yeah. And suddenly people figure out how to game the system; it's like the vibe coding episode we did, where the AI just figures out a way to game the system. All the humans are gonna find a way to game the system too, because everyone wants a promotion, everyone wants that extra salary bump, and no one wants to be fired. We've been running a few workshops with the advisory, and I was so surprised: a lot of people actually showed worry that they're gonna lose their jobs. That's why they were attending the workshop; the only reason they wanted to attend, hey, how do I enable my team to use AI, how do I enable AI integration into [00:30:00] applications, was literally because they're worried they're gonna lose their jobs, with all the layoffs happening everywhere.
It's real. And I'm just going: wait, so fear is driving people into spending thousands of dollars on themselves so they don't get fired tomorrow. So, a hundred percent agreed on that; I've seen patterns of it as well.
I also find that as you go down this rabbit hole of building it yourself and come back a year later, there's also the people who are pretending to be AI services or AI products. I think they're gonna struggle, because they probably would not have found customers, or only an initial spike of customers.
Sooner or later there will be an equalizer: a quote-unquote standard for what's expected from an AI capability perspective in an organization, from a product. At the moment, with a basic cybersecurity product, you kind of know what to expect; you expect it to do whatever the top three things are.
We haven't defined that for AI security products today, [00:31:00] maybe because it's an evolving problem. But tomorrow, when we do have that, to your point about that dip, when all these people suddenly go out to the market to buy a product for the problem they thought they could solve themselves, they'll start finding out what's real value versus what's an inflated-value service or offering.
Is it enough for...
Caleb Sima: Agreed. There's a maturity model of our adoption and understanding of AI that allows us to refine the expectations we have, right? Yeah. Today, again, I'll just point to cloud: when cloud first came out, everyone was just spinning up anything and everything all over the place, and then there was this whole cloud mess, right? If you remember that.
Ashish Rajan: Cloud sprawl. Yeah,
Caleb Sima: Sprawl, yes, the cloud sprawl. Because everyone... And then clearly people figured out, hey, we're more mature now; it's not just about spinning crap up for the sake of it. We need to formalize it. We need pipelines.
We need to pay attention to FinOps. We need to understand these costs. And also, cloud providers became way more advanced: they started seeing all of the random stuff companies spun up, and then they just built their own versions of those services for people to use instead of having to build it themselves.
And similarly with AI: if you make something easy and accessible, you're gonna see the sprawl. Everyone and their brother is gonna be coding these prototype apps that probably don't work very well, and they're gonna be everywhere, all over the place. You're gonna get your AI sprawl, and then you're gonna get your refinement, and then you're gonna get your service providers offering the better versions of these.
And then you're gonna get a good adjustment as to what's going on.
Ashish Rajan: Yeah. And we're talking about all of this while we don't have a compliance standard yet. Wait till the compliance standard comes out; then there'll be vendors going, let me help you get compliant with your AI applications.
Caleb Sima: Yeah, yeah.
Ashish Rajan: CIS 1.2 or [00:33:00] 3.1, whatever comes out, I guess.
Caleb Sima: Yes. And if I could stand on my soapbox: AI compliance and security has been frustrating. It's been frustrating because I feel that, for some reason, our industry, security people, want to overcomplicate shit and overhype shit to a degree that is unhealthy for everything we do.
Hey, in AI we need to name 20 different AI attacks, when in reality there aren't 20 different AI attacks; there are like three that are fundamentally different from each other. And, oh, this is an MCP exploit, and you look at it and it's not an MCP exploit, right? These kinds of things are damaging to our industry.
I think it hurts and confuses defenders and our organizations, the fact that we overhype, overcomplicate, and add new buzzwords and new acronyms for every [00:34:00] single damn thing that comes out. It just drives me insane.
Ashish Rajan: Isn't that the reason why we are all talking about agentic without being agentic?
Caleb Sima: Yeah. And, oh, we need agentic security. Ashish, can I tell you something? I'm gonna rant a bit here. So I'm a VC, and I get startups pitching me all the time. The number one topic, of course, is agentic security: oh, we're gonna do security for AI agents. And all I do is ask one simple question: okay, can you tell me an exact risk that AI agents bring? And nobody can tell me an answer. They go all over the place, or they go very vague: oh, well, AI, we don't know how to control them, we don't know their identity, we don't know... And I'm like, okay, tell me what that looks like if I'm a CISO [00:35:00] looking at the product dashboard. You tell me, well, you don't know about the AI identity. What do I see as the critical risk in my life? Nobody can answer that. Nobody. And I'm talking like 10, 15 of these guys, who are, by the way, I might add, really smart founders, really smart entrepreneurs, right? Yeah. But it's just this hype bubble of: I have to be in AI, I can get funding, I can get 10, 12, 30 million dollars in funding just by saying I'm an AI agent security platform. Yeah. This is bad. This is bad for our industry.
Ashish Rajan: And I guess, to your point, it's: we'll raise the money, so we'll figure it out as we go, because that's what entrepreneurs do.
Caleb Sima: We'll just figure it out. That's exactly right. Because when I ask, it really comes down to: we don't know yet. We're just gonna learn, and we need the money to learn. And I'm like, this is exactly the whole "to a hammer, everything looks like a nail." That is what's going on right now. Yeah. And it's really bad.
Ashish Rajan: Yeah. And I'm in agreement with you on [00:36:00] this, also because it's confusing the industry about what they should be focusing on.
And I'm not saying that the work people have done at CSA or OWASP is not required. It's still required. But it dilutes the water so much that we've gone away from the definition of AI, and away from the whole ROI conversation. To your point: why am I buying a cybersecurity product if, from a value-add perspective, all I get is "I have something for AI security"? I don't know what it is, but it definitely ticks a box. It's in my budget.
Caleb Sima: It's AI security, and I can talk buzzwords about it.
Ashish Rajan: Yeah, a hundred percent. And I won't name CISOs here, but there have been a few people I spoke to where, when I ask, what are you guys using from AI, they all pointed to an AI product that they have bought. Which is why I made a LinkedIn post yesterday about how an AI tool becomes the strategy. That AI tool is the AI strategy. [00:37:00] And I'm like, that's not your strategy.
Caleb Sima: That's just a tool. Yeah. But they don't understand, because number one, they don't know enough about AI or what the risks really are to develop a strategy on their own without a vendor telling them.
And then number two, the vendor will pitch to them: this solves your problem. Yeah. So that then becomes the checkbox to solve the problem. And, you know, the one thing I will say I'm proud of that we did in CSA is that we kept things very, very specific: the approach we want in AI security is aimed only at the enterprise and practical things.
Practically speaking, how do you make decisions about the risk of an AI model, whether you should accept it or not accept it, right? Practically speaking, what are the actual risks you need to worry about from a threat perspective? When you look at your context: is it SQL injection? No, because that's not AI.
Is it prompt injection? Yes, it is. Is it data poisoning? Probably not, would be my guess. So [00:38:00] you've got at least reasonable, pragmatic decision points that CSA has delivered. Understand the technology first has been their approach. Yeah. And then, once you understand the technology, here are the pragmatic, practical things you need to worry about.
I think we've done a really good job at that.
Ashish Rajan: Yeah, kudos to the work that both CSA and OWASP have done; I would not discredit that. Where I'm going with this is, to what you were saying earlier, it's gotten diluted to the point that people have used it as a way to get funding without actual value being brought into the ecosystem.
And maybe, to your point, when the bubble pops, all this devaluation, or whatever you wanna call it, happens. Another prediction I've been having, and I've felt signals of this in a lot of conversations, is that we'd reach a point where, the same way organizations built a cloud platform, there would be an AI platform built [00:39:00] in most organizations, where they'd have a standard for security, a standard for observability and auditability, a standard for all of these things. It would be really interesting whether that platform ends up being a third party or an internal platform. My money is on it being an internal platform, because you become the context or the knowledge source, and that provider or product becomes the enabler with AI, if that makes sense.
Because you may not have a data scientist in-house, but they have a very specialized, let's just say, cloud security or AI security product that they've worked on. Now you just take your internal AI knowledge, context, and source and match that with what they have, without giving them anything sensitive. That whole data conversation would just go out of the window.
I think that's where we are slowly moving towards. But on a final note for build versus buy: do you feel, today, as it stands, and obviously I think I know the answer, but for [00:40:00] CISOs who are still on that fence of, you know what, great, I listened to you guys, but my team is pretty awesome, what if they wanted to do it anyway, given the reality we just spoke about? Do you reckon this will change in the next few years, so people should keep an eye on it even if today it's not possible? Or do you feel we're better off just focusing on building the AI talent in our organizations? Because on the flip side, if you buy, you still need to be able to understand these products when they come.
We'll still have to customize them. We'll still probably have to put some kind of filter on them, the Bank of America filter, I'm gonna use that example again, as people have done traditionally: hey, I buy from a vendor, and I put my Bank of America filter on it. Out of these five findings, which according to the product are all high, I personally think this one is low because of the context I have from the inside.
That part has always been custom. And sorry, I'm saying a lot of things, but what I'm trying to get to is this: [00:41:00] today, the customization, the build part, already happens, where someone in my team or somewhere in the organization adds that augmentation layer to an existing product.
AI or non-AI doesn't really matter, because that product does not have the context for what's low, medium, or high for me personally, right? I already do that customization today. My feeling is that's the part that we as an organization would continue to augment with AI.
The other side, though, the product side, to your point, building a threat intel, building an AppSec, building a cloud sec, whatever the thing may be, would continue to be a third-party supplier, unless the cloud providers or the native LLM providers become that provider, where you can build them natively.
That's kind of where my thinking is going with build and buy. I don't know where you are. If you were to look forward, where is that going?
Caleb Sima: I actually think the part that you're talking about is the part that AI is [00:42:00] quite good at. So what we're going to see is products out of the box replacing the part you're talking about: I need the company context, the custom context of what makes my environment unique, applied to the out-of-the-box capabilities to make it relevant for me. Today, and in the past, that required humans, process, and tribal knowledge to achieve. I'll give you an example.
We have a company in detection engineering. When you write a detection, however you do it, you need to understand the context of your company. You need to know: what's the difference between a dev environment and a prod environment? What's the difference between a crown jewel and not a crown jewel?
What's the difference between an attack that's relevant on that system or in that environment versus not relevant to that [00:43:00] system or environment? What's the context of whether that attack is even important enough for us to spend the time to create a detection for it, so that it doesn't have false positives? This requires a lot of tribal knowledge, a lot of people, a lot of internal knowledge, and that can all absolutely be automated today using AI.
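As a concrete illustration of the tribal knowledge he's describing, here is a minimal sketch of context-aware severity ranking; the asset fields and rules are hypothetical stand-ins for what an AI system would infer from your environment or ask you about.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    environment: str   # "prod" or "dev"
    crown_jewel: bool  # business-critical system?

@dataclass
class Detection:
    technique: str     # e.g. an ATT&CK technique ID
    asset: Asset
    base_severity: str # vendor's out-of-the-box rating

def contextual_severity(d: Detection) -> str:
    """Re-rank a finding with org context that used to live in people's heads."""
    if d.asset.crown_jewel:
        return "critical"  # the same signal on a crown jewel escalates
    if d.asset.environment == "dev":
        return "low"       # noisy in dev; don't page anyone
    return d.base_severity

# The identical detection lands very differently in two environments.
sig = "T1190 exploit attempt"
prod_db = Asset("payments-db", "prod", crown_jewel=True)
dev_vm = Asset("sandbox-vm", "dev", crown_jewel=False)
print(contextual_severity(Detection(sig, prod_db, "medium")))  # critical
print(contextual_severity(Detection(sig, dev_vm, "medium")))   # low
```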
Ashish Rajan: Right. So, to your point, what you described is the process of defining a detection and the people involved with it, which is the tribal knowledge, and that part can be automated.
Caleb Sima: Yeah. So out of the box, the thing that AI is going to add to a product is this: when you install a SIEM today, it requires people to understand, hey, once I ingest this data, you need to understand my risk profile, my threat model, what's important in my environment, what is relevant in one place versus another. All of that stuff is no longer required. The SIEM itself will either ask or [00:44:00] integrate and then determine these things for you, and you can just look at it and confirm. You know, trust but validate.
You can just look at it and be like, that's right.
Ashish Rajan: Interesting. Obviously, yeah, a hundred percent. So you're saying that the third party itself would be able to do it?
Caleb Sima: The third party will be able to accomplish all the things that normally require you, as a first party, to do them.
Think about what AI provides. Today, if I want to be black and white about it, there are two situations: there is a fixed, static solution that is given to you, and then you have to customize and personalize it to make it your own, right? Those are the two stages.
Ashish Rajan: Yeah.
Caleb Sima: And that's just the way it was done, because the third party never had the knowledge or the tech to do this, right?
Ashish Rajan: Yeah.
Caleb Sima: But now, every time you buy a vendor product, AI is gonna allow it to be [00:45:00] personalized to you automatically. It becomes a personalized, customized version for you.
And that's the magic part, that little Venn diagram overlap between those two stages, where I think AI provides value: I can take a SIEM product, and when you get it delivered and set up, it automatically customizes itself to you. That's the part that I do think AI is quite good at.
Ashish Rajan: Interesting. And I guess, to your point, it depends on the organization allowing the data to be consumed by the third party as well.
Caleb Sima: Of course. Which, by the way, applies to any product today,
Ashish Rajan: right? Yeah. So if you're comfortable doing this with a third party today, cloud or not cloud doesn't really matter, you probably would be okay to do this with your AI provider tomorrow?
Caleb Sima: Yes,
Ashish Rajan: absolutely. Because the data and the access would not be any different from what happens in the cloud world today.
Caleb Sima: Yeah. The only difference is that now the third [00:46:00] party can, quote-unquote, do much more amazing things with the data it already ingests.
Ashish Rajan: Hmm, interesting. So, thinking forward then, what's the build component that is left for the future?
Caleb Sima: Obviously there are always going to be build components. But I think it's less about the standard things that are well known across companies. Take the example I gave, a SIEM that's just customized to you, right?
Yeah. A SIEM is something needed by all companies; quote-unquote all companies need a detection mechanism.
Ashish Rajan: Yeah.
Caleb Sima: And that is now just customized to you. Versus, let's take our Bank of America example: Bank of America probably has very custom, unique problem spaces that only they have.
A great example might be that they built custom applications so their employees have the [00:47:00] ability to query, look up, or manage customers, right? These are probably internal applications that are very complicated. Now, who in the detection world makes the right controls, the right detections, for that piece of software? There you probably have some more customized things you've gotta go and figure out,
Ashish Rajan: right?
I think you made me think of something really interesting as well, and this is probably my final point on this. You know how DevSecOps was kind of an afterthought? Because we never included security in DevOps, which is what led to the whole DevSecOps thing; security is important, but DevOps should have had security by default to begin with. Now, with all the AI capability that a lot of organizations are building,
and the thinking here, to what you said, is: your SIEM, or whatever security product you use, is very personalized to you, and the augmentation you're doing is for the custom applications, which most people have. A lot of them, like a hundred, two hundred, whatever. [00:48:00] I'll just use the Bank of America example again: it's not one or two products; they'll have hundreds of custom applications running internally that require their own context, where whoever, Ashish the detection engineer or Ashish the security engineer, has to know the context and apply the right kind of filters on this personalized tool to get that value.
I think where it would be really interesting is if the engineering people catch up on this and start including security by default in their version of how they build applications, where we come back to the DevOps cycle.
Caleb Sima: I mean, this goes back to the original topic of this episode, which is, you know, secure coding, as an example.
If you are using Claude Code, there is a slash security command that automatically does security detection and finds vulnerability issues when you use Claude Code. And obviously now, with Google,
Ashish Rajan: DeepMind's CodeMender.
Caleb Sima: Yeah, by DeepMind; it automatically does this for you, for sure. And I do [00:49:00] believe the future of AI coding, to some degree, again making the assumption that we continue to progress at the level at which we've been progressing, hitting milestones,
is that it does become a black box, right? At the end of the day, going through and actually writing lines of code becomes a more specialist skill than a general skill. So things like unit testing, security testing, and validation are just built into the template of how to write proper code.
And that's, by the way, the way it should be. Yeah, a hundred percent, that absolutely is the way it should be.
Ashish Rajan: Yeah. And maybe a lot of us will just continue working on the non-AI pieces as the AI continues to evolve into this black box, which has security by default, is fully automated, and evolves on its own.
Food for thought for people who are listening and watching. But I definitely agree with you, man. There are definitely patterns where maybe we finally have that full-circle moment in the next few years, where security is part of the default and we are just building for custom applications.
For people who are tuning [00:50:00] in, definitely let us know what you think. Is that the future you believe in as well? Drop it as a comment; it doesn't really matter where you are, you should be able to drop a comment on Spotify as well. But thanks so much for tuning in. Thanks, everyone.
Thank you for watching or listening to this episode of the AI Security Podcast. This was brought to you by techriot.io. If you want to hear or watch more episodes of AI security, check them out on aisecuritypodcast.com. And in case you're interested in learning more about cloud security, you should check out our sister podcast, Cloud Security Podcast, which is available on cloudsecuritypodcast.tv.
Thank you for tuning in, and I'll see you in the next episode. Peace.
