RSA Conference 2026 is here and the AI agent hype machine is louder than ever. In this episode, Ashish and Caleb cut through the noise and arm CISOs, practitioners, and security teams with a clear-eyed view of what's actually happening in AI security this year.From the vendor floor at RSAC to the future of internal security automation, Caleb and Ashish speak about why 70% of "AI agent security" vendors can't even define what an agent is, why security team consolidation around 2–3 major platforms (plus internal AI capability) may be the most underrated CISO strategy of 2026, and why the window from vulnerability disclosure to live exploitation has collapsed from months to under two days.They also explore the emerging idea of a centralised AI automation function inside security teams and why the future of security isn't buying more point solutions, it's building internal AI capability on top of a standardised vendor stack.
Questions asked:
00:00 Introduction: Preparing for RSAC 2026
03:50 The Year of the "AI Agent" Marketing Hype
06:50 The Secret to AI Context: Enterprise Search (Glean)
09:50 Why Your SOC Needs a Centralized AI Platform Team
13:30 The #1 Question to Ask Vendors at RSAC: API Access
16:50 The Myth of MCP (Model Context Protocol) as the Gold Standard
20:50 Why RSAC is Too Noisy: Vibe Coding & 1,000 New Startups
22:30 Is Capital Raised the Only Signal of Trust?
24:50 Prediction: CISOs Will Fire 500 Vendors and Consolidate
30:50 The Build vs. Buy Debate for AI Security Features
35:50 Surviving RSAC: Sorting Signal from Noise
38:50 The Problem with "End-to-End" AI Agent Claims
41:50 Are AI-Driven Attacks Real?
44:50 The Zero-Day Clock: From 5 Months to 2 Days
48:50 RSAC Events: Live Recordings and CISO Panels
Caleb Sima: [00:00:00] If you go look at any vendor at RSA, no matter what they do, they're all gonna say, we have agents guaranteed a hundred percent. The security researcher published a blog post, and within two days, someone exploited that exact company using that exploit.
Ashish Rajan: When I walk into RSA and I'm having a conversation, what are the API capabilities these?
Does it work with Claude Code or doesn't work with copilot? If someone tells them, Hey, I can track your entire AI agent. End to end, I would think there are definitely holes in that statement.
Caleb Sima: I asked them first, can you define to me what an agent is? I think 70% of the people can't answer that AI itself and Vibe coding has created a thousand more cyber startups.
I cannot tell the difference because all the marketing is the same. What if I were a CISO and said, I'm done with this. I'm done with the 500 vendors that I'm dealing with. I'm gonna clear everything out.
Ashish Rajan: One of the biggest cybersecurity conferences of the year, RSA has come around again, and in this episode Caleb and I are talking about what can you expect as a CISO or a practitioner walking into RSA this year if you are attending, or if you're [00:01:00] not attending as well, but you're considering what your roadmap should look like for the next five years.
Yes. You heard that, right? Even though you're planning for six months, I think we have a possible more for what you can plan for in the next five years as your organization continues to integrate more into AI and you start seeing more AI threats. Considering how short the time window is from realizing there's an exposure because of vulnerability.
All the way to exploiting it is quite short. These days, I'm talking minutes and seconds rather than like months before you actually get exploited. That's the world we're moving towards. So going into RSA or planning for year 2026 Security roadmap should definitely include some of the things we called out in the interview.
If you know someone who is heading to RSA, you wanna check out live recording that Caleb and I are doing at RSA and we have a CISO event that I would post on LinkedIn. So if you're connected with my LinkedIn, you probably see that over there. Uh, but otherwise I look forward to seeing hello to a lot of you at RSA and BSides sf where we will be in a couple of weeks.
Or by the time you hear the recording, we will be there. And I look forward to seeing you all and hearing more about what you're thinking about both AI security podcasts well as Cloud Security Podcast. So I hope you [00:02:00] enjoy this episode and it arms you for the knowledge that you need to look into how you vet security vendors for 2026 and possibly for the next five years.
And if you know someone who was looking into this as well, definitely share this episode with them. And as always, if you are here for a second or third time, I really appreciate the time you're spending with us. Thank you so much for doing that. And if you can take a quick second to hit the subscribe, follow button, it's free for you, but means a lot because the algorithm then pushes us to a lot more people so that we get to help a lot more people as well.
I really appreciate that. We are on all podcast platforms, including Apple, Spotify, YouTube, and LinkedIn. I hope you enjoy this episode and I'll talk to you soon. Peace. Hello and welcome to another sort of a security podcast. This is the RSA special for people Don't know. I don't think I even, I know the field form of RSA, but R SAC is what it's called now.
RSAC, which is probably one of the biggest, our, and the largest cybersecurity conference in the US I'm sure. It's called this competition between RSAC and BlackHat, for which is the largest. Just say for at least one of the largest, we've spoken about this last year and that time of the year again, that it's come back.
We, I'm [00:03:00] not gonna call it the Fringe Festival of, uh, cybersecurity, but what I wanted to focus on today, uh, at least the first half of the conversation, is that we've been talking about ai and I was telling you this offline that. Now both Claude as well as OpenAI have like a security software and we are not gonna go into what does that mean?
And we have a record an episode on that. So I'll let people catch up over there. And you've been going to RSA longer than I have. I am curious from your side as to what do you think would be the biggest things you would see? Obviously you're invest in this space as well, so it'll be really interesting to see what are some of the things that you are expecting to see?
What, oh. People who would be listening and watching this, what can they expect to see a lot of at RSA, obvious being, being ai, but what flavors of AI
Caleb Sima: Do I, I guess my question is like, is that a question? Like, I feel like you see the same thing at RSA.
Ashish Rajan: Do I? Can you repeat of last year? 'Cause you know, last year was supposed to be the year of ai.
And now?
Caleb Sima: Yeah. Well it's all it is. The year of it. It was the year of AI models, agents. I [00:04:00] think it's gonna be more obviously agents this year, right?
Ashish Rajan: Yeah. So last year was primarily on the 'cause. I mean, to put some context, last year we were still talking about hallucination. We were still talking about things like, I'm primarily working with my LLM models.
API security was important coming in 2025. As 2025 progressed between mid the year to like this, early this year, AI agents kind of explored everywhere. That's like the only conversation. I think OWASP came over the top 10 just for that as well. What specifically an AI agent do you think would the focus, we and I, obviously we don't know the company, but I'm just curious in terms of is this like when we say AI agent is the new wave, what is this the new wave for ai, for security or security for ai?
Caleb Sima: Both.
Ashish Rajan: Which one are you banking on more of at RSA or the here? I'm curious.
Caleb Sima: Definitely on the, probably AI security for AI agents is going to be the top thing to talk about. However, any existing vendor or company are all. All [00:05:00] going to put, we also have agents that do the same security. So if you go look at any vendor at RSA, no matter what they do, oh, I'm a firewall.
Oh, I am a detection system. Oh, I'm a scanner. They're all gonna say. Ai. We have agents that scan agents that protect your traffic agents that do this thing guaranteed a hundred percent.
Ashish Rajan: I definitely agree, but I also feel what is top of mind for people and probably would not be covered as much at RSA, would be the AI for security piece in terms of what should teams do to build AI for themselves as an organization.
I guess my hypothesis so far has been. As we keep going with all these companies, building AI agents, providing AI agents to cybersecurity teams, we as defenders or red teamers or whatever who, who are on this side of the world, of the, of the cybersecurity world, where we are consuming these products. We are gonna be building agents or AI capabilities that would consume this information, add our organization's context onto it, [00:06:00] and then do whatever with it.
That part. I think still, I don't think that no one say, no one's talking about it because you can't make money from it, I guess, unless you're a consulting firm.
Caleb Sima: Yeah. No one's, I mean, no one's gonna talk about that because it's not about selling a product, right?
Ashish Rajan: Yeah.
Caleb Sima: Well it is to some degree. I mean, you could state like what you're, what you're stating here is that how do I enable internal security teams to build things themselves better, right? That's right. And faster.
Ashish Rajan: Yep.
Caleb Sima: Um, in order to accomplish these tasks that vendors did before that, now I can do myself.
Ashish Rajan: That's right.
Caleb Sima: And, and the way that you can sell them, quote unquote, developer tools, tooling in order to do that, but you're not gonna see any vendors talking about that.
Right. The, to your point, this is about like, you know, what's the most helpful thing for an internal security team in building a tool for them? Is if they have Glean running inside of their organization.
Ashish Rajan: Right? Ah, yeah. Actually you should send from context for why Glean.
Caleb Sima: Yeah. Well, so Glean is, to those who don't know, Glean is sort of the hot up and coming internal [00:07:00] enterprise search.
Yeah. For your company. This replaces your old school internal wikis portals, SharePoints, you know, different things like this. Yeah. This is AIified version. It you can chat, search your enterprise. Functions and they provide an API. If you're a Glean customer, that obviously internally you can use. And so for those who are AI forward, security teams are connecting using agents to use that to get context.
So you can get things like Slack channel discussions or you know, calendar or document analysis. All of this can come from Glean. And so you can pull this into your agent to make decisions. And so, you know, leaning forward, security people are, are building those things, but you know, you're not gonna see that as a security vendor that is just a AI context helping enterprise products that's
Ashish Rajan: there.
And maybe they may have a different version of it. I think Databricks and others also do a version similar where a lot of people are putting all the data in that connects into your Slack, [00:08:00] connects into your absolutely SharePoint. Um, but then also on the other side,
Caleb Sima: but they don't target Glean specifically.
Yeah. But yes. Yeah. Yeah. So
Ashish Rajan: I guess, yeah what I'm going with this is you need some kind of an enterprise search capability across different data bases, for lack of a better word, data endpoints in your organization. And I guess where it gets a bit muddied is, uh, Atlassian has their own notion, has their own.
So if you're in an Atlassian ecosystem, JIRA Confluence, all of that, or your Windows ecosystem, and maybe this is something we spoke about as well, is there like, obviously. If someone in that, on a security team wanted to go down the path of, uh, say, Hey, I'm gonna go to RSA and have a conversation with someone, but I also need to understand how do I do this for my own team?
You had some interesting thoughts before we started recording around why is there a need for a centralized so security team to have an automation within their team, not just rely on the work
Caleb Sima: that Well, you're talking about the, oh, okay. Yeah. So, prior to this recording starting, I was making a note about a blog post that I should write about.
[00:09:00] About internal security team automation functions. Yep. Yeah, so like maybe we, we, I mentioned this I think in a previous podcast, but how I think forward-leaning enterprises are, or should be going down this sort of AI platform team. Approach similar to how the cloud team or infrastructure team is created.
Yeah. You need standardization, centralization, and management of your ai, similar as you would your cloud, and I think in a security team. So when you think about how a security team is architected today, you have your verticals, you know, largely most security teams are created by the verticals. I have a detection and response team.
I have an AppSec prod SEC team. I have a cloud security team. I have a security engineering team. Et cetera, et cetera. And largely in these teams, I would count, I would say the security engineering team comes the closest to the team that has been responsible for managing potential internal security tooling.
Mm-hmm. And internal security glue and [00:10:00] platforms. But when you start thinking about AI now and what it can accomplish inside of a security team, everyone at least so far is similar to an enterprise company. They're all independently working on their things that help them automate or become, more efficient.
So the detection response team has AI in the SOC or AI in detections or vault management team, AI and scanning ai, red teaming, and they're all sort of disparate. Sort of separate to sorts of things versus actually there needs to be now a centralized function that looks across all of these and finds ways of pulling these things together so that the detection team can automatically know about the, what the vulnerability teams are finding.
How do you manage the AI so that both you're cost efficient, both so that you have abstraction of it so that you can manage it better, so that you can look across the teams and identify the things that no team own. Own like executive reporting, like all of [00:11:00] these kinds of things that, oh, we can use AI to be more efficient at.
And so is there, or should there be sort of centralized functions focused on AI automation and management inside of a security team now? And should CISOs start thinking about. That being a function, uh, or an area that they should invest in.
Ashish Rajan: And I guess to add to this as well, you don't really have to be someone from a programming background to start doing this as well.
Maybe you don't even need to go to the extreme of using cloud code, terminal access, all of that as well. To what you were saying earlier, if you have enterprise search capabilities today, or even if you don't have one with, at least within the cyber security team, you should be able to pull data points from different APIs that you have available to start doing automation.
Would would that be a fair statement?
Caleb Sima: Yeah, I would say well. If I were a CISO in our building, the, the person would have to be technical in the sense that they would have to at least have the ability to understand how to use AI to get the job done, vibe, code, create, manage. All [00:12:00] of those things need to be done.
Yeah. And they should be able to have the capability looking across and then being able to combine or see the benefits. Yeah. Yes. Which by the way, just to bring up to the original topic of this. Call has nothing to do with RSA, uh, none of this is gonna be represented.
Ashish Rajan: RSAI wouldn't bring back to RSA because I, I think, and the reason I say it is relevant is because if people are walking into R rss a without having a knowledge of like, you know, I'll flip the table on this in terms of the product companies that you work with, if they don't have an API available that I like, so imagine someone is listening to, or, or, or hearing this conversation and they go, oh, okay.
So for me to have capability. For the preparing for the future. 'cause I'm building my roadmap as I go to RSA. Now I'm thinking that, oh, my security team needs to have some capable people, even if it's like a handful of people who should be building these connections between say, my detection engineering, my risk management, my reporting, all of that.
But, so then when I walk into RSA I'm, when I'm having a conversation with a potential detection person, I'm asking them questions around what are the API capabilities? [00:13:00] Does it work with Cloud Co or does it work with copilot? I think that's where I feel it's still relevant. For people who are going to RSA to be more informed about the kind of questions.
'cause I imagine like many, like what I did last time or at least last few times as I went as a buyer to RSA, my questions were generally more around, is this a replacement tool or is this something that I already have? Yeah. That was a phrase that I was going with instead of, Hey, does this help me? Plan for the next five years that my team is gonna go down the path.
Caleb Sima: Yeah. Your, your point is like, okay, hey, when you go into RSA and there are vendors now that you're discussing, you have to think ahead. Yeah. Which is, Hey, you need the capability to query an API and have it be quote unquote AI accessible. Right? Yes. And so in order for it to be AI accessible, technically not only does it have to have the API capability to do so, but second to that, you have to look at your costs.
Are, is this vendor charging you more mm-hmm. Based off of usage of that API And if so, what does that look like? Yeah, right. What is, are they able to handle? The [00:14:00] kinds of requests that you're, you're needing what is there as a, as a service. It's not much of a ui, but it's now about, what's your reliability, what's your uptime?
All of these things become more important if your AI agents are going to be using this 24 7 constantly, versus the way most security vendors build their product, which is they don't necessarily, depending on who you talk to, obviously. Yeah. If it's a vuln scanner, they're not thinking about uptime, right?
Yeah. They're not thinking about reliability. And so when these things like VUL scanners turn into services that are now accessible by API, are there cost differences? Are there reliability things you need to consider because you want your internal team to be able to access it via ai.
Ashish Rajan: Yep. And I guess I'll add another layer to this as well.
I think I, I love the vulnerable team management example or vuln scanner example because most of the cybersecurity product industry should idly be moving in that direction of being a service oriented, like I call the service when I need to as an [00:15:00] organization. Like, I just, I dunno why we keep going for Bank of America.
Let's just say if I was Bank of America and I have the, my cyber security team has a need for a vuln scanner. A cloud security product, a uh, detection engineering product. Now the promised client of what cloud used to be that I should be able to switch from one cloud to the other. I feel we are in a similar world where as long as my scaffolding of my organization, which is Bank of America, can work with, say I have built AI capabilities for detection, GRC.
Insert the 10 different things you care about inside your team shouldn't really matter what provider that I'm using on the other end. Yeah. 'cause it should be able to interpret and it's not as simple as, Hey I have an MCP 'cause that could be deceiving. 'cause sometimes it is like an ap uh, for people who may think, oh, MCP is the, the gold standard.
'cause it, all it means is someone has given an API access to their uh, services. And it also means that just like any other API service, they could only have five functions in there. Nothing else. They may even have no data in there. If, but if you have advanced [00:16:00] users who are using cloud skills, we're now talking about marked on files.
We don't even need the as long as we have an API capability, we don't need an MCP to begin with. So the differences like that as well. Would you agree?
Caleb Sima: Yeah. Yeah. I mean, I think most M mc, you know, does MCP become a requirement for products? No. Right? Like you either, you can build your own MCP very, very quickly.
But the key to your point is, well, there needs to be an API and that API has got to be accessible. And so as long as that is there, that's what works. The other thing I was just thinking about is, as these vendors, the benefit of AI for vendors is about. Customizability uniqueness, right?
Personalization is what AI should be able to bring. And so when you think about vendor products in general, you know another great question I was thinking about is, okay, they're gonna say, we do AI agents, or we do ai, blah, blah, whatever it's is. Then the question is, okay, how is that your usage of AI gonna personalize my experience?
Mm-hmm. Right? And let me tell you what I [00:17:00] need to get done. How then does your AI help me get that? Accomplished. Mm-hmm. How do I get a personalized version of your product? Uh, because of your AI statements?
Ashish Rajan: We, what we're describing, you know, the MoltBook thing, uh, malt book, whatever it's called, the, the community of workspace of AI agents as open claw agents are to talking to each other.
I think what you've touched on is an interesting one because you almost need something like that. With the provider where you have an I won't say a workspace, but almost like a shared Slack channel between you and the vendors you work with. So you a, you have an audit log of what the interaction is, but at the same time you realize where the mistakes are.
So your agent picks up on the fact that, by the way, there is no API for these 10 things. So before I go to RSA, or at least say 2027 RSA, I have a list of things that my current vendor does not do. Which, by the way, I'm pretty sure a lot of people today spend weeks trying to figure out what are we missing out?
Hey, which vendor am I meeting so I can have a leadership meeting with them for understand their roadmap and trying to figure out is this something that I want to [00:18:00] continue within this year? Like all those questions would be so easy to answer. And more data driven instead of someone's weeks of effort trying to figure out, hey, uh, what do we actually use?
What is, what do we actually need and what kind of capability do we build towards?
Caleb Sima: And actually, you know, for any company right now, the ability to get that data easily with AI is very easy. So you can come data driven very quickly. You don't need to task a team to quote unquote say, okay, CrowdStrike or EDR or whatever my product is, go run it.
Figure out what the API is and what am I missing in, in order to make it more effective.
Ashish Rajan: Yeah. Yeah, because usage is a massive thing. I think we probably don't talk about this enough in the industry where many CISO of bought products that are still not even used by the entire organization because it's too hard to integrate, or it was just something that we thought would be great, but we lost the person who was gonna implement it and just lies there.
And we have one year contract that we just keep paying money for. There's so many probabilities there. So bring that back to this conversation of [00:19:00] people who are going into RSA now, at least the kind of question they can ask of vendors on the floor, uh, we've covered that as a I guess a practitioner or a CISO.
Is there like a minimum set of understanding I should have? Like we spoke about obviously MCP and AI and all, uh, a APIs as well. Do you think there's like minimum set of, doesn't need to be a deep dive, but at least here, these are a few things people should know before they walk into RSA, so that they don't, so that they can separate the signal from the noise.
For whatever area of cybersecurity they're looking at. Obviously you're investing in these companies and a lot of that AI security space. I'm curious, is there like a, obviously they have the cybersecurity knowledge, so we'll give that as a given more in terms of what would you be seeing as capabilities or things that people should understand so that they can separate the signal from their noise?
Caleb Sima: Honestly, you know, like. I wish I had some great answers to this, I'm gonna answer this the way I would answer this, if there was no recording, right? Yeah. And like there's just practitioners sitting here. Yeah. You can't, there's so much [00:20:00] crap and AI this year. Mm-hmm. My guess is not only was last year's RSA busy, but this year's RSA easily doubled probably in terms of busyness because AI itself and vibe coding has created a thousand more cyber startups, right?
And like. I can't tell the difference, man. Yeah. Like I personally, who have, I am someone who've been in this industry since I would say its inception.
Ashish Rajan: Yeah.
Caleb Sima: And I cannot tell the difference because all the marketing is the same, all the brand is the same. The only thing that stands out is what Gorilla marketing tactic you have decided to use.
RSA in order to get eyeballs, right? Whether it's
Ashish Rajan: without meant by os a.
Caleb Sima: Yeah. Whether it's a flash mob of 50 people wearing your bright yellow t-shirt with your logo on it, to the trucks driving around with annoying music sounds on, you know, digital dashboards or I don't know what's gonna happen next year, but like, it's just the attack on the eyeballs is the only way to [00:21:00] get your brand even somewhat recognized.
So like the thing for me, I feel is. People, uh, you know, take a crap on, analysts like Gartner. Mm-hmm. Right. Or, but the thing that I just continue, Gartner is gonna continue to hold itself more and more relevant because there's so much noise and the noise, the only way you can filter through that noise.
Like if my team was like, Hey, we need a new Vuln scanner.
Ashish Rajan: Mm-hmm.
Caleb Sima: What's the latest, best kind of vuln scanner? Honestly, I'd be like, go freaking talk to the gardener people. Yeah. Or go ask in the security slack groups. Mm-hmm. Those are the only two places I feel you would get some signal out of all of the noise because you sure isn't gonna, you're not gonna run into it at.
Freaking RSA, right. And you're not gonna find it at RSA because there's gonna be like a ton of them, and you can't tell by their sales pitch, their whatever's written on their banner, whether they're different than each other at all. Right? Like, you just can't. And, and I just think it's a waste of time to even try.
And this, and, and by the [00:22:00] way, I'm gonna go a little bit on a, on a rant here, but like, you know, and the other thing that frustrates me about this,
Ashish Rajan: yeah.
Caleb Sima: Is that, of course in marketing, illa marketing, you know, the goal is about getting eyeballs. But the other thing that frustrates me as, as someone who's also sort of in the venture and in this industry is that the other way companies are defining uniqueness is by the amount of capital raised, right?
Oh yeah. So now, if you're a startup and you go and you say, I raised a hundred million dollars seed round. Yeah. Or an A round, and they put it right on their webpage. Yeah. Like you go to their website, it says. A hundred million dollar round raised. Yep. This is the way that they are now. This is what we have come to Mm.
In our industry that separates to you from whether you're good or not, is how much money you've raised in capital. Yeah. And as a customer. It is at least some signal that says, oh, maybe I should look at them because I am now associating money raised with trust. Mm-hmm. [00:23:00] That, well, if someone was able, if someone trusted them enough to give them a hundred million dollars, then they should be good enough to say whether we should evaluate them as a solution to our problem.
Ashish Rajan: Yeah.
Caleb Sima: Right. And it's raising their signal enough that says, out of this market.
Ashish Rajan: But to, to your point, it's may, maybe the, the goal is not a disaster, but it's the, it's a way you get attention for the right people. Passionate about the problem the right way is harder to do today versus say when you would've back in your CSO days or back when your founder as well yourself, if things would've been a lot more easier from that perspective, the noise was not there.
And I guess RSA did not have a lot of restrictions as well. 'cause RSA now has a lot of restrictions for how you can advertise as well and what you can do, what you cannot do. And there are only a handful of things that are left to kind of make yourself stand out to what you said.
Caleb Sima: No, what I see what I mean.
Disaster. I just feel like as a practitioner it's just, it's so frustrating. I, I almost feel. My theory actually, Ashish, I think, I think I may have run [00:24:00] this by you before, it's like we're going to go on the opposite end of the pendulum here at some point where I think CISOs are gonna get so frustrated by the fact that there are literally 50,000 security vendors to solve one independent feature problem.
Ashish Rajan: Yeah,
Caleb Sima: and I think AI is gonna help us do this. Actually, this kind of loops back into our little bit of our earlier conversation. I'm kind of, I'm kind of tying it all together in my rant, right? So is that, I've always had this thought of, you know what, like AI gives me the ability to both customize. Focus deep, be unique in areas that I never was at scale.
Ashish Rajan: Yep.
Caleb Sima: And the thing that that happened with security is why do I have every single different vendor for every different product problem? It's because it's a, we have a sort of, um, we have an arms race problem, right? Yeah. I have a, I have an, I'm a defender and an attacker. Attackers move fast.
Faster than defenders can move. So startups will come to solve a unique [00:25:00] problem that seemingly attackers will create. And as a defender, I need to be on the cutting edge of that attack in order to defend. Yeah. So I'll buy that vendor in order to be best to breed and unique. But I think here is a prediction.
This is gonna, this may rust some feathers, by the way. Mm-hmm. But. What if I were a CISO and said, I'm done with this. I'm done with the 500 vendors that I'm dealing with. I'm going to pick one or two that solve 95% of it. I'm gonna be all in on Palo Alto. Yeah, as an example, I'm gonna clear everything out.
I'm gonna go with CrowdStrike on End Point, Palo Alto for everything else.
Ashish Rajan: Yeah,
Caleb Sima: Palo Alto is a decent enough, good enough product, but what it gives me is I can drive a super low price by getting consistency. In all these areas in my enterprise and where I'll make up the difference, I'll use ai, right?
Like I have a Palo Alto firewall, I have a Palo Alto proxy, I have a Palo Alto Vuln scanner. I have a Palo Alto Code Analysis, a [00:26:00] Palo Alto, whatever. But what I can do is I can use AI to make up where Palo Alto now lacks. But what Palo Alto gives me is the infrastructure, is the hook ins, the delivery model to get my data to change my stuff using AI on top of it.
So Palo Alto may not have the best static code analyzer, as an example. Yeah. But combined with my AI team, I can drive on top of Palo Alto's static. I can make it way better, right? Yeah, I can. Five exit. To make it really good enough to do it. Palo Alto may not have the best proxy analysis, but with my ai, I can absolutely make that proxy analysis and it gives me all of the network.
So my internal security team now is AI focused.
Ashish Rajan: Yep.
Caleb Sima: Building verticals deep enough on top of a standardized architecture. By the way, this is not a Palo Alto commercial, like Palo Alto not, oh, I gonna
Ashish Rajan: say like you should probably buy from stocks in there.
Caleb Sima: But like, you see my theory, right? Yeah. My theory feels like why should I deal with all of these vendors, [00:27:00] you know, almost as an operator when maybe I should just go all in on a couple
Ashish Rajan: Yeah.
Caleb Sima: Drive my cost down with that vendor by me saying, Hey, I will go all in. Yeah. On you. Drive my cost down, have. Three or four vendors to manage and drive everything else that's vertical that I need. That's deep expertise, use AI and build my internal AI team.
Ashish Rajan: Yeah, and I'll go one step further because there are already signals in the market for this, both by CrowdStrike, als, like all the publicly listed companies, they're all making moves.
You can see. So Palo Alto acquired a observability company. And CrowdStrike apply. Uh, obviously they've been acquiring AI security companies as well, so has CrowdStrike.
Caleb Sima: Yep.
Ashish Rajan: Yeah. But you can already see movement in the space where they realize that in the future to what you said, things I would care about, I probably would.
If I don't see that feature from a vendor, I'm just gonna build it myself and just have some source of. Quote, unquote intelligence that feeds the latest threat research for me in that particular space, which is, I mean, if you look at it first principle, all [00:28:00] that we are doing at the end of the day as a cybersecurity team is we are collecting all the latest research on, Hey, how can I be hacked?
And then consolidating that into the top five. That are potentially impacting my organization and I just work through on either removing them and then I repeat the cycle every time a new feature comes out or a new product is released. That, I mean, that's essentially super simplified version of what cybersecurity does.
And to exactly what you said at the moment, before ai, this ecosystem was completely broke down. GRC had a product. AppSec had multiple products, cloud people had multiple products. There was a red team product. But in a world where there are 50,000 wide coded products who are just trying to. Validate their prototype by ASO goes, Hey, I'll give you a cheap deal.
The flip of it is that I would probably lean more on a publicly listed company, which has already publicly listed what, say Palo Alto CrowdStrike as any of the other ones as well that are publicly listed. I would use that as a justification for, hey, they are gonna make more acquisition and probably they continue to make more acquisition.
Where I can justify to the board and to my CIO, CTO hey. [00:29:00] Publicly issued company. They're not white coded, they're not gonna go away tomorrow. But any other start of tomorrow, I don't know if they would be there in six months time or they would not be acquired or something else may happen.
Caleb Sima: And the thing about is, you know, you know like, and we're, we'll pick on PA specifically obviously 'cause we're on that message, but PA's always in the acquisition, innovating space anyways.
Yeah,
Ashish Rajan: yeah.
Caleb Sima: And the thing about it is that from PA's perspective, you're not getting the best product, right? You're not best of breed here, but the question becomes is if I get a mediocre, let's just say a mediocre static code analysis from Palo Alto.
Ashish Rajan: Yeah.
Caleb Sima: The gap between mediocre and good enough. To satisfy what I really need done is that gap big enough that AI can make that up, right?
Mm-hmm. That my Claude Code or whatever my AI orchestration system does, can say I can take Palo Alto's static code analysis.
Ashish Rajan: Yep.
Caleb Sima: And then I can make it up through my customization to get it good enough, right? Yeah. And, and it's customizable to me. Right [00:30:00] at that point. Yeah. And then that other 10%, is that worth it?
That I have to go out to another vendor in order to buy or not? Probably not. And if I can look that across the entire security stack, can I answer that question? Which is, where does Palo Alto get me? Yeah. And then how much further of that gap can my customized AI bring me? And is the rest necessary or not?
Yeah, and like there are some that it is, right? There are probably other areas that you're like, it is worth probably making up some of that. But my guess is you could eliminate a lot of vendors. Yeah, and just eliminate the headache and the choice
Ashish Rajan: of
Caleb Sima: that.
Ashish Rajan: I mean, finance would be so happy if that was the case.
I remembered getting into argument, so why do we need another one of these static code analysis tool, or why do we need another one of these cloud tools? That conversation would just completely go out of the window at that point in time. And yes, maybe to what you were saying, and I guess what both of us are talking about before.
Goes back to the automation for CISO who are probably walking into RSA thinking, I'm planning ahead. 'cause a lot of them are just browsing for what should I plan for in my next [00:31:00] budget? Probably not, not they're not gonna buy straight away, but at least they're planning for it outside their AI capability or API capability for YouTube plugin.
Your own internal ai. I think this is a good metric to go down the path of at least having an understanding for if it was to come down to, I only need one of these. And to be fair, a lot of people are going through this exercise today where cost cutting is important for CISOs. So they are cutting and recontracting at the moment or at least validating, Hey, do I need five of these or can I just do one?
Yeah. What is good enough for me is we are already making this call. So if though, if whatever that quality of made for the new product already has API capability, then I don't know. It sounds pretty reasonable that you don't need another product for a good five, six years after this.
Caleb Sima: Yeah. The question is always gonna be, do I have the talent to make up that gap?
Ashish Rajan: Yeah. Yeah.
Caleb Sima: That is gonna be the obvious question.
Ashish Rajan: I think the argument that the community seems to be making, 'cause I went down the path of old build versus buy 'cause this is kind of in that territory of bill versus buy where.
Caleb Sima: Yeah,
Ashish Rajan: like, you know, obviously the B besides SF happens just before [00:32:00] RSA where there's a lot more technical people and I enjoy that community because you can go really geeky, but the thing is not many people to what you're saying may be capable enough to do something like that for themselves.
Like as an internal, this is my internal, whatever the, the tool that I wanna create. I wonder if there's a product gap there. Is that something that can be. A plug and play. I mean, let's just say MCP for security, for lack of a better word, where any organization can, 'cause if you think about cybersecurity, it's pretty well defined in terms of do I need a GRC team?
Do I need a AppSec team? Do I need a detection team? Do I need a SOC team? We already know the things we need. It's not that we are gonna recreate that. It's just more like, Hey, can I make something? And maybe this is a startup idea, walking out and MCP for AI security teams where they can plug in, they, I mean, you become.
I think there was like a Jesus, there was a a PM wave that came late in between where it was application security, posture management, the idea, so the idea was that, hey, I get it. You have an AppSec product, you have a [00:33:00] Cloud SEC product, but you want to know about product security. So that just bring both of them together and make it a PM.
I feel like this is, I'm not saying this is a as PM play, but more like a, at the end of the day, if someone makes a product that allows me as a non-technical person in a security team just go, Hey, here are my credentials for Slack. Just, you know, and here are my credentials for, let just say Palo Alto, here are my credentials for CrowdStrike.
Just go do your thing. I'll give you the task and come back with what is a thing that I can do and what is a, Hey, you know, the, what's it called? The reinforcement learning where you,
Caleb Sima: yeah. What you're saying is the, the intelligence is good enough to be abstracted now, where actually, if you did do that, it would probably produce fairly reasonable answers or suggestions of what to do.
Right.
Ashish Rajan: That's right. It learns with you. I think there's definitely a gap there where people could just, if anyone should build, I mean, to, to what you said, and I,
but,
Caleb Sima: but by the way, goes directly against every being in fiber and a security person. By the way, when you just said that, here's my [00:34:00] slack creds, here's my CrowdStrike creds.
Ashish Rajan: Oh, yeah, yeah. I mean, I, I'm
Caleb Sima: with you. Just go, just go do it for me. And then every security person is like, yeah, that's the exact opposite of what we are here for.
Ashish Rajan: Think about this way. When Open Cloud was open sourced and people went down the path of just installing it will nilly anyways, how f are they plugged in their Slack credentials?
Everything else? I feel we are at that point where there should be some kind of a tooling available for security teams. And I don't wanna say traditionally security teams has been non-technical. We have, we've always been technical. It's just the level of technicality is different. Like a GRC person probably is not as technical as security engineer who's probably not as technical as Red Teamer, who's probably not as technical as the bug barty hunter, but they're all specialized in.
Niches. I think that whole niche play may be going away, and that's where I feel there is an opportunity for someone to build something which is a plug and play. And it's like meta fluid, right? Or wire shock. It could be open source. Just something simple enough for people to take that and maybe hire a developer or whoever internally gets, [00:35:00] borrow them from another team and customize it to what you want.
Even that doesn't exist today.
Caleb Sima: Yeah. I mean there's a lot of opportunity, right? Yeah. To be able to build some of these things per your, your statement, the goal in, I guess some of this conversation started with RSA, which is how do you tell the difference and what's important to find in vendors? And my conclusion to that is you can't, so therefore let's go to more of this road of, you know, how do you internally.
Increase your capability. Yeah. Um, so that you can rely less and less on multiple vendors and more on a unit, more unified approach. Right.
Ashish Rajan: Maybe that's where the future is headed. I mean, it is headed that way. It's just more like whether you can make the choice. I was gonna say maybe the vendor separation piece for RSA.
Uh, obviously a lot of people would be armed with the data that they have. They, I'm sure people already have meetings booked, events booked. They're like over a hundred events at RSA in a, and for context, people who don't have not been to RSA, it's a three day event and there are hundreds of events already booked.
This is outside the [00:36:00] conference. Yeah. I'm like, it's not even in the conference. Outside the conference, there are hundreds of events.
Caleb Sima: Well, the outside the conference is the con is the real conference.
Ashish Rajan: That's, yeah. Yeah. I mean, and so the outside the conference events are about a hundred events and once you kind of go through that signal noise, you're obviously gonna have a lot of dinners, a lot of entertainment and everything else.
But the bottom line, if you are looking for a vendor, at least, at least in my mind, the minimum criteria should be, it should be API enabled is, uh, is the only criteria, obviously you. Put your context for what you need, but add that layer of, have an API capability so I can plug in my ai 'cause maybe it's not you, but it's your engineering team, the team that makes the AI for security.
It could be, yeah. The, the
Caleb Sima: costs are reasonable and the API is available to the things that which you need, that matter.
Ashish Rajan: Yeah. Actually, and the second thing I would say is observability as well. I mean, there's a whole data place, but the only reason I spoke about the acquisition of the observability company is, uh, by is because I definitely.
Seem to find that AI tools are not the best for observability or just logging [00:37:00] in general for things you care about. Like, you know, we, I think we recorded the episode with Sunil and we were talking about the lack of like markdown files are now executables based on skills.
Caleb Sima: Yeah.
Ashish Rajan: There is no current capability that tells me the intent unless I'm running massive evals consistently, continuously.
I have no idea what the intent is. It's not a SQL injection I'm looking for. I'm just looking for a shishas for $10,000. Does he even have $10,000 in his account? I don't know. And sometimes, obviously I get it, it's an operational risk, not a security risk, but still, where is the line drawn for observability?
We are not talking about what does an incident look like beyond a data leakage. In the case of ai, obviously we are going to a world of AI agents, but identity is a big problem. The chain of custody of your identity is going along. Five, six different hops. Yeah. You don't know the answer to that. And actually maybe that's a good point.
Are there any gaps that come to mind that are, because that's a good signal versus noise. If someone tells them, Hey, I can track your entire AI agent end to end, I would think there are definitely holes in that statement.
Caleb Sima: Well, [00:38:00] you know, to me, I hear that every day. Um, I, the devil's always in the details.
I ask them first, can you define to me what an agent is? And like, I think 70% of the people can't answer that. So then clearly you can't track if you don't even know what an agent is or how to define an agent. And then they, then you, if they do define an agent that like, okay, where is it? When you say end to end, what does that mean?
Is that just,
Ashish Rajan: how are you getting the information that will
Caleb Sima: be, how are you getting the information? Is it running on a laptop for my workforce? Is it in a production environment? You know, in Kubernetes, like I don't, like, tell me what that means. Is a
Ashish Rajan: library that I talk to or what
Caleb Sima: is it? Right. Yeah.
So I think I find, like everyone says that statement, but when you dig into the details, no one can back it up.
Ashish Rajan: So what we saying, AI agent security is still very much. Open book for solutions
Caleb Sima: it is, right? And the other thing that bothers me is just because you can do that, what does that mean? Right?
Mm-hmm. Like, let's just say they, they're telling the truth. I can track everything from end to end in an AI [00:39:00] agent. Yeah. Okay. To what end? What is that going to give me? Does that mean it, you're gonna prevent it from making bad decisions? That's not true, right? Yeah. You can see everything, but it doesn't mean that you can prevent it from making bad decisions.
Does that mean you can tell what is a bad decision versus a good decision? Like that's pretty valuable if you can do that. Yeah. But tell me how you do that. Because actually doing that may or may not require that level of visibility or not, right?
Ashish Rajan: Yeah.
Caleb Sima: So can you know like it's just, you like these things?
I think. Every vendor says first they, they sell me the wrong thing and even if they try to sell me the right thing, when you get into any of the weeds, it all falls apart.
Ashish Rajan: Yeah, and I guess to your point, when you say if it good decision was a bad decision as well, I was gonna add that, how would you have the context of the organization to begin with?
'cause a lot of these AI agent security companies expecting the customer to put the context into their machine or into their product so that their, their product can understand. What is [00:40:00] bad, what is good for you? And it goes back to the Slack credential I was talking about to that hypothetical product.
It's the same as why am I giving my organizations context onto a third party software?
Caleb Sima: Yeah. And I think that we are not there yet, right? It's almost like next year I feel these vendors and everyone will catch up to this. Okay? Like determining what is good or what is bad is not about me giving you all the data.
And is not about anomaly detection, right? Like there are, okay, there are better ways to do this and here is the way at which this should be done.
Ashish Rajan: Do you find that, uh, it's funny, as you mentioned the anomaly detection piece, but obviously last year we spoke about the fact that. The concern people walked into RSA was with that, hey, we are trying to show ROI.
If we are saying that, and I obviously agree to the point that there's a lot of gaps in the AI agent security conversation today. What is something that people are able to get? How far can we get in this? Like we, I, apart from operations and engineering, handle [00:41:00] this because I don't know, a security risk.
That comes up in terms of AI agent security, unless it's an identity data theoretical cover rolls up identity data. Yeah, those two, those two are top ones that come to that come to mind. Majority
Caleb Sima: authorized. Yeah. Authorization, right? Obviously. Yeah.
Ashish Rajan: Yeah. So again, identity, access, control, all that. But I think in terms of nothing, which is AI specific, so when people say AI threats.
Someone just tries selling you the idea that, hey, there's a, there's an increase in AI threats around the world, and if you try and double click on that, you realize that talking about AI bots primarily, there is no agent ai. At least that I know of Attack that has happened, which is fully autonomous outside of the research labs that people have been building.
Caleb Sima: Oh, no, I, I think that's, I I would lean a little, I, I would challenge it. I feel full on true AI agentic attacks are definitely in play, both from a, you, you know, again, looking at the, let's separate the two between, uh, you know, attacking AI weaknesses [00:42:00] Yeah. Versus using AI to attack, right? Yeah.
Ashish Rajan: Yeah. I'm talking about using AI to attack.
Yes.
Caleb Sima: Yeah, using AI to attack for sure, like that is getting to be fairly common. Um,
Ashish Rajan: oh, it's, it's going to be, but today as we stand, is
Caleb Sima: it I've, I've I, listen, I don't have all the right numbers. There are obviously the. Reports that OpenAIs and Anthropic's and all those have produced around this.
Yeah, a hundred percent Their active things that they've seen. Yeah. I can also say that, some companies have definitely seen some really interesting tactics that are clearly automated that are at a higher level of intellect than normal. It's not just like web scanning and then throwing a bunch of junk at a web application anymore.
Right. It's web scanning, intelligent enumeration, intelligent attack steps, uh, intelligent phishing emails, phish that are being sent phish en mass like that has occurred to some degree already, for sure.
Ashish Rajan: Yeah. Oh yeah, yeah. Spear phishing. [00:43:00] Yeah, a hundred percent agree on that. But I think actually,
Caleb Sima: you know, token credential usage and testing, that has been very AIed
Ashish Rajan: API keys.
Caleb Sima: Yes, API Keys have been very AI by attackers right now.
Ashish Rajan: I think I'm with you and I'll probably I don't know if you've read, there was a CrowdStrike report that came out recently about the time it takes from exposure to exploit. I, maybe that's, I dunno if that's what they call it, but that's how I understood it.
It was down to, it used to be, a few days or 40 days or 30 days. It's now down to, I think it was like less than a minute, or like from the point of like some of the attacks that they saw, and obviously they did not call out that as AI or agent AI attacks. They definitely said from the point of exposure to they were, it was down to minutes.
It was not any more sitting in days and there was no clear indication whether it was an genic AI attack. My point being so far in all the conversations and all the people that I've been helping and all the conversations, these CSO advisory boards, everyone seems to, at least in the [00:44:00] ecosystem that I've been talking to with the financial institution of the world, they definitely have increased amount of traffic on the internet in general for trying different things.
Caleb Sima: Yeah, I would suggest like, uh, actually I'm involved in a project with Sergei
Ashish Rajan: Oh yeah.
Caleb Sima: Um, called
Ashish Rajan: Zero.
Caleb Sima: Yeah. Zero day clock.com.
Ashish Rajan: Yes. Yeah. Yeah. That's a good one. Yeah.
Caleb Sima: Yeah.
Ashish Rajan: Yep. I think it definitely, wait, actually you should explain it so I think I know what it is, but you should explain it.
Caleb Sima: It's just, it's basically taking all the data around from vulnerability to exploitation or time to exploit measure. Yeah. Which is like, if you go and look at it, you can say, okay. Where it wa by the time a vulnerability was found to confirmed live exploitation, either by someone or published somewhere, like if you look, you know, through the, the years, like in, I'll just go back three years, 2023.
It took about, it was about five months.
Ashish Rajan: Mm.
Caleb Sima: Uh, now it's about one and a half days in 2026. Which is pretty spot on, [00:45:00] right? In fact, I posted a AI exploit on my LinkedIn the other day. Wait, let me go pull this up so I can explain it, which was a very good exploit. It was. Using the GitHub issue title, and let's see what they did.
So this was super, super cool. This company got exploited because they had an AI triage bot that basically when you submitted into their GitHub, their, their open source, it analyzed it and like did it. So they prompt injected it and it worked. And then they downloaded as their exploit, payload, open Claw, and then open claw got installed.
Onto their machine. And then it was like a command and control. And so here's what was really cool about it is going back to this zero day clock let me see the amount of time between the public blog post. So what happened is the original security person, they reported it, they gave them vulnerability [00:46:00] disclosure time.
No response. So then the security researcher published a blog post about it, and within two days, someone exploited that exact company using that exploit. Two days. Two days. This, this was a random researcher who just. Posted it, they didn't exploit the company. Some random attacker exploited the company using the exact exploit that was posted.
Ashish Rajan: Wow. I mean to, to your point, and I obviously, I'd love for people to just check that out as well. 'cause that's definitely a good one to, 'cause I think the idea is that over time the window would just keep reducing. More. And it's getting
Caleb Sima: shorter and shorter.
Ashish Rajan: Yeah. And it's very well documented now that it is actually happening.
So going back to like, so actually maybe the foundation of all of this is that maybe the people should look at this automation thing within their team. 'cause at the end of the day, a vendor, the respective who, who it is. They probably would get it from the same source as you do unless they can start creating, uh, what used to be in firewall world called the, [00:47:00] uh, patch free fixes or whatever they used to call it.
Like, Hey, maybe the patch isn't available, but we can still close that loop. Like that kind of capability would need to be in a security team. At least you should be able to automate that piece yourself, where you're able to identify something came out and you're just basically working with your automation team to use ai.
Because you have no idea what this language is or what this markdown file is doing, you go ahead and, uh, maybe patch that up or seal that access or whatever you wanna call it without, until you have a patch available. But bottom I sounds like we have a lot of gaps in agent security, which, uh, people would probably identify.
Uh, one thing that we spoke, we people should definitely look out for is the API capabilities. So they can actually have their own, hopefully security AI connect to them. Hopefully someone makes up open source. Project or something so that security team and teams can plug and play and connect it to any vendor that they want to, hopefully one day.
And third thing we also spoke about was the fact that in terms of capability, there is a big gap in observability at the moment, in understand the intent of what things [00:48:00] are doing and what funds will be security, which ones are not security, which ones are operational, not technical. And the final one, I think it was.
Those are the top three so far. I just started to summarize that everything we spoke about in terms of separating signal from the noise, having an understanding of what they can be doing, actually funny enough. I wonder, and feel free to drop a comment if you guys want. It'll be good to kind of do like a, I won't say informational, but more like a, Hey, if you're getting to know how you can use AI and security teams, and if you want that.
To be a workshop or something, maybe just drop a common security and we'll try and do a workshop for that. I think that'll be pretty cool. 'cause at least I wonder if, if just that having that workshop allows people to create their open source project that just plugs in place, like kinda like an open claw, but for security.
But although they, yeah, what you're
Caleb Sima: saying is. Here. You just integrate every security tool you can. Yeah. And you have an effective claude code engine that can run. Right?
Ashish Rajan: That's literally all you're doing is just explaining what you wanted to do. It already has the connections to the popular vendors that are in this [00:49:00] space.
Someone has already done the pre-job. I mean, even if they haven't done the job, it figures out itself that, Hey, I need to talk to the API of whatever Palo Alto, Crowdstrike like whatever. I just need credentials. Obviously you're in that same. Put your license in and everything, but that would be an interesting one for people to work on.
So if you want that, definitely drop a comment, security and can try and see what we can do. Alright. Uh, I we have an event ourselves as well. We're doing a live recording, uh, with the Decibel VC people. You and I are doing a Yeah, what we are gonna say is what we, what people would not talk about RSA is what the rough topic is.
Caleb Sima: We have a security founders, uh, event going on, but there's way too many that, that's way oversubscribed.
Ashish Rajan: So fair. Uh, well, I think I was gonna say, uh, I think outside of that live recording we have, I think we have a CSO event on a Sunday afternoon before a panel with, we have a panel with Nick Reva, who's the director of Security Engineering at DoorDash. We have the CISO, Instacart mj. We have Daniel Miessler and [00:50:00] Jason Haddix coming for a red teaming one.
So, I think I'll probably find out. Anyway, I'll, I'll post about LinkedIn as well so people know about it. Yeah, that's the top mind. Awesome. All right. Thanks everyone for tuning in and again, drop a comment, security if you want us to do a workshop, but I'll talk to you.
Thanks, peace. Thank you for watching all listening to that episode of AI Security Podcast. This is brought to you by Tech riot.io. If you want to hear or watch more episodes of AI security, check that out on ai security podcast.com. And in case you're interested in learning more about cloud security, you should check out a sister podcast called Cloud Security Podcast, which is available on Cloud Security Podcast.
Do tv. Thank you for tuning in, and I'll see you in the next episode. Peace.






.jpg)

.jpg)


.jpg)
.jpg)

.png)






