Did Anthropic just kill the AppSec industry? Following the announcement of Claude Code Security, a tool that finds, reasons about, and fixes code vulnerabilities, major security stocks dropped by 8% .In this episode of the AI Security Podcast, Ashish and Caleb break down the reality behind the hype. Caleb explains why using AI for SAST (Static Application Security Testing) is "a no-brainer," noting that many open-source projects and startups have already been doing exactly what Anthropic announced . We discuss why this actually validates the shift toward AI-automated remediation.The conversation goes deeper into the future of the cybersecurity market: Will giant foundation models start acquiring security companies? Will they offer "premium gas" (cheaper tokens) for building on their platforms? And most importantly, what does this mean for AppSec engineers whose jobs involve triaging false positives?
Questions asked:
00:00 Introduction: The Claude Code Security Announcement
02:50 What is Claude Code Security? (Finding & Reasoning about VULNs)
03:50 Market Overreaction: Why Security Stocks Dropped 8%
05:10 Why AI-Powered SAST is Not New (OpenAI & Open Source doing it already)
07:20 Will AI Take AppSec Jobs? (Triaging False Positives)
09:00 "Shift Left" on Steroids: Auto-Fixing and PR Submission
11:30 The Threat to Legacy Vendors: Why CrowdStrike's Moat is Safe
14:30 Historical Context: AI is the New Calculator/Typewriter
18:20 The "Gasoline" Theory: Foundation Models as Fuel
21:00 Will Anthropic Acquire Security Startups?
26:30 Anthropic's Go-To-Market Strategy: Building AI SOCs
33:30 Startup Survival: Can Innovation Outpace Big Tech?
41:30 The Future of Threat Intel: Is the Legacy Moat Disappearing?
48:20 Negotiating with Vendors using AI Leverage
53:30 Using Evals for Organizational Anomaly Detection
Ashish Rajan: [00:00:00] The announcement of Claude Code Security have shaken up the cyber security market.
Caleb Sima: All the security stocks dropped by like 8% using AI to do source code analysis. It's a no brainer. But now ai, it's now up the low hanging fruit and it's eating those jobs
Ashish Rajan: not only just picks up the fact that, hey, I have 25,000 vulnerabilities, but I also goes through the exercise of reasoning with itself to identify which one of these 25,000 is actually a false positive.
And the foundation model itself is doing security for you. What is the need of a vendor?
Caleb Sima: Think of tokens as fuel gasoline. Foundational models right now is almost fuel, and what's happening now is every company is effectively needs fuel. I don't want to be another startup in a niche solution trying to fight these big guys when these internal companies are just using AI to compensate for the [00:01:00] gaps.
Ashish Rajan: Is this the end of cybersecurity as we know it? That is the one question that most people kept asking when Claude Code security was announced by in Anthropic. So in this episode of AI Security podcast, Caleb and I go into this entire ecosystem of why is the cybersecurity market not a fragile industry? That just gets, I'm not gonna talk about the stock market, but at least overall on the first principle basis, there's a lot more to cybersecurity than just a Claude Code security announcement, just wiping the entire AppSec field.
I know it may sound extreme, but in this particular episode we talk about the reality of this announcement and have we already solved this using AI and what does this mean for the broader industry, especially the startup ecosystem that may actually be fully dedicated themselves, probably months and years of money and research into solving problems that may eventually be solved by foundational models.
All that and a lot more in this episode of AI Security Podcast. If you are here for a second or third time and have been enjoying the episodes of AI Security Podcast, [00:02:00] it would really mean a lot. If you take a quick second to drop in the subscriber follow button, whichever podcast platform, yet you are listening or watching this on, we are on Apple, Spotify, YouTube, and LinkedIn, and as always, I also look forward to your thoughtful comments on the episodes, what your own thoughts are about this particular topic.
Looking forward to responding to your comments there as well. Enjoy the episode. Peace. Hello and welcome to another episode of AI Security Podcast. Today we are starting with something that seems to have shaken up the cybersecurity market, specifically the announcement of Claude Code security. So for people who have not come across this news yet, as of last week, and tropic release, something called, uh, Claude Code Security, which allows you to do a scan of your entire code base if you want to, for vulnerabilities, that would typically be from a SAST or static code analysis or, uh, something like a SCA as well.
But the part which is interesting and got people's attention is the fact that it understand, it not only just picks up the fact that, hey, I have [00:03:00] 25,000 vulnerabilities that I've identified, which used to the cases standard SaaS tools, but also goes through the exercise of reasoning with itself to identify which one of these 25,000 is actually a false positive versus which one is not.
And it, it builds a dashboard from the security vulnerability. It thinks it requires attention. And by the way, I say all that this is not live. It's still research preview mode. It's not even out. But people have already started going up in arms about this. There are blog articles coming from all the AppSec people for, Hey, we still exist and blah, blah, blah.
But I can see you already nodding, Caleb. So it's
Caleb Sima: just you start with your thoughts before I, I just get so just, this is just such an absurd, ridiculous, like Yeah, but,
and then like all the security stocks dropped by like 8%.
Ashish Rajan: Yep.
Caleb Sima: And you know what I did? I went and I bought a whole bunch immediately. Like [00:04:00] good
Ashish Rajan: idea. I shoulda
Caleb Sima: it, it like CrowdStrike dropped by, I don't know, 9% yep. On the day that, Anthropic announced a SAST product, which is not even really.
By the way, I would like to note, okay, let me, let me back up just because it's hard for me to get over my just okay, so lemme back up. So first of all, using AI LLM specifically to do SaaS scanning is not new at all. And in fact there are lots of open source projects about 30 damn cybersecurity startup companies all basically doing the exact same thing mm-hmm.
That Anthropic announced in Claude Code security. Literally the exact. Now do they do it at the same level of quality? I would argue some probably do it better than Claude Code Security probably does. I would argue many probably do not. Either way, it's in variance [00:05:00] of basically stating that using AI to do source code analysis, which is what Claude Code security is not only a given.
It's a no-brainer, right? Yeah. Like traditional, you know, ways at which we think about things can easily be automated with AI in sast. So identifying a buffer overflow, a SQL injection, a issue validating that and ensuring that this is most likely not a false positive, easily done, quote unquote, many open source projects already do it with ai.
Many vendor startups do it with ai. And by the way, I'd also like to note open ai. Matt Knight built an entire version of this exact thing at OpenAI, and he's been going around at security conferences talking about it for months, right around his project, around doing this exact same thing as well. I guess they didn't, I even, I feel like OpenAI may have done an announcement on it and maybe, [00:06:00] maybe the, it wasn't, they didn't do enough of an announcement on it.
No, but like, this is not new by any means.
Ashish Rajan: I, I think too add your maybe they did not get a lot of attention also because Claude code in general has been quite popular on the, in. Yeah, and I some even classify it to be better than the older version of Codex. And the current version is still up for I guess question for whether it's better.
Maybe that's why it got picked up by a lot of other people for, hey, maybe this is, this is the next thing and it's gonna change the world. People didn't even bother to look if it's in research preview. People didn't even bother to look at what is doing. But the fact that I think what shook the PE push shook people and maybe people in an AI summary, if you think about the bullet points that scared people, was the fact that there's a dashboard, which is, by the way, most product companies just have a dashboard for me to ab as an AppSec person to look at it reasoned with itself for false positives.
And that two things made me people go, wait, that's my actual job. And I think that's where the fear people, it started from the ground [00:07:00] source where people who were initially, and there is, I do wanna call out, there's obviously a huge fear around the AI taking my job. So anytime you mention that, suddenly people just raise their arms.
And I think my theory is that. Overall, this is a good news. Even if there was no open source projects before that, were solving this. If the foundational models start solving this problem for us, and for people who've done AppSec, they probably would know this, a lot of people in that space, at least a lot of my teams had to fight tooth and nail with developers to get that, get those mediums and highs.
Hey, could you please do this? Hey, could you please do that? Or whatever the thing used to be. And a lot of that job used to be going through false positives and identifying what can I give forward, which top three can I give forward to a developer? Yeah. If you think that's the crux of the, a lot of the jobs, right?
Caleb Sima: And, and let's not forget it goes, these projects and startup companies go further than that too. [00:08:00] They not only reason about what is a false positive and what is not, but I also go ahead and write the fix and the solution and then submit remedi,
Ashish Rajan: submit
Caleb Sima: it as a PR for engineering to accept. Yeah.
You know, it's not just finding, it's fixing. Which, you know, when you look in. The general concept of things moving forward. If, quote unquote, 95% of code is AI written already with, with an AI agent that identifies the security analysis part, submits it, does it, it just becomes an automated cycle.
Hundred percent, right? Yeah. That you don't need a human to be left in as an evolution, which is great, right?
Ashish Rajan: Yeah, a hundred percent. I think, wasn't that the whole i, the sales pitch with DevSecOps that yeah, we, we should go as left as possible, as left as possible. Now when we are left as possible, people scared, they're like, Hey, that's too left.
I don't want that to be there. I don't want it be my IDE.
Caleb Sima: We're, there's even a more left, [00:09:00] by the way, which is a left on design and architecture. Yeah. Where you can then use AI to be able to say, Hey, if you're building these components, these are the primitives and principles that I require from a cyber perspective.
Ensure you keep these and then your security agent will also validate that, and then as the AI codes it, it will build those primitives, uh, and principles into their coverage.
Ashish Rajan: That's a good point. I, I, and maybe, uh, to, to your point, may, shifting left also means that cybersecurity as an industry needs to be by default built in.
I use the example of Waymo quite a bit for this, where what people call full agency where a Waymo has to make real time decisions on whether I take a left, right, stop, whatever the. The safety or uh, safety and trust mechanism that exists. There has to happen in a few milliseconds. Cannot be, I'm gonna go to a human to, uh, come back with information.
But if there are [00:10:00] processes for it to contact a human and them to take over or whatever the needs to be. So if we were to move into a fully AI, agentic AI world, uh, most of security, especially the low and medium ones, I would think would have to be inbuilt, then we probably would just focus primarily on business logic flaws, complex attack path.
How many five or six components are built together. Like that would become the true mode for people who clearly have been doing this for a long time as well. Pen test is do it. Red team is do it apps. I people understand it. I think I look at this as like, oh, now we can focus on those things. Instead of trying to worry about 15,000 false positives.
Hey, which five do I care about? Exposure and everything else that people talk about.
Caleb Sima: Correct. Yes. I, I agree with you there as well. When what people don't realize is a lot of the vulnerabilities, I would say most of the vulnerabilities exploited today are at runtime in Yeah. The way things are configured and the way business logic is [00:11:00] put together and all of the, and the mistakes or the, you know, the differences between V ones, V twos and v threes of web apps cause security gaps, right?
Yeah. Now, so those things are the next, you know, level fruit to go
Ashish Rajan: That's right. And eat
Caleb Sima: right. Yeah. And so, and again, what people then say, well look at AI and look where AI's going. And so I think this is where the fear of the announcement of, of Claude code security, I think maybe comes from and why people sold security stocks is they think, well, if AI can do this now.
AI can now look at a general AppSec engineer and take the SAS tooling, which was our, I wanna call this the v one of our automation of low hanging fruit, right? Yeah. It, it finds the VULs, but doesn't necessarily have the reasoning to logic, false positive versus not, or verify or produce fixes. But now ai.
And Anthropic specifically is saying, we're just building this. We're just, [00:12:00] you know, releasing this for free. Yeah. It's now up the low hanging fruit and it's eating those jobs, and they're saying in the next two years, is it now, is it gonna apply itself to anything requiring vulnerability, triage or otherwise?
So now in CrowdStrike, CrowdStrike identifies vulnerabilities, remediates these on hosts and endpoints. Therefore, AI is just going to eat that too, which means CrowdStrike is going to devalue. Like, I think that's their fear, is somehow it's gonna to eat everything in the security market. Which is like, look at, like you and I both know, like, come on, like for, like, no one's gonna rip out CrowdStrike due to this SAST tool Anthropic published, but like, okay, they're saying, well, AI is going to eat into CrowdStrike.
And the thing is, is like CrowdStrike has a moat that is so big, right? Their distribution, their [00:13:00] data that they have access to their execution capability. And let's not forget, CrowdStrike is moving just as fast in ai, right? In what they're doing. And there's no way, it's like basically saying, we're gonna remove Apple and Microsoft, and that's because AI is just gonna take it over.
Hey, listen, maybe in 50 years maybe you're right, but like in like the next five, 10 years, like, I find that very hard to believe.
Ashish Rajan: Yeah.
I, I, I would, I would add, uh, something I read and I think applies here. Uh, when OpenAI search was ranking higher than Google search for a brief period, everyone had started dropping Google stocks, I think, and Google obviously was.
People were saying, Hey, this is the end of Google. People are gonna use, and obviously it's a, uh, moment reaction, momentary reaction from people. I think what some of the people who had been looking into the space for a while, what they called out is the fact that the size of the pie is bigger. Now there'll be a group of people.
It's not that [00:14:00] population has ended, right? There is, we are still all here. We're still gonna keep searching. Some of us would use LLM, some of us just still go to Google. The same happened with the cloud as well. When AWS Azure, Google Cloud, those people, they became the foundation when they came up with security products, no one dropped.
The cybersecurity vendors, these people made. We have, uh, Wiz as an example that's been acquired by Google. Now, whatever happens to that after they fully acquired it and all, whatever happens after that. Irrespective all these companies did really well because by default, the idea is that you have to be a general enough tool to be available for everyone.
Caleb Sima: Yeah, and, and I mean, listen, like, you know, I have my own fear of AI's threat to jobs as everyone else, but as we have seen every single time in our history where new technology has occurred. That has been their fear. And that has been, and the what happened was the exact opposite. To your point, Ashish jobs [00:15:00] grew, people grew, they had new types of opportunities that got created.
You know, like if you think back on the times when calculators first came out and the typewriter first came out, and people are like, oh my gosh, all these people are lose their jobs. All these mathematicians, all these people writing and transcribing and like, yeah, this is, and then you just look at, okay, how many more jobs do we have now with due to technology than we did then?
Like, yeah, it's just, just shifts. And so,
Ashish Rajan: yeah.
Caleb Sima: Anyways, this whole Claude Code thing, I just, when I first saw this, it's just, you know what? It, it just it just baffles me. I have so many things about who is trading on this. Mm-hmm. Like this just, just goes back to the stock market belongs to the consumer.
And, and the consumer is not educated. And the, and it just goes off of whims. I feel this is the thing, and I could, and listen, I'll stand here and then say maybe I'm wrong. Maybe there are way [00:16:00] smarter people who are making these moves and AI will eat CrowdStrike in the next three years. I don't know, but like, it's just, as soon as I saw this, I immediately was like, okay, is it gonna keep going down?
I'm buying, like, I just, I went, I told, and I rarely day trade, by the way.
Ashish Rajan: Yeah, yeah, yeah. I mean, same. I don't day trade much. I
Caleb Sima: day traded this. I was like, I'm buying.
Ashish Rajan: Yeah. I'm with you. I would also say something that has not been called out, you know, to, to what you said, maybe to, to what you said, maybe we might be wrong in the next three years, but for it to completely take over, we are talking a massive levels of adoption, massive levels of change in people's viewpoint on.
How much can a foundation model do? Uh, this, obviously, I'm, I'm to your point, both you and I have the shared fear of what this means for the job market in general, what AI means for the job market. But another ecosystem that we have not touched on much maybe not on the podcast yet, is so the whole startup ecosystem that [00:17:00] came across that is building the whole AI security space.
Like, you know, people who have been welding down the path of, Hey, I'm gonna build the best AI powered AppSec that I'm sure there's someone out there trying to do that because the narration, the narration is anyone who's generating or creating AI companies post gen AI becoming what it is today, are the true AI native as they were to call themselves.
They have machine learning people. All those people so do, by the way, the big, big guns. But hey, let's just focus on these ones. There, there is a narration for if. There was a say, a wiz Google moment where w acquire VIS is acquired by Google for the CAPP capability for cloud security. What does this mean for the AI security space?
And maybe a good way to divide this could be, there's this whole security for ai, AI for security. I'm curious too, 'cause obviously you are in the VC space as well. I'm curious to a, what's a good place to start this conversation, uh, that you would later lay, lay this out. What are your thoughts on this?
Caleb Sima: [00:18:00] It's what you're saying is sort of, how does AI and the foundational model companies change the landscape of cybersecurity startups or companies?
Ashish Rajan: And maybe some, would, would the foundational models acquire, I don't know, CrowdStrike, maybe tomorrow?
Caleb Sima: Yeah you know, it's a, it's a really interesting question in the fact that.
What, gosh, there's a way that I sort of wanna communicate this. The way I think about foundational models right now is they are almost fuel, right? Think of tokens as fuel gasoline, right? And what's happening now is every company is effectively needs fuel. They need tokens from these models in order to operate.
Yeah. And what they're, which is obviously growing massive, the foundational muscles are growing massive because of it. Because now they are the fuel source, [00:19:00] right? They are the ones who provide gas to every company that is coming out. And so to them, what I think is starting to happen is, are they saying that as soon as philAnthropic and AI move vertically.
Where they start saying, it's the equivalent of gas companies saying, well, I'll be an automobile company and what I'm going to do is when I build an automobile and I sell automobiles, you won't have to pay for gas, or your gas is gonna be half the cost or a third of the cost in your car. And so, if they start going into these verticals where these kinds of deals start making sense, or they're even saying in, in an in AI's, uh, aspect, I think it's even more scary in the sense that they can say, Hey, in car it, like I can give you a model, but I'm gonna give you a lesser model.
[00:20:00] For everybody else, but for my vertical, I'm not only gonna provide cheaper consumption of tokens, but also a smarter model that only is then enabled by my vertical solution. And that becomes a really scary thought in the sense of sort of this, you know, how does AI foundational models take over a market, right?
Mm-hmm. Because if gas companies decided, Hey, we're only gonna allow non-premium gas in cars that are not our vehicles, and then we'll give you premium gas at lower, lower cost for anyone who buys our vehicles. Yeah. Like, you can start seeing where this becomes very impactful in this. And so when I think about, I, so this is my generic thinking, right?
Yeah. And if I wanna apply this then to cyber, right? Yeah. I don't feel like cyber's big enough. For the Anthropics and open ais, if they're gonna make these [00:21:00] kinds of moves vertically, they're going to go eat Salesforce, they're going to go eat mega oracles, you know, like they wanna go eat mega verticals that I think, you know, produce the level of revenue, like
Ashish Rajan: $300
Caleb Sima: Yes.
And market share that matters. Whiz sales
Ashish Rajan: marketing are two big ones.
Caleb Sima: Yeah. Wiz is a small drop in a bucket that just doesn't seem to be worth the effort if they were to go eat. Like, it just doesn't make cyber is too small of a market to go and do that,
Ashish Rajan: which is a good point as well. And obviously until I moved into this content marketing world, I didn't realize.
The market share of cybersecurity is actually quite small. I obviously, uh, when you're a cso you know, you care about your job and the securities company. You're not thinking much wider. But coming onto the space, talking to the startups talking to the advisory, I definitely have realized there's obviously, we are a subset of engineering.[00:22:00]
Like if you were to think we are, we're not even engineering. Engineering is the money is, we get the triplets from there, depending on the organization for, Hey, by the way, go buy whatever you wanna buy based on, yeah, 50% or 60% went to engineering, we get 40 20 or whatever the sliced a pieces between us and the corporate security piece, or whatever you wanna call it.
I'm, I'm in agreeance with you that it is not a big enough market, but it is definitely something that. Divides the market though I'll go back to the cloud example. When Amazon came up with the security products that they did, there was a huge subset of SMBs, small to medium side businesses who did not have the, a budget to afford, say, a, an expensive cybersecurity product.
But they did wanna take that SOC tool box or ISO box or whatever you wanna call as a box so that I can get the enterprise customer. They all went by default to AWS and I, I love the gas analogy used by the way, because that fine enough translates really well here. The gas analogy over [00:23:00] there was the AWS credits, Azure credits.
So, hey, you are a startup. Let me give you a, I don't know, a $500,000 check or credit when in their check equals a WS credit. Let me give you a $500,000 of a WS credit as long as you use my infrastructure to build what you're building.
Caleb Sima: Correct? Yeah.
Ashish Rajan: And to that transfer really well to what you said.
I'll give you gas for cheaper if you build everything that you're building on my thing, and again, I don't wanna create a rumor mill, but apparently the re the, some of those startups, some tend to just become a products, I don't know whether it's an acquisition or whatever has happens, but to what you said, maybe cybersecurity may not be a big enough market, but maybe there's a potential that if Entropic does continue building more cybersecurity components, it does divide the market a bit more.
Where the SMB markets or any new startups coming up who are quote unquote AI native, ultimately Entropic may say, Hey, we are starting a foundation. If you build a startup [00:24:00] on Tropic will give you credits on tokens. And
Caleb Sima: yeah, I think y you know, like when you think about acquisitions, there's generally two reasons for a cyber acquisition.
One, it's to solve an internal problem. Or two, it's to offer it as a product or a service alongside, you know, similar to what your example is, is, a w why is AWS offering security? Because in the overall scheme of things, a W S's goal is to be a one-stop shop for everything.
Ashish Rajan: That's
Caleb Sima: right. They want a company to be able to run independently using only AWS products and be able to solve those problems.
And security, as we all know, is cross verticals and there's a foundational layer on top of infrastructure Security needs to be part of that. And so buying security or offering security services or products is absolutely a no-brainer in the sense of accomplishing that goal. So then you kind of think about, [00:25:00] well, what is the goal of Anthropic or the goal of open ai?
I don't really know what that is. Right. Like everyone, if you ask an average person is to get a GI. If you ask them a GI, but like, I don't know what the goal of what they are doing is. You know what they are definitely actively doing is they're fueling clearly the next generation of software. And business. And in that case I feel like unless they run into huge problems in the enterprise, deploying their models because of a security problem, then they would not, like they would acquire something there. They would acquire something to help their sales motion be better and faster and cleaner.
They would acquire something to help them internally manage the set of threats that they have, because it's cheaper to buy it. Than it is to be a customer of it. And because people like [00:26:00] Anthropic and OpenAI have very unique sets of requirements.
Ashish Rajan: Yep.
Caleb Sima: It makes more sense to buy it internally and then use it there versus externally.
You know, like that, that's at least my, you know, off the cuff thinking on it.
Ashish Rajan: I mean, no, no, you're, you're on the money, uh, because this is actually fine enough. I came across, I can't remember whose post was it, but someone, uh, made a post about this. I, I think this is Nick Fisher. She, he works. Or he's a VP of marketing somewhere.
I can't remember the name of the company. But essentially that was a, uh, a screenshot of a job ad at Claude, uh, at Anthropic for an account executive Strategic Account executive, which is basically a sales job. But what was interesting, and I, I'm gonna read it, read this out here you will drive a multi-billion dollar commitment by expanding cloud's footprint across customer product portfolios like AI powered soc, threat detection, security copilots, and internal operations [00:27:00] Claude Code for security teams to, and that ties in really well to what you said if customer acquisition required them to have an AI powered soc threat detection, security co-pilot, hey, things that enable their internal SOC teams.
I know so many companies that have Anthropic integrated into, into their software engineering team to build something for them. I, which is kind of quote unquote consulting. All ties back to how do I be, be more sticky with this big ass customer so that they use me for everything. OneStop shop for compute, tokens, security, whatever else.
And I, and I don't know if there, are there any companies in the current space that would be an interesting problem for a foundational model to acquire? You don't have to name name companies if you don't want to, but I'm just curious in terms of problems that could be a potential for a foundational model to acquire.
Caleb Sima: You know, when I think about this, then it [00:28:00] again, what is the goal? You know, the goal to me if I'm in the cyber position is that security and privacy's only job here is to enable a faster adoption of my model.
Ashish Rajan: Yeah.
Caleb Sima: And so to me, whatever that is the market is requiring of me to do is what I will acquire in my stack and today, from a consumer perspective, I don't think there's anything, I don't think there's any security, privacy problem on a consumer basis.
They don't have that problem. People aren't adopting their models due to a security problem. They're adopting their models due to the way it works in practicality, how effective it is, how smart it is. Right? There is some tiny bit, we've talked about this in previous podcasts, about the safety barriers and models and how annoying they can or cannot be, [00:29:00] right?
Ashish Rajan: Yeah.
Caleb Sima: So then it, we know on the consumer side, I don't think safety and security per se are any barriers of adoption for any model. So then it goes to the enterprise side. How important is it for these models to be embedded in the enterprise? That, I don't know. I don't, it doesn't feel like you don't hear a lot about Anthropic or open AI and their enterprise plays, minus the fact that they offer offerings with AWS or Microsoft that you can have internal models, self-hosted.
You don't hear a lot about, well, what barriers do they run into, or what do they need to win those deals? Are they security tri privacy, trust problems, and do they need something there to enable a faster, better? Sort of adoption to enterprises and I don't know the answer to that, and, but to me that's the only reason where this would apply.
[00:30:00] So how do you name companies that do that? You know, you can name companies like Noma and say, well, Noma has been focused on enterprise AI model adoption and rollout, right On the developer productivity on the developer side. Well, what about the corporate employee side? What about forcing all corporate employees to use Anthropic versus open ai?
Is there something I could do there if I offer products like Harmonic that says, here's all the people that look at your, use your AI and use open AI versus Anthropic. Does that level of visibility build better adoption for me as Anthropic? Probably not, I don't know. You know, it's just like, how far do you go in order to say, I ease transition, or what are the barriers to do that?
And honestly, if I'm Anthropic, I wouldn't waste my time. I would be focusing more on malicious use of my models, protecting my models, focusing internal on [00:31:00] models. And the only thing that matters that we're the rest of Anthropic is focused is how do I enable faster, easier adoption by software, by companies to use this?
And if I want to go eat a vertical, how do I go eat legal, go eat Oracle, go eat Salesforce, you know, like these ServiceNow, like yeah, if I were gonna build a vertical, like, you know, these are the kinds of things I feel I would go do.
Ashish Rajan: Yeah, I and I, I am, I'm with you on this. 'cause I think as you said that the one thought that came to mind was.
A we were already established that cybersecurity is not that big a market. But I was also thinking that the way people are positioning themselves today, or when I say people, I mean cybersecurity companies that are coming up these days, a lot of us are putting bets on the fact that this problem that they are trying to solve would still exist in five years.
And would just, I mean, I don't even know what fibers would look like. So we are all assuming that the, um, to, to your point [00:32:00] about the open source security vulnerability would not be self, uh, self. Remediated by the, of the foundation model. They just by default would not recommend it. If it's learning by itself after enough data points, I would think it should happen.
But hey, respective, like that's the AppSec market. Then there is your building market or who's gonna build the infrastructure. There's already quite a few players in there as well, but I don't know if they also decide to put infrastructure security or cloud security in there. What's the point of a C app at that point in time?
But a lot of the focus today for at least a marketing seems to be around the fact that, hey, AI is producing bad code. So you need my AppSec tier I, you need my AppSec tier or the other tier is you already have a lot of false positive. Let me use AI to solve or reduce a lot of that so you can prioritize better.
And there's another approach, Hey, how about you go zero trust. Let's just use, let's just use zero trust. Everything that way doesn't really matter what Ashish tomorrow down the line thinks of adding as a new AI tool. You [00:33:00] can pick that up, you can block that, you can get the data flow, all of that tho, those are the three bigger categories.
And obviously we can go into each one of the verticals, like vulnerable team management, all of that as well. But a lot of them are centered around the fact that it is an, it is a CSOs today's problem and the next six months problem. Whereas to what you said and dropping and open ai, they're not looking at six months one for them.
It's like, hey, what's, what does this mean the next decade that I'm still here in the next decade? And it's, that's not cybersecurity.
Caleb Sima: Yeah. I, I, and you know, it's, you know what I'm worried about? I'm more worried, I'm less worried about, let's say, foundational models, acquiring or getting into the cyber vertical.
And what I am more worried about, especially as a VC in this space now, is that why did startups exist? Startups existed because the ability to innovate and execute quickly dies in big companies. And [00:34:00] what maybe I'm worried about, and you know, to some extent, you know, there's, there's lots of debates here, Ashish, so you can push me on this, but like, is the value now held in big companies, right?
Big companies have distribution capability and does AI now allow them to innovate and execute way faster than they could before? Which means the timeline that a startup had to be able to build a product, marketed it, get it, get a bunch of customers on board to prove its value, you know, the three to five years it takes to get on that road so that then the existing big company can go, buy that value.
Ashish Rajan: Mm-hmm.
Caleb Sima: Like now can a big company just build that in six months and use their distribution already and their platform and their data? Lake that they already have already to go and reach that. So does innovation and [00:35:00] execution really now no longer die in large companies, right? Actually you have more context, more data, more reach to be able to do that.
And does that hurt startups tremendously in an age, by the way, which we're in now, where there are more startups than ever and AI has created the ability of more startups than ever, right? Yeah. Because everyone says, look at all the things I could do, but I sort of wonder a little bit, Google's, Microsoft's, all of these guys, you know, like they don't need all of these startups to go down this track anymore.
If they can go now and then start innovating and building this internally. Now listen, like I can argue against myself on this, but I, you know, I kind of worry somewhat on
Ashish Rajan: that. Yeah. I mean, I was gonna say, I mean, I was gonna to do what you said. I was gonna definitely say, going back to what I was saying earlier, the pie just gets bigger.
You know, we talk, we spoke with a typewriter calculator example as well. Like what meant ba like, so my dad was a typist back in the day, and [00:36:00] I think, whatever, I think it's called shorthand or whatever that was called. Uh, yeah,
Caleb Sima: yeah, yeah.
Ashish Rajan: And after a while he had to, he was one of those people who was impacted by the short the advent of computers and all of that.
Yeah. Uh, but then the reality is what that did, uh, moving from typewriter computers was that it made everyone knowledgeable in it. Every, everyone who was open to learn it became knowledgeable in it.
Some of them were colleagues of my dad probably who moved from being a typist on to becoming a computer person, whatever, typing computers. But I definitely feel startup ecosystem being bigger is a good thing just because. There's no practical way a bank would know all the bank problems that me as an individual customer could have today with AI world, we, they just, I mean, in, so in an AI world, what those problems are, I have no idea.
Like I, even
Caleb Sima: in an AI world, I would imagine you could just ask, right? Yeah. In an AI world, I could say, what are [00:37:00] the, based on your analysis of the context of where we are as a company, what are the five biggest things that we should innovate and build solutions for? And then it'll just tell you, and then you just say, okay, I need to spin off, you know, five teams to go and work on these projects.
And it's a team of like five people com combine with a team of 300 agents to go and build a working solution for each of these, right? And then like, okay, where does the startup fit? Is the startup then, like where does it go? The startup is in a very crappy position because actually the data, the reach.
The ca, the compute execution capability, all of that is, is into the big player. And if a big player can just do that, you know, I don't like, it's, it's just hard.
Ashish Rajan: Big players can definitely have their first mover advantage 'cause they will have the distribution and the capital to spend on it. So I, I, I'm with you on that front, but how many would actually go down that path is only time would tell.
But I mean, but to what you said, and I'll, I'll [00:38:00] probably, uh, double down on the fact that innovation does go there to slow down and probably eventually disappear. 'cause after a while, I'm just happy with my, and this may sound rude to some people, but I'm just happy with my paycheck. Innovation sounds exciting until it comes to my job.
So I'm just happy to get that paycheck and just continue doing what my minimum requirement for nine to five is 4 49. I'm out of that door. I'm like, I don't care at that point in time. So unfortunately Enterprise does create that kind of people as well. And I'm not saying all of them are like that, but there's definitely a variant of people who would be just that.
And maybe they are the ones who get disrupted 'cause to your point, we are assuming the large companies are only going to innovate. They may even lean down quite a bit. And then we'll have another ecosystem of all these people who potentially lose their jobs starting AI companies 'cause they are experts in, I don't know, like something really specific that they have done really well for years and AI gives 'em the capability now.
We are, [00:39:00] uh, at least for me, I don't truly believe that the true problem all these startups and large enterprises would have distribution. How do I get to enough people, which is where Anthropic and open AI trying to just do everything under their power to get as much distribution as possible. They've gone to country like third world countries to expand their market reach and going, okay, where else can I go to be the only provider of AI for these group of people or these ethnicity or whatever you wanna call them.
It. One is distribution. And the other one I, I truly believe is the fact that because we are moving with the AI slop world in parallel, where now more people are craving real world things. They want communities in-person things. So I truly believe the, the people who get access to building those smaller communities, like where we are building with the podcast and everything else, that's truly where those data points would never reach a large company.
They would just, they for they, they have never reached it. I like, I think I would never be honest [00:40:00] about what I truly think about a company, uh, if I'm on their payroll. I just can't. Right. It just, it's just one of those things like I would always, yeah, yeah, sure. We'll do this. I may hate my boss. Not that I hate, hated mine, but in the future if I'm, I, I'm like, oh, wait, I can't be honest because my paycheck depends on this, or my stocks depend on this.
I want my stocks so I can buy my house or whatever. And it sounds very straightforward and rude but I truly believe that there is definitely a future for startups as long as they are open to move with the AI field which is hard without capital. That's where my,
Caleb Sima: your, with your point being is that in the world of ai, their drives and incentive to create data islands, right.
Where AI does not have access to it.
Ashish Rajan: Yeah.
Caleb Sima: Um, yeah.
Ashish Rajan: It already
Caleb Sima: happens. Yeah. Where communities get together and create these data islands where technologies get together, create data, because this becomes a moat. For them in [00:41:00] that that's right in that area.
Ashish Rajan: Because to your point, there is no way I can have CrowdStrike data.
But if I, if I get enough SOC people together, I would know exactly what's wrong with CrowdStrike.
And I, or
Caleb Sima: going back to CrowdStrike is they will hold onto that data.
Ashish Rajan: Yeah.
Caleb Sima: Very, very carefully now, right?
Ashish Rajan: That's right. That because that is the only moat they have. Once that moat is out, there's nothing for anyone else to not replicate that or even access that information.
I'm sure they're preparing for it in their ways and so are Palo Altos and everyone else. 'cause the advantage that these people who have been what I call dinosaur companies like the Palo Altos of the world, ZScalers of the world, your everyone who's publicly listed and is be there for a while. The only mode they had is, Hey, look at the large threat intel team we have.
We've run 50 years of research, or 50 is too much. Probably whatever the number of years of threat intel research analysis of this. We have a real time threat research team. All that mo if that's the, that's your quote unquote moat. [00:42:00] Once that's out and you can probably, Anthropic can do that for you or anyone else can do that for you.
Why do I need to pay for a threat intel feed? I don't know, like that people would start asking questions, especially when most people are being questioned about, Hey, how do I justify? I don't know if you're saying this, but a lot of CSOs that I've been talking to in the advisory and otherwise a lot of them have the focus of reducing cost overall moving forward last year, this year and even more next year.
And my hypothesis there is that because the engineering the top of the funnel as I was calling it, where we get our money, they are being asked to spend more on AI. So they have to go justify cut costs, renegotiate contract and whatever the thing may be, which means that now I am into this, in, into back to that startup land where I would go for these startups.
I would go for someone who would gimme a deal because I can't afford the big CrowdStrikes of the world.
Caleb Sima: Or, uh, [00:43:00] you just hire the smart guys. I mean, I'm talking, I like, just the other day was talking to someone who is running, you know, isn't a fairly decent sized company and their security team is a size of six and they have agents out the wazoo.
Um,
Ashish Rajan: right.
Caleb Sima: And his plan is not to hire a lot. Like, he's gonna hire maybe one, maybe two maximum. Uh, but he is like, normally we would need 30 person team in this company for this. And we're doing not only just fine with six, but we are doing it better than what we would've had if we had 30. And so, you know, there's definitely, and they have no, they have some vendor products obviously, but this is mostly due to CLO code.
Right. And so, you know, that is definitely gonna be an interesting change in the internal security [00:44:00] teams. And I just feel honestly like these point solution security vendor startups, things has got to die. Mm-hmm. I feel like it's almost better. She like, here's what I'm thing. Like what if I just committed everything in my organization?
I'm just became a straight Palo Alto customer. Everything I bought. Across the board is a Palo Alto solution, and the way I bridge the gaps is through Claude Code, right? And then I have one vendor who I can really push hard on cheaper costs as a full dedication and use Palo Alto as the distribution points in my network and in my enterprise to get the data, to get their results, but the gaps and the research and the digging, then just have Claude Code or AI agents do that work [00:45:00] for me.
Ashish Rajan: Yeah.
Caleb Sima: And will I not probably get better than trying to cobble together 30 different, unique value pointed solutions or not?
Ashish Rajan: Yeah, I mean, I, I, I'm with you. Which is, I think how they are all positioning themselves with the acquisitions they're all doing, they're all positioning themselves as this to, in an AI world, I would need identity, I would need observability.
I would need like the two big pillars that people have been talking about that to what you said, all the data points you would collect from your Claude Code or AI agent. You want someplace. You have a quote unquote security data lake, not a siem, just security data lake or late data lake in general for the organization.
You bring all that here. Now you look at observability, you have identity as a challenge that people talking about. There is that coming out. Then there is, hey, what kind of applications are being used a hundred percent with you? That is ideally a future that people talk about. Another people, uh, question that people have been [00:46:00] asking is that if if all you were doing, and I'll probably take your example a bit more further, if all you were doing was building products with code.
The foundation model itself is doing security for you, which is the Claude Code security where we start this episode. What is the need of a vendor?
Caleb Sima: Yeah. Obviously the needs for vendors are to solve those particular problems that no one is solving right now.
Ashish Rajan: It's all for gaps which are left by humans.
Yeah. We we're, we are trying
to solve who Ashish the developer who left a vulnerability wide open, did not get picked up when he was doing if I were to, and it's funny, I think I, I made a video about this on Instagram, but if I take a step back, in general, cybersecurity industries has always been about there is a gap in the way something was produced.
I'm gonna identify that gap, help remedify it. There is no third thing that we do, that's the only we find the gap, whether it's red teaming, whatever, whatever we [00:47:00] add any other title underneath it. And the second is that how do I remediate it so I'm not in breach of compliance or not getting hack.
Caleb Sima: And and how do you mo how do you monitor to make sure it doesn't get hacked?
Ashish Rajan: Oh yeah. Like do maybe, yeah. How do you monitor? Maybe it doesn't so ongoingly, but
Caleb Sima: yeah,
Ashish Rajan: out, that is the three pillars we all stand on. Once that is gone, I'm like, what am I, so not that I wanna send people down the dooms and gloom part, but we are way, we are really far from that ideal future. But I what I want to, to what you said about the startup ecosystem today, if I were to look six months or one year, which is where most CISOs that I'm talking to are focused on.
Increasing AI usage across the organization, across cybersecurity, which probably looks like to what you said, something like Claude Code AI agent fills that gap and a hundred percent use case for that. Maybe you are building skills and training your people for that. The second one is the ecosystem of startups has, has an opportunity to get into [00:48:00] companies that maybe they never got access to because now everyone's being asked to reduce the cost.
And if, if I'm not like an all shop for one vendor or maybe I'm trying to push my vendor for, Hey, if you don't do this, I'm gonna drop your product and go to someone else. Startups ecosystem actually have an advantage today, at least in this transition period. I don't know what we'll do in the future, but I feel in the least in the six months to one year, there's definitely an advantage.
Caleb Sima: The counter to this, and, and it's funny because I'm the inve investor and I'm the one who's beating up on the startups right now in this debate.
Ashish Rajan: I know. I'm like, I'm better defend the startups over here.
Caleb Sima: I know. And I'm, and I'm the one who is like going against the counter to this is, you know, I was talking with a friend of mine who was trying to do renewals on a really big vendor.
And that big vendor was pushing super high costs on certain features that they were delivering. And so what this other, what this guy did is he basically said, Hey, listen, like we're not going to renew any of these things [00:49:00] because I'm already doing 80% of this in Claude Code anyways, so I don't need it. And then they the vendor decided, oh, okay, we'll, we'll drop our price by 80% as long as you still be a customer.
Oh. And so, like, there is a, you know, the. At the end of the day, that's what they wanted, right? Because they, they stick with the vendor, they get consistency, they don't need to do anything else. And then they, again, use their skills to, to take the output of those features and do something else with it.
But, you know, to, to then to that point being is like, does there need to be a startup that, that goes, fills the gap that this major vendor is trying to sell overpriced prices? Or at the end of the day that's a losing battle. I don't want to be another startup in a niche solution trying to fight these big guys when these internal companies are just using AI to compensate for the gaps.
Um, and using, you know, like it's a, it's a scary pa place to be. [00:50:00] In the current world, uh, with these kinds of drastic changes that are happening. And, you know, I'll end this on a, on a positive Ashish. So I'm not quite, I'll go towards the, the startup aspect of these things, uh, versus my doom and gloom. But look at the, these, everything is shifting so fast and so crazy.
Like I, you know, to, to some degree, you know, cloud had the same thing, which was AWS is just gonna offer those solutions. 'cause back to our further point in this conversation, a w S's goal to, and offer everything in one spot. So if AWS enters the market, there's no room for these startups and clearly that.
Was not the case. Yeah. More startups came out of using AWS than by far AWS eight or killed. Literally. Yeah.
Ashish Rajan: Yeah, yeah,
Caleb Sima: yeah, yeah. So going back to your, being positive. Yeah,
Ashish Rajan: no, and I'm, I'm with you and I, I, I think I definitely feel we, we are trying to have [00:51:00] a balance on both sides here, because reality is you and I don't know the future where, obviously this is based on the conversations all of us are having, uh, the signals we are getting from the market.
That's what this conversation or this, this insight is coming from. Uh, I, I definitely believe, uh, I think someone said this and I thought it was really interesting. They said as much as there's a lot of excitement for AI today, gen AI specifically today, uh, it, we always thought that mainframes would be go, we would get rid of mainframes with cloud.
Guess what? Mainframe is still there. It hasn't moved. There is, there are still grandmoms out there using a flip phone. There are like, so not everyone has an iPhone or an Android latest. Not everyone has the latest and greatest. Not everyone has a budget. Actually, someone even pointed this out. There's a lot of companies talking about how you'll miss out on the opportunity for ai, but how many companies a have the, to what you were saying about employees with skillset [00:52:00] to use Cloud Core to build those things in the first place.
'cause a lot of them came from I am a CrowdStrike expert or PaloAlto expert. I've got their certification or a Zscaler expert. I know how to use a Cisco firewall. That was the basis of someone getting a job because I was recruited to look after a cso, Cisco software or a PaloAlto software or a that's why those certifications came in.
That's why they became the, the hype, right? So
Caleb Sima: the good old days
Ashish Rajan: in all that there is a, a slither society which would move forward with, hey, the predictions we had about the AI first idle world where people are producing Claude Codes of the world, which may be the Netflix example that people used in, uh, in the, in the cloud world where Netflix was that poster child for, Hey, look at how they do automation, look at how they're building down systems and bringing up chaos engineering and all of that.
I'm sure there'll be an equivalent in the AI space, but that would not be the case everywhere because every company is different as much. I think, [00:53:00] uh, you said this in an episode, uh, where, uh, uh, behavioral patterns, like we all think that, or behavioral patterns are best way because you can, uh, predict patterns, but there's nothing predictable about the patterns in an organization.
Caleb Sima: Yeah. Yeah.
Ashish Rajan: That you can use for a behavior analysis. It it just humans reacting emotionally or in some other way, shape, or form, that there is no set pattern to it.
Caleb Sima: Yeah. And it in order to do, you know, anomaly detection at a level that really makes a difference, it's almost impossible. 'cause there is no such thing as normalcy inside of an organization.
Ashish Rajan: A hundred percent.
Caleb Sima: However, you know, this is me dreaming, this is the startup I want to invest in. Is like, but every action that takes place in an organization has pre-con. And if you had access to all of that pre-con, which is the equivalent of [00:54:00] intent, then you could determine actions that are abnormal.
Or done out of context in a way that doesn't make sense, but that is, that's like a big ask. It's plausible.
Ashish Rajan: Funny enough. Because isn't that what we're trying to do with evals? We are trying to understand the intent.
So, actually, maybe I should, uh, state for people who may not know what evals is, essentially evaluation or evals as people call it, is the way for you to use, uh, or to run test against expected behavior as a response to a prompt is the simplest to make, can explain it.
And what I find is a lot of people who were struggling with not having ai. Products or services go into production. A lot of them were using evals as a starting point to understand, Hey, what's my security eval test that I need to pass that I can have or use as a policy escort for my guardrail?
And I too, what you set up just now about the [00:55:00] behavioral analysis, maybe just think of that way. If I technically in that ideal world, in your ideal company, if everything was already searchable, my, my code, that means my context was in a vector database and I could search for an enterprise search, then what would stop a Claude Code or a chat GPT or any other AI agent to pick up on that context and maybe do a behavior analysis on, Hey, it seems like people in finance at 2:00 PM for some reason open Excel and start doing financial models.
Lemme just automate that for them.
Caleb Sima: Yeah. Or it's, it's the knowledge that it says, I see in the meeting notes, in the emails, and in the slack conversations that the legal team or the finance team in your, your case, your, your finance team has made discussions about editing these models by deadline tomorrow in order to produce these types of results.
Therefore, I know or am [00:56:00] expecting that these expels Excel spreadsheets will be open and these models will be accessed. Right? Yeah. And so like
Ashish Rajan: That's normal at that point in time.
Caleb Sima: Yeah. Yeah. Then it's, you understand the intent before the action and the expected action at which you want, um, which then creates, normalcy, right?
Like, this is not an abnormal action. The abnormal action is. There was no discussion about these models or the queries that seem to be heavy in the finance discussion or threads or tickets. And yet Bob, who has have a history now of low performance is beating, has just opened up the Excel spreadsheet starting to beat up on these models.
Ashish Rajan: Yeah.
Caleb Sima: This is now an abnormal behavior, right? Like this is the kind, you know, like, uh, like that's the, if you don't have that context, so you can't define the intent, [00:57:00] which you should be able to in any organization through communication. Yeah, yeah,
Ashish Rajan: yeah. And I guess to your point, we could not do that before because none of these systems were interconnected.
There was no enterprise search for my,
Caleb Sima: and, and we just don't have the reasoning capability. Resources to do that for every action in an organization. Like if
Ashish Rajan: way too much compute?
Caleb Sima: Yeah. Like if you really think about the compute power it would take to do this, it would take, you have to take every employee and every action they do, get the context trail to that action and evaluate, to your point, eval it to determine is this the right action that we would expect based on the context trail of every single action that this employee takes.
Not to mention any, the interconnectedness of all of those action. That's a massive ask.
Ashish Rajan: Maybe. Uh, and maybe this is the final point here. It goes back to what you said about the gas example because maybe the, [00:58:00] whatever the highest model is today that people pay money for, and maybe it gets to a point where people who use, say Tropic or Claude.
Uh, special models where you can let entropic spider in your entire environment and make a judgment call for, uh, what are the missing integrations. And like you could totally have them create that for you and you become this go golden, I don't know, company or whatever that they could, 'cause technically they have the capability 'cause they're doing it for themselves already.
Caleb Sima: Yep.
Ashish Rajan: They just need to extend that to, Hey customer, let me do that for you as well. Yeah. If they do that people would sign up for that stuff. Like in day, like day zero. I'm sure there'll be just like booking out slots, but
Caleb Sima: Yeah. Yeah. I mean, almost to a point of saying, well what is a vertical. That you'd want to hit.
Ashish Rajan: Yeah.
Caleb Sima: You know, maybe glean is a vertical, they may wanna go eat, right?
Ashish Rajan: [00:59:00] Yeah. Yeah, that would be interesting. But I mean, in saying that, I think overall conclusion so far is that people don't need to fear for their jobs because, uh, Claude Code security is going down this path and, uh, startups still have hope yet, uh, for the moment that's the conclusion.
Cool. Alright, uh, thank you so much for your time everyone, but if you have any thought, let me know in the comments. And I think as always, uh, we look forward to hearing your comments on Spotify, YouTube, whenever you drop them on, uh, what do you think of the episode? As well talk to you next time. Thank you for watching all listening to that episode of AI Security Podcast.
This was brought to you by Tech riot.io. If you want to hear or watch more episodes of AI security, check that out on ai security podcast.com. And in case you're interested in learning more about cloud security, you should check out a sister podcast called Cloud Security Podcast, which is available on Cloud Security Podcast tv.
Thank you for tuning in and I'll see you in the next episode. Please.





.jpg)

.jpg)


.jpg)
.jpg)

.png)







