Are AI Security Startups Faking It? How to Separate Signal from Noise


With over 70 startups claiming to have built the perfect "AI SOC Analyst" or "AI Threat Hunter," how do you separate the real products from the vaporware?

Recorded live at the Decibel RSAC Founder Festival, Ashish and Caleb hosted a heated panel with Edward Wu (Founder & CEO, Dropzone AI) and Lou Manousos (Co-Founder & CEO, Ent AI). The group debates the controversial claim that AI can provide 100% threat prevention and exposes the industry's dirty secret: many AI startups are "cheating" by hiding human analysts behind their software.

If you are a CISO or security practitioner navigating the vendor floor at RSA, this episode provides a BS-detector framework. Learn why an AI wrapper around Claude Code isn't enough, why "consistency" is the ultimate test for AI agents, and how to verify whether a startup actually has real-world, paying enterprise deployments (and not just friendly design partners).

Questions asked:
00:00 Introduction: Live with Decibel
01:30 Meet the Panel: Edward Wu (Dropzone) & Lou Manousos (Ent)
03:40 The Great Debate: Has the Industry Given Up on Prevention?
05:50 What Has AI Actually Solved? (Repetitive Work vs. Context)
09:00 How to Spot BS on the RSA Show Floor
11:30 Defining an AI Agent: Chatbots vs. Threat Hunters
13:40 The Claude Code Problem: Is Your Product Just a Wrapper?
16:50 The 80% Accuracy Trap & Why Consistency is Key
21:30 Proving ROI: Evaluating AI Agents Like Human Employees
24:50 The Dirty Secret: Humans Hiding Behind AI Startups
26:30 Spotting Fake Customer Logos
28:30 Audience Q&A: Scaling the SOC vs. Replacing Humans
36:10 Forward Deployed Engineering & Personalized Software
40:30 Reimagining Security Architecture from the Inside Out
43:30 How Ent Detects Remote Workers Outsourcing Their Jobs
45:30 Final Thoughts: Asking Vendors for Real Proof Points

--------------------------------------------------------------------------------
📱 AI Security Podcast Social Media 📱
🛜 Website - https://aisecuritypodcast.com/
✉️ AI CyberSecurity Newsletter - https://www.aisecuritynewsletter.ai/
LinkedIn - /ai-security-podcast

Elias (Lou) Manousos: [00:00:00] The industry gave up on prevention. Now we gotta burn down certain things.

Edward Wu: There are probably more than 70 different AI startups trying to build AI SOC analysts, AI threat hunters, AI detection engineers. A lot of AI agent startups are cheating with humans behind their software.

Caleb Sima: Any startup with a wrapper around a model is dead.

You can use Claude Code. Go ahead and do that. Run it 50 times and tell me if it all gives you the same answers.

Ashish Rajan: If you start with 80% accuracy, over time it kind of compounds and becomes this big thing.

Elias (Lou) Manousos: How you grade an agent should be very similar to how you grade an employee, and that comes down to cost.

Ashish Rajan: Is it fair to ask the question, what model are you using?

Are you using more than one? Welcome to this live recording with Decibel. In case you don't know who we are, we'll probably do a quick round of introductions anyway. My name is Ashish. I am the host of Cloud Security Podcast and AI Security Podcast.

I'm gonna pass to you guys so we can do introductions, but this is an AI Security Podcast recording. If you have never been on a live podcast before, [00:01:00] this is what happens.

Caleb Sima: It's like a panel. Wait, is this any different than a panel? What's the delay?

Ashish Rajan: What's the difference? No, it's like, you know how now, with Zoom meetings, I call them webinars... sorry, not webinars. When I call a Zoom meeting: would you wanna be in a Zoom meeting or a workshop? It's the same. Okay. It's a panel, which is technically a podcast recording as well. So we'll have a mic floating around for people who have questions,

'cause this is more for you guys than for us. So if you have questions, there'll be someone with a mic around shortly. So yeah, as I said, AI Security Podcast, Cloud Security Podcast, been in the space for 17-plus years. But over to my first panel member, Eddie.

Edward Wu: Hey guys, my name is Edward Wu. I'm the founder and CEO of Dropzone AI. We are a security startup building AI agents for cybersecurity teams.

Elias (Lou) Manousos: Ooh, I'm Lou Manousos. I am the founder and CEO of Ent. Ent is building a third-generation endpoint, which uses AI directly on the [00:02:00] endpoint to understand intent and block pretty much every risk. So think of the power of GenAI that helps you do your daughter's homework or repair a diesel engine. Imagine being able to handle every threat and prevent it. That's what I'm doing.

Caleb Sima: Oh, we're gonna dig into that.

Elias (Lou) Manousos: Yeah, we're gonna go nuts.

Caleb Sima: Lou is asking for it.

Elias (Lou) Manousos: I'm ready, man. I'm here for it.

Caleb Sima: This is gonna be fun.

Elias (Lou) Manousos: Before this, I gained a lot of experience on it. I've been obsessing over it for about five years.

I was at Microsoft Security, where I ran the Defender Threat Intelligence team, and I built the first generative AI product for Microsoft Security, called Security Copilot, and I integrated that into the Defender stack. So it was the first product. Amazingly, Microsoft shipped the first generative AI security product.

Edward Wu: In March 2023, right?

Elias (Lou) Manousos: Yeah. My gosh. Thank you so much. I'll take it. We had a one year head start.

Ashish Rajan: Oh

Caleb Sima: yeah.

Ashish Rajan: And now we are behind, but hey Caleb, you go first. [00:03:00]

Caleb Sima: I'm Caleb Sima, also the other half of the AI Security Podcast. So I'm excited to do this. I get one? Woo.

Ashish Rajan: All that's good. There you go.

Caleb Sima: There you go.

Ashish Rajan: No, the crowd's not sleeping.

By the way, for people on the podcast, this is a real audience. We did not fake this. This is not AI. Actual humans.

Caleb Sima: that's what everyone who uses AI says.

Ashish Rajan: Yeah, actually, you should go with the first question then, 'cause you already had one.

Caleb Sima: Well, I mean, Lou's sitting here and he basically says, we stop all threats and all risks. So this is basically the "we're the silver bullet" statement on this panel. This is gonna be fun.

Elias (Lou) Manousos: Well, yeah, the industry gave up on prevention because detection and response was very much needed before we had AI. It was static rules, right? We had AV; it worked pretty well. Then we had EDR; it worked great.

I think George and the CrowdStrike gang...

Caleb Sima: You don't have to start [00:04:00] defending yourself yet. I haven't asked questions yet.

Elias (Lou) Manousos: I'm not defending. Your leading question implies that prevention is not possible. And I think...

Caleb Sima: No, all I said was, you made a statement that says, we will prevent and stop all attacks.

Ashish Rajan: I'm ready.

Go. Because clearly you're going in circles now. I'm like, wait. So, for context, for people who just came in: you get a chance to ask questions too, but the context today is, we're basically trying to separate the signal from the noise. That's kind of been the theme across the board, and our hope here is to present to you, and to people who will listen and watch later on, a version of: when you walk the floor at any cybersecurity conference, or have a conversation about AI security, what should you totally run away from, and when should you hang around, listen more, or lean in to those conversations? So that's the theme, and that's kind of why you mentioned the prevention versus detection piece. Maybe the first question that I have, and it seems...

Caleb Sima: Like we... because that's our theme of this [00:05:00] podcast: when vendors say they do AI in security, what are the questions and things to validate that?

Ashish Rajan: What's the gap? I mean, have we solved everything? Maybe, 'cause we'll start there, 'cause clearly there's a claim being made already. Do you guys feel, and obviously you come from specific fields within AI security, which would not have been a thing three years ago, but today there are specializations within it: have we solved everything in AI security, as some of the billboards say?

Edward Wu: Uh, I don't think we have a billboard that says we stop all threats yet. So, I mean, we have Lou. Yeah, we have Lou. But in all seriousness, I think there are parts of cybersecurity, different chunks of manual and repetitive work, where we're seeing great progress being made by AI agents, whether it's alert investigations, reading a hundred threat reports and extracting TTPs or behavioral signatures, or code reviews. Right, we are seeing [00:06:00] AI's ability to really identify vulnerabilities and weaponize 'em very effectively. But what we haven't solved, in my mind: we haven't fully solved the context problem. Every single security organization has tons of organizational caveats that they're keeping in their heads, and I don't think the industry has solved the problem of giving AI agents access to that kind of information, which currently is pretty inaccessible using APIs.

Another thing we haven't solved is the end-to-end autonomous SOC. I think there are vendors out there claiming, hey, you can fire everybody in your SOC, deploy the software, it will do everything and the world will be great. But in my mind, we haven't reached that place yet.

Ashish Rajan: What about you, Lou? What are your thoughts on this?

Elias (Lou) Manousos: I just can't get my mind off of Caleb's thoughts.

Caleb Sima: Get the prevention out.

Ashish Rajan: The prevention one.

Elias (Lou) Manousos: Look, I think there's [00:07:00] just been tremendous advancements with AI, specifically the copilot end-to-end workflows. It's not perfect, but it is amazing that we can actually supercharge a level-one analyst, and, you know, the concept of a human on the loop and a human in the loop.

Like, they seem to be working, and it's pretty fantastic. The thing that got me really excited about prevention, to of course keep going back to that: I think it was seven years ago when you'd start seeing these self-driving cars in San Francisco, and you're like, man, this is scary, right? And within the last two years, if you're in a Waymo, you can actually get to your destination with not an accident, not nothing. So imagine if we had a self-driving car that worked like security products. It would log [00:08:00] accidents. That's what it would do. It would not avoid collisions. And if it's possible for a car to operate with that level of safety, I think it is possible for us to think about going beyond the context of just helping an analyst, to helping users be safer.

So of course it's not gonna be perfect. We're gonna need safety drivers; we're gonna need that human on the loop. But that's the part that I think we've proven: we can help analysts become super analysts. And now, can we help average users be a little bit closer to, like, Caleb-level knowledge of security?

Elias (Lou) Manousos: So that's kinda how I see it.

Caleb Sima: And with his context, I would agree with everything you said.

Elias (Lou) Manousos: What's that? Did he pay you money for this?

Caleb Sima: I think he's buttering me up, because he knows... he's trying to ease me off. I could see a setup coming.

Ashish Rajan: Sorry, you continue. I just felt a setup coming, which is why I was giving a heads-up. But go for it.

Caleb Sima: No, yeah. So let's start with maybe the positive side of this. You two are both experienced security practitioners who know the [00:09:00] industry. I mean, Lou, you and I have been friends for a long time, which is why we can give each other such shit right now.

Elias (Lou) Manousos: This is true.

Caleb Sima: And you're both now building security companies in the AI space. So before we target all of our criticism and dig into both of you, I would love to know: how do you approach it when you go out at RSA on the show floor, when you look at the kinds of vendors that are now presenting as AI security companies? How do you vet or validate? You're in the middle of it, so you know everything. What's your approach? Do you have an approach? How can you smell the BS?

Elias (Lou) Manousos: I'll start. I immediately ask very first-principles questions. If you just keep trying to solve the problem in the same way again and again and again, I'm not sure you're gonna get a different result. And that starts with the types of people you hire, how many [00:10:00] practitioners you talk to. I think you can't build anything in a vacuum in this industry. If you don't know customers, if you don't understand the threat actors, you cannot build the right system. And all too often, vendors are just bolting things on. Even if it's a new company, they're essentially trying to fix identity with AI, or they're trying to fix endpoint with AI. You really have to peel it back: what's the engine like, what's the architecture like?

We've heard that from other CISOs, even, who are basically presenting what I would call a first-principles approach to re-architecting the entire security stack. I think what got us here, and I'll take responsibility for it, is poor architecture that's not compatible with the speed and the massive scale of this industry at this point in time. We never thought 35 years ago that we would have this level of responsibility in our hands as security practitioners. And now we gotta kind of burn down certain things.

Caleb Sima: Can I give you a real-life [00:11:00] example? Not that long ago, someone came up to me and said, hey, we're building a security company, and it's an AI threat hunter, right? This is what it does. It's the founder of the company. What are the questions you ask to say, okay, is this valid? Is there a moat? What's the kind of thinking you go through in that specific example?

Edward Wu: Yeah. It's interesting you ask that, because we just announced our AI threat hunter, so we're definitely right there. So we definitely know where the bodies are buried. Obviously, I think at this point there are probably more than 70 different AI startups trying to build AI SOC analysts, AI threat hunters, AI detection engineers. And in my mind, as a technical founder, when I look at [00:12:00] other startups, I look at a couple of things.

First and foremost, what is the boundary of the automation? This is where, for whatever reason, people kind of define AI agents differently. In my mind, you can tell pretty quickly by looking at what the input of the agent is and what the output of the agent is. For example, I've seen cases where vendors will essentially call a chatbot an agent. Not to name names, but some of our competitors will say, hey, we have an AI threat hunter. And you're like, okay, what is the input? The input is, kind of, show me the login history of this user from the last 30 days. And then the output is a natural-language summarized response based off of a SIEM query. That's one type of AI agent, but in my mind that's closer to a chatbot or copilot.

And then the output is a natural language summarized, uh, response, um, based off, off of a same query. Like that's kind of one type of a AI agent, but in my mind that's kind of closer to a chat bot or co-pilot. Um, and [00:13:00] then another way to build, for example, AI Slack enter is say, okay. It like, and that's kind of how we are doing it in drop zone, is we define the input of an AI threat hunter to be a hunt back, which consists of one or more TTPs and behavioral signatures, and then the output is the.

Hunt result. The details, like all the behaviors we track down, like we might have fetched, you know, 50,000 rows of behaviors from the sim. We performed a number of data analytics to filter that down to maybe a hundred unexplainable anomalies. Then for each of the a hundred unexplained more anomalies, then perform a detailed investigation.
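For readers who want the shape of that pipeline as code, here is a minimal sketch in Python. The event schema, the statistical filter, and names like `HuntPack`, `fetch_behaviors`, and `investigate` are invented for illustration; this is not Dropzone's actual design.

```python
from dataclasses import dataclass
from statistics import mean, pstdev

@dataclass
class HuntPack:
    """Input to the hunt: a name plus one or more TTPs / behavioral signatures."""
    name: str
    ttps: list[str]

def fetch_behaviors(ttp: str) -> list[dict]:
    # Stand-in for a SIEM query that might return ~50,000 rows per TTP.
    rows = [{"ttp": ttp, "user": f"u{i}", "logins": i % 7} for i in range(1000)]
    rows.append({"ttp": ttp, "user": "intruder", "logins": 50})  # one outlier
    return rows

def unexplained_anomalies(rows: list[dict]) -> list[dict]:
    # Cheap statistical filter standing in for the real analytics layer.
    counts = [r["logins"] for r in rows]
    mu, sigma = mean(counts), pstdev(counts) or 1.0
    return [r for r in rows if abs(r["logins"] - mu) > 3 * sigma]

def investigate(anomaly: dict) -> dict:
    # In a real agent this step would be a detailed, LLM-driven investigation.
    return {**anomaly, "verdict": "needs_review"}

def run_hunt(pack: HuntPack) -> list[dict]:
    findings = []
    for ttp in pack.ttps:
        rows = fetch_behaviors(ttp)              # 1. bulk fetch from the SIEM
        suspects = unexplained_anomalies(rows)   # 2. filter thousands of rows down
        findings += [investigate(s) for s in suspects]  # 3. per-anomaly deep dive
    return findings

print(run_hunt(HuntPack("valid-accounts", ["T1078"])))
```

The point of the structure is the input/output contract Edward describes: a hunt pack goes in, investigated findings come out, with the bulk fetch and the anomaly filter as intermediate stages.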

Caleb Sima: Okay, so if you were to say that to me, I would just say, okay, well, I feel like I could just do this with Claude Code. So what is the difference? I can fill in and say, here are [00:14:00] my skills, which are my hunt packs, and here are my integrations. Just go run it. What's the difference? Why can't I do that? What are you gonna bring that's gonna be more valuable?

Edward Wu: Yeah, yeah, that's a great question. And I think a lot of AI agent startups, or security startups, are running into this challenge, right? Which is, nowadays with Claude Code, it seems so easy to DIY all sorts of different security products.

Caleb Sima: And by the way, this is the first question I ask people.

Edward Wu: And for us, the way we think about it is, there are three things that we have as a vendor specializing in this that Claude Code doesn't have. Number one is we have the operational experience to really know how to use different security tools. It's one thing to have API access to Splunk, but being able to generate the queries correctly, as well as making sure the SPL queries you generate are actually efficient and not going to tip over the SIEM, requires a lot more depth. You need to understand [00:15:00] the caveats of different Splunk operators, how indices work, how all those data fields work. So that's number one: as a security vendor, we know how to use each security product in a lot more depth than a simple MCP server can relay. And number two is cybersecurity-specific reasoning. Obviously Claude and all the foundational models are very smart, but at the end of the day, we haven't seen a single model that perfectly replicates the thought process and techniques of an expert human security practitioner.
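To make the query-depth point concrete, here is a hedged sketch of the sort of guardrails a vendor might wrap around agent-generated Splunk SPL. The specific checks and the `guard_spl` helper are assumptions for illustration, not Dropzone's implementation; real products encode far deeper knowledge of operators, index layouts, and SIEM capacity.

```python
import re

def guard_spl(query: str, max_results: int = 10_000) -> str:
    """Apply naive safety guardrails to an agent-generated Splunk SPL query."""
    q = query.strip()
    # Require an explicit index so the search doesn't scan every index.
    if "index=" not in q:
        raise ValueError("query must constrain an index, e.g. index=auth_logs")
    # Require a bounded time range so the search can't sweep all history.
    if not re.search(r"earliest=", q):
        q += " earliest=-24h"
    # Cap result volume so a runaway query can't tip over the SIEM.
    if "| head" not in q:
        q += f" | head {max_results}"
    return q

print(guard_spl("search index=auth_logs action=failure"))
```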

Caleb Sima: Do you need to perfectly replicate? Isn't 80% just good enough? I feel like if I can do 80%, and I don't have to pay the cost, I'm already paying token costs anyway, then that should be good enough.

Ashish Rajan: I'll probably add another layer to that as well. Obviously we are moving into a world where a lot of products are becoming API-first as well. To bring it back to you: if I feel [00:16:00] confident enough that I have a security engineer who's really good with Claude Code and can work with APIs, is the advantage the fact that you have figured out the AI MCPs? Or is the advantage the fact that some big provider out there has a massive amount of threat intel that they've been collecting for such a long time, and I'm already a subscriber to them? It's kinda like the iPhone question. 'cause most...

Caleb Sima: Don't answer his questions for him.

Ashish Rajan: Over there, I was trying to give room for the fact that, I imagine, people who would listen or watch this are going: I already have an EDR, I already have these tools, they all give me APIs. So it's not that I'm creating this in isolation.

Edward Wu: Mm-hmm.

Ashish Rajan: Yep. I already have access to a potential treasure trove of things that I would need. So, sorry, going back to his question, I would add that layer in here as well. Does that give me any different advantage? If I am in an enterprise with all this access, can I then create my own AI SOC?

Edward Wu: [00:17:00] Yeah, I think you can certainly prototype one, and it will probably work somewhere between more than 50% of the time and less than 80 or 90% of the time.

I think, to answer your question, Caleb, the challenge with having models that can 80% replicate investigative techniques is, first and foremost, that 80% is not good enough for most security tasks. Because if you are only 80% correct, the question immediately becomes: if I have to continuously review the output of the system, I might as well do the work myself. So that's the first thing. And the second thing is, when you look at alert investigations or threat hunting, you are not using the model to make a single decision; you are actually using the model to dynamically plan somewhere between 50 to 200 steps, and if your model is [00:18:00] only 80% correct at each step, that kind of inaccuracy starts to accumulate when you perform long-running tasks.

Caleb Sima: Ah, so let me ask a question. There should be a good...

Elias (Lou) Manousos: And, you know, gimme a second to chime in once you're done. I'll let you go first.

Caleb Sima: Okay. Actually, this gives us an interesting metric that I feel like I would want to validate against vendors, which is the consistency problem. Right? To your point, almost what you're saying is: if I have 80%, that other 20% of variance over cycles and runs creates a large amount of actual inconsistency. So, "you can use Claude Code, go ahead and do that, run it 50 times and tell me if it all gives you the same answers. Whereas if you run our product, we give you the capabilities of AI, but you're gonna get consistency." Even if I were to ask a vendor about consistency, can you run it a hundred times and get the same results a hundred times, how would you [00:19:00] even push on that founder to say, well, show me how you were able to achieve that level of consistency? Is that even a depth area that you could get to?

Edward Wu: I think you can always ask, but the challenge is that it's easy to respond with words. More often than not, we see organizations, during the POC process, evaluate consistency specifically by, for example, sending the same phishing email alert to an AI SOC analyst over and over and over again, across 20 runs, 50 runs, to really examine that.
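That POC-style test is easy to automate. A minimal sketch, assuming a hypothetical `triage` function standing in for whatever AI SOC analyst is under evaluation; a real evaluation would also compare evidence and reasoning, not just the final verdict:

```python
from collections import Counter
import random

def triage(alert: dict) -> str:
    # Hypothetical stand-in for the AI SOC analyst under test.
    # A non-deterministic agent is simulated with a small error rate.
    return "malicious" if random.random() < 0.9 else "benign"

def consistency(alert: dict, runs: int = 50) -> float:
    """Replay the same alert N times; return the majority-verdict rate."""
    verdicts = Counter(triage(alert) for _ in range(runs))
    return verdicts.most_common(1)[0][1] / runs

alert = {"type": "phishing", "subject": "Invoice overdue", "sender": "x@evil.test"}
print(f"consistency over 50 runs: {consistency(alert):.0%}")
```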

One key element of consistency is also, and I like how you put it, one analogy I use with large language models: when you look at a startup like us, you can argue we're building a harness around the large language model providers, and ultimately there's the question, why do we need the harness, right? Won't the models become smart enough someday that it would just work [00:20:00] magically? Yeah.

Caleb Sima: Any startup with a wrapper around a model is dead.

Edward Wu: Yeah.

Edward Wu: And the analogy I tell people is, I personally think about large language models as an intelligence spark. It's very bright, it's smart, but it's somewhat uncontrollable; it's obviously not a hundred percent deterministic. And what we as agent startups are doing is building an internal combustion engine around that spark, and the whole goal of that internal combustion engine is to deterministically harvest the chemical energy released by the combustions into mechanical energy. Like most people, when they're driving their cars, they're not thinking about the thousands of mini explosions happening within the engine. In a very similar way, when you use an AI agent, you shouldn't need to think about all the uncontrollable, like, one hundred, two hundred, five hundred distinct large language model invocations that are happening behind the scenes.

Elias (Lou) Manousos: [00:21:00] Yeah, go for it. So, in full disclosure, I'm the chairman of a next-gen MDR company, which uses an AI-powered approach, called Tenex. So I've seen a lot of this movie, of how this works in practice. And of course, at Microsoft, we had Defender Experts, and we offered this service.

I don't believe in just words; I think you have to prove it, and benchmarks and evals are one way to do that. I also believe in the magic of these simulated data environments. At Ent we use a technique called "the farm," where we simulate a customer's environment: we fire up thousands of bots that represent employees, and then we can replay real live events through this and just see, how did we do? How did the humans do? Because yes, prevention is not always possible, but we can evaluate ourselves, right? And we really [00:22:00] should do that. And in the world of what you guys do in the SOC, I would argue that how you grade an agent should be very similar to how you grade an employee, and that comes down to cost. Very much so. If I'm doing a good job at it, the proof's in the pudding: I should be able to pass that cost on to the customer. And if that doesn't work, then we're not gonna actually be able to do the job of tilting the scales back to defenders. It's just essential that this automated SOC stuff works, in my view. So: benchmarks, evals, judging ourselves based on the actual cost to defend.
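Lou's grading rule reduces to a simple metric. A back-of-the-envelope sketch with invented numbers; the point is the cost-per-investigation comparison, not the figures:

```python
# Hypothetical: grade an AI agent the way you'd grade an employee,
# on cost per completed investigation. All numbers are made up.
def cost_per_investigation(total_cost: float, investigations: int) -> float:
    return total_cost / investigations

human = cost_per_investigation(total_cost=12_000.0, investigations=300)   # analyst-month
agent = cost_per_investigation(total_cost=3_000.0, investigations=3_000)  # tokens + platform
print(f"human: ${human:.2f}/alert, agent: ${agent:.2f}/alert")
```

If the vendor's effective cost per investigation is higher than a human's, that is Lou's "man behind the curtain" signal a few lines below.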

Ashish Rajan: But isn't trust a big question in all this as well? Obviously I'm thinking about a lot of our audience, which also has CISOs who are in Fortune 500 and large enterprises. When they explain AI, hey, this is how I'm [00:23:00] doing AI security, the reason why they go for a larger vendor, or a publicly listed vendor, and I'm not gonna name any vendors, is because there's this understanding that, oh, they're a public brand name, they have the data points. And the way people hear that is: oh, they already have the data, to what you said earlier, to train their models on what could actually be closer to, I'm gonna use the word synthetic data, but I don't think I can recreate synthetic data that clearly.

Elias (Lou) Manousos: I think you can, actually. It's what I'm saying: I actually believe the models know enough about security incidents to create reasonable mistakes that humans make, or malware that's in the environment, and replay that in a way that you can build a benchmark. Now, of course, the model also has the answer, so we have to be careful about that. But I think it's doable.

Ashish Rajan: So it is possible for large enterprises, who at the moment may have some skepticism around new companies, hey, you'll probably get acquired tomorrow, and all of that...

Elias (Lou) Manousos: And they can test it. Yeah. Because if your [00:24:00] cost is higher...

Ashish Rajan: Mm.

Elias (Lou) Manousos: ...then there's a man behind the curtain.

Caleb Sima: I'm just struggling, in the sense that I hear startup pitches around AI every day. And of course, today, the biggest threat I feel to startups is this "well, Claude Code can do this 80% right" sort of question.

Ashish Rajan: Claude Code security.

Caleb Sima: Yeah, Claude or any of these. And then, okay, I get to this point where I think Edward was going, which is, hey, it's about the framework. And I, as also someone who sees this, understand that. But what I don't understand is, how can I get provability? How do I ask this founder to say, okay, well, consistency is a problem, prove to me that you can beat Claude Code, and that whatever threat research agent or whatever this is can actually perform way better? What [00:25:00] sort of data set or comparison?

Elias (Lou) Manousos: They have the data. We know what it costs to do it now.

Caleb Sima: Are there any of these that are open, that you should be asking to see, a report or a comparison that's available?

Edward Wu: I'm not aware of, you know, alert investigation benchmark data. The challenge with that, and I agree with Lou to some extent, is that you can use simulated synthetic data to run some of the evaluations, but I think oftentimes what's also missing is the messy technical reality of whatever security products are deployed out there. 'cause not every single security product is actually deployed correctly within each environment, and in the real world, a lot of the time the agents have to find ways to work around, you know, a poorly configured SIEM or very quirky setups.

But to answer your question, Caleb, one thing I do as a founder nowadays, in [00:26:00] addition to digging into technical details, is ask, hey, what is the scope of the automation? Are you guys, for example, maybe using "artificial artificial intelligence" behind your software, or are you a hundred percent software-delivered outcome? I think that's a big difference, 'cause frankly, a lot of AI agent startups are cheating with humans behind their software. And at the end of the day, at least in my mind, it doesn't make sense for a human to direct an AI agent that's also directing more humans to get the task done.

The other thing I look out for as a founder when evaluating other startups is, ultimately, deployments. It's mileage, similar to autonomous driving. If, for example, this particular startup has already done 10, 50, a hundred years of alert investigations, and can show me actual real-world customer deployments [00:27:00] and can share those logos with me, I definitely have a stronger belief that their technology actually works. And this is where, as a founder, I'm actually becoming pretty good at seeing through how other founders try to pretend that they have large deployments, and cheat.

For example, they will say, hey, brought to you by teams who worked at Microsoft and Nvidia. So if you go to their website, you see Microsoft, Nvidia, you know, Snowflake, and DoD, and you're like, wow, they have all these deployments. In reality, these are just organizations they previously worked at.

Caleb Sima: So, real-world deployments. And by the way, they were there as, like, an intern: "Oh, I was there for six months." I've seen this.

Ashish Rajan: I have seen "AI" and "Microsoft." But, um, two points so far that we've taken away. One was that if you start with 80% accuracy, over time it kind of compounds and becomes this big thing. The second thing we've taken away from this is: [00:28:00] how much of it is software-driven versus a human in the background? I would love to pass it on to the audience, for people who have tough questions for these two, maybe for the two of us as well. Is the mic coming over to you, so we can get that recorded? If you wanna pass it over.

Caleb Sima: Yeah, get some of the audience involved. You could, by the way, either ask questions of all of us, or, actually, if you have methods that you've found as you validate, that's also great.

Ashish Rajan: Yeah. Throw some hard ones.

Audience: Being a CISO at a Fortune 500 firm, one of the things I think about is, it's not the cost of the SOC analysts we have that I'm worried about. It really is making sure that we can scale them. So we don't wanna reduce our SOC team; we just want them focused on, and we don't even call 'em a SOC team, right, they're cyber professionals. We just want them focused on things that humans should be focused on, not things that computers can do really well. So we wanna enable them in that way, and I don't think about that decision point in that way. We [00:29:00] practice modernization of technology, automation, orchestration of tasks every day, to make just better jobs for people, and data-led insight.

What I find often, and I think I heard this: when I hear "misconfiguration," when I hear "you're not looking at the right things," that feels like a leadership and program issue. Do you really know what you want? Do you just deploy tools? AI is not gonna solve that problem at all. So anyway, not a rant at all, just commentary, which is: I don't find this to be a tech issue in many cases. I find this to be a clarity issue around what problem you're trying to solve. AI is just one of the many tools, and we've been using machine learning for an awful long time, AI for an awful long time as well. So anyway, I appreciate you sharing all of that. We look at startups all the time, and if they can help us, we don't care about the branding that they have. As long as, you know, it's Decibel or Jon, right, that's enough. That's enough of a brand for us.

Caleb Sima: Can I ask you [00:30:00] a question, though? Just to return back. When you're looking at these, it'll probably be your team, more likely, but are there methods or things that are important that help you filter? 'cause you're getting bombarded by "we are the AI for X" for every single security function you can probably think of right now, right? How do you think about, hey, when they come in, this is the priority, or this is how we're gonna determine: are they really offering the value that we think they are?

Audience: It's a great question. I think, for us, I consider us educated consumers. People can disagree, that's fine. But we build stuff, and we certainly leverage tools for scale and cost where that fits, but we really invest the time in having engineers, data science people, software developers, those kinds of things, because we feel like if we build, [00:31:00] we understand what we want, we understand what we can't really do, and we understand where the cost is. And the cost trade-off for us is like, hey, we don't wanna build and maintain, but we understand what we want. So we try to really understand what smart people, passionate people that have a conviction about what they're doing, are doing. And then, if we're already doing it and you can do it less expensively, great. And I think it's a great point about the human interacting with an AI that's then interacting with a human: if we can see through that, if we tease that out, then we'll say...

Caleb Sima: or if Jon recommends,

Audience: If Jon recommends, we pay attention.

Caleb Sima: the trust path.

Audience: The trust path is there. But you really raise a good point, and I think this is important: that trust path matters.

Caleb Sima: Yeah.

Audience: It super matters, because it makes it simpler for me to decide even to pick up a phone, take a phone call, take an email, meet a founder, right? There's a [00:32:00] founder right here.

Ashish Rajan: Oh, there you go.

Audience: So, um, but it's really quick: within 10 minutes, do you wanna spend time with them or not? Are they solving the problem that you need solved? And I think that's it. Anyway, educated consumer is the approach that we've taken over time.

Ashish Rajan: I think the golden nugget there is: do people even understand the problem they have, and whether AI is the right approach to that problem? 'cause sometimes it could be as simple as a process improvement. 'cause Joe down the street is just unhappy, and if I just had a conversation, maybe I wouldn't have an issue in the first place.

Sorry, there was a question at the back. Oh, there's a couple more.

Caleb Sima: I think I saw him first.

Audience: Hey, thanks. One topic that was brought up in your discussion so far is, well, twofold. One is that there are many security tools out there today, but the quality of their deployment can vary wildly between customers. And a topic that was brought up just recently upstairs is that the proper architecture for what a [00:33:00] security program is supposed to look like changes pretty dramatically as well. So I guess one thing I'm curious about is, how are you thinking about form-fitting some of these solutions to a customer? Because a lot of times, with vendors, a lot gets lost in translation between a customer and how they think or speak about a problem, versus the way that the vendor provides a solution for the problem and how they like to talk about it.

Ashish Rajan: That's a good question. Oh, I have a hypothesis, but...

Edward Wu: I have a lot to say on that too.

Ashish Rajan: We'll let these guys go first. Yeah, go for it. Do you guys have thoughts on this?

Edward Wu: Yeah, so for us, when we explain the problem we solve, we really focus on staff augmentation: using software as a way to augment the existing human analytical capacity. And when we think about ROI, a big part is, hey, as a security leader, you have a certain headcount budget, and with software augmentation, [00:34:00] we can really allow you to do a whole lot more with the same budget. So that's kind of how we do it.

Elias (Lou) Manousos: I think what's really different about AI and this new generation of tools is that you're partially selling outcomes, and that was why that statement really resonated with me. The other side of it is, you're bringing primitives that your customer can now use inside the best practices that they have. It wasn't very easy to break open the piñata of your bundled offering selling legacy products; the deterministic engines made it very, very difficult. And I think any vendor today who comes to the table with a very static solution is gonna struggle, because the expectation now is: your software needs to meet my requirements. And I think it's very refreshing, and it's [00:35:00] gonna change things and make it a lot easier to protect.

Ashish Rajan: I have a hypothesis on this, 'cause you've mentioned "harness" as a word, which is a good one. You kind of see this pattern across the board for people who have been quite forward with AI and the usage of AI. And the hypothesis is that, ultimately, all of us, and when I say us I mean the customers of these product companies, have to understand our problem and build harnesses for it. We already do this. It doesn't really matter if you take a cloud security product or an AppSec product: we as consumers of the product have to add the layer of "is this a high or medium risk," or "hey, by the way, Joe doesn't work here anymore."

Caleb Sima: it's a layer of customization.

Ashish Rajan: Yeah. That is gonna be the mode that we would all need to work in, irrespective of what the product deployment looks like. And I think we are moving towards a world where, and I used the API example earlier because it should not matter if you are with a big provider or a small provider, 'cause they're all building a harness as well, your [00:36:00] harnesses just need to be able to communicate the information required for that particular thing you're trying to work on, or the problem you're trying to solve.

Caleb Sima: Yeah. This is what I think excites me about ai, which is what you mentioned is this is personalized software.

Ashish Rajan: Yeah,

Caleb Sima: Right. Personalized software is sort of what Ashish was saying: in the old days, you would bring in the vendor product that was static and brittle, and then you would create your own interfaces, your own glue, to connect it with your own systems and your own workflows.

Now, my belief, at least for the next couple of years as people are leaning into AI, is that AI allows you to create that glue very, very quickly. And actually, maybe the vendor deployment model changes alongside that. I don't know if you've noticed, but there's a lot of discussion and movement around forward-deployed engineering.

Ashish Rajan: Yeah.

Caleb Sima: Where, instead of selling a product that is sort of fixed in its solution set, where you run into a [00:37:00] conflict with the team and operations saying, "well, we want to build, because not only is it cool, but we get promoted for it," right? The right vendors are leaning into that and saying, hey, maybe the way software gets delivered changes a little bit. So what we'll do is take all of the cool, hard components that we built, expose them like Lego bricks for deployment, and say, hey, guess what, now you can use Claude Code to build your own deployment. You can build your own glue, your own workflows, and then you can use our Lego blocks on top. And by the way, we'll help you do that. And then you get this perfect balance. I'm gonna say this snarky, but it is really true: now the internal team can say, "oh, I built this with AI," get their promotion, but they don't have to maintain and manage it, right? So you get the components working behind the scenes.

And so I actually think that brings the personalized [00:38:00] software aspect, and that's what's amazing about AI right now.

Ashish Rajan: Sorry, just to quickly finish: you probably see this pattern in the bigger companies as well. The reason why they're trying to answer the same question by being a platform is because, ultimately, the ecosystem at the moment is, on one side, you're being driven to adopt more AI, adopt more AI, which kind of creates an environment for people to create the harness in the first place. And on the other side, some of the players have started saying, oh, I need to be a platform, so when you're thinking of plug-and-play, I have all the plug-and-play responses, and instead of going to five different vendors, you go to one vendor. It's not a pitch for any one cybersecurity platform, but that's how they're thinking about this.

Caleb Sima: I actually have an extreme... we did an episode.

Ashish Rajan: Plug the episode, then get back to the guy.

Caleb Sima: Yeah. We did this other episode where I gave an extreme example of this, where I was like, as a CISO, why do I have to deal with so many vendors?

In the world of AI, is there gonna be a [00:39:00] point where I'm just gonna buy all Palo Alto?

Ashish Rajan: Yeah.

Caleb Sima: You know, they're not great at any one thing.

Ashish Rajan: You're not supposed to name vendors, but...

Caleb Sima: Yeah, they're not great at any one thing. But I can pretty much clear out everything, 'cause they've got one solution for everything.

Ashish Rajan: Yeah.

Caleb Sima: But I get very wide, not very deep, very shallow. And I just use AI to make up for all the depth everywhere else. Like, oh, well, Palo Alto doesn't have a great WAF? That's okay, I'll use AI to make it way smarter.

Ashish Rajan: Yep.

Caleb Sima: Oh, but yet it provides the hardware, provides the communication. Oh, the endpoint client? You know what, not a great endpoint client, but it does have all the data.

Ashish Rajan: Yeah,

Caleb Sima: I'll just use AI to make that a little bit smarter. This is an extreme example of personalized software, but you know, that's it.

Ashish Rajan: But the happiest person in that context will be the CFO, because you only have one approval; you're not going for six approvals at that point.

Caleb Sima: And it'll be so cheap and easy. I think this is a sales pitch for the E7 bundle at Microsoft.

Ashish Rajan: Oh, sorry, we'll come back to you. You had a comment you wanted to make; I'll let you finish your thought.

Audience: No, I thought that was very helpful. And, you know, I think it's very interesting right now how you have [00:40:00] this period of time taking place where you've got a lot of existing vendors trying to bolt on their ability to use AI in some effective way, but they're dealing with the problem of developing with consensus, and their customers are still trying to use their technology, compensating for the ways it fits them and the ways it doesn't. So, all good. Thank you so much.

Ashish Rajan: Thank you. There's one more question.

Audience: Thank you. Thank you so much for this insightful and timely discussion. I have two questions. The first one: are you seeing any activity in building more domain-specific models for security? That's number one, and I truly believe that's long pending, because we are trying to use general-purpose intelligence to target a very custom, hard, complex domain. And the second one: having spent 25 years in cybersecurity, remember, proxies to next-gen firewalls, messaging security to SSE, SASE, and all that stuff, I [00:41:00] actually believe that we are still trying to iterate around the old ways of doing security, and I don't think I'm seeing any gutsy new paradigms to reimagine security.

One simple example: traditional security has always been outside-in, kind of firewalling, you know, secure the edge and then come inside, and only recently have we started securing identity and data and APIs and so on and so forth. And recently I hosted a podcast where one of the thought leaders and I were having a discussion, which is: to defend your territory, and you're going back to your personalized software, only when you deeply know your business, your identities, your flows, your data, when it's actually inside-out, percolated into all aspects of your business, then it's actually very easy to [00:42:00] detect the anomaly of an attacker, because the attacker is actually trying to guess your stuff. But we were never built that way. So we are never reimagining security as a whole story. All the old guard, or what you're talking about, the Palo Altos and the SASE, they're identifying whatever you're doing; it's more of a convenience kind of hack.

Ashish Rajan: Yeah.

Audience: But I feel like we are not gonna get to securing the world of autonomous systems.

I'm not seeing any bold ways of new paradigms or security reimagined as an inside out. This is like personalized software, personal security. In fact, almost to a point where every citizen needs to think about your own agents like Open Claw, your own personalized red and blue team that's guarding your own turf.

And I am not seeing that level of this stuff. I want to see if you're seeing that level of new frameworks or paradigms.

Ashish Rajan: You're in for a treat. You should answer this question.

Elias (Lou) Manousos: Well, I think it's time to reimagine security with...

Ashish Rajan: Prevention.

Elias (Lou) Manousos: Yeah, with a hundred percent prevention, first of all, and with a programmable approach. So there's three [00:43:00] control planes: you have your endpoint, you have your network, you have your cloud, if I really dumb it down. Endpoint is probably where humans will interface with agents. That's gonna be the device, either your phone or your PC or Mac, whatever. So if we continue to use the same kernel-level approach, we lose that ability to interface with agents and control them.

Of course, if it's static, we lose; it needs to be programmable, and it has to start from the ground up. My thought on that, and you mentioned models: each company has their own way that they work, the problems that they solve, the roles and responsibilities, the business outcomes, the intent of their users. And I believe, if the endpoint can model that intent, you now have a way to burn it down to first principles and build it straight up, and then apply security controls based on smaller [00:44:00] domain-specific models that don't just know about security, but know about your organization.

Elias (Lou) Manousos: Well, one example might be: you have a remote workforce, and a remote worker handles customer support activities for you. Part of the way attackers get into the enterprise now, specifically North Korea, is they will convince a remote worker to outsource their job, and then use very basic tools like Zoom, which are already approved to run in the environment, and they share remote access through Zoom.

That is a very different type of threat vector. If you know that, for your organization, these are not the common patterns, you can now detect that type of threat on the endpoint, which would look totally normal to EDR. It would look totally normal to everything else in your environment. So those types of [00:45:00] things are now possible.
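To sketch the kind of check Lou is gesturing at, here is a hedged, deliberately simplified illustration: baseline which application features each role actually uses, then flag sessions outside that baseline. The event schema and the hand-coded role baselines are invented for illustration; Ent's actual intent models would be learned and far richer than a lookup table.

```python
# Hypothetical illustration: flag app features that fall outside the
# observed baseline for a given role. Real intent modeling on the
# endpoint would be learned, not hand-coded like this.
BASELINE = {
    # role -> set of (app, feature) pairs commonly seen for that role
    "customer_support": {("zoom", "meeting"), ("zendesk", "ticketing")},
    "it_admin": {("zoom", "meeting"), ("zoom", "remote_control"), ("ssh", "session")},
}

def is_anomalous(role: str, app: str, feature: str) -> bool:
    """True when this role has no baseline use of this app feature."""
    return (app, feature) not in BASELINE.get(role, set())

# A support rep granting Zoom remote control looks normal to an EDR
# (approved binary, no malware), but violates the role's intent baseline.
print(is_anomalous("customer_support", "zoom", "remote_control"))  # True
print(is_anomalous("it_admin", "zoom", "remote_control"))          # False
```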

Ashish Rajan: I think it's a great question to ask in general as well, 'cause what you've said is definitely proven. I do wanna take a moment for you guys to have some last thoughts on this space: a 30-second version of how you separate the signal from the noise for all the pitches people are gonna hear at RSA. Thirty-second version, Edward?

Edward Wu: Yeah. For me, it's about proof points. So ask about deployments, actual, you know, paying deployments, not friendly design partners, because some of those never turn into paying engagements, which doesn't fully validate the outcome of the technology.

Ashish Rajan: You mean?

Elias (Lou) Manousos: Yeah, clearly identifiable pain is only possible with real deployments, so I would agree. And then the outcome of that deployment: what value did you provide, and if you're using AI, did the AI actually provide that value? And I believe that can be reflected in the speed of detection, the breadth, or the depth, or [00:46:00] cost. Yeah, I know it's not always about cost; especially for, like, a government, it's really about mission success.

Ashish Rajan: Yeah.

Elias (Lou) Manousos: So I think there are very few people who have the experience to build these systems at scale. There are quite a few who are trying, but you do need to have seen the mistakes of the past, I think, to fix it this time around.

Ashish Rajan: And a final thought, maybe food for thought for the audience as well: is it fair to ask these vendors the question, what model are you using? Are you using more than one?

Elias (Lou) Manousos: I think that's a very fair question to ask.

Ashish Rajan: Thank you for answering my question. But this is a good one, at least for people in the European crowd, 'cause there's definitely the EU AI Act, where you're responsible for sharing what you're using, all of that as well.

With that said, thank you everyone for joining us and asking the questions. Really appreciate all the support. Go on, clap for everyone. Thank you. But please continue talking to these people over here as well.

They've done a great job. Thank you everyone. Thanks.

Thank you for watching or listening to that episode of AI Security Podcast. This was [00:47:00] brought to you by techriot.io. If you want to hear or watch more episodes of AI Security Podcast, check them out on aisecuritypodcast.com. And in case you're interested in learning more about cloud security, you should check out our sister podcast, Cloud Security Podcast, which is available on cloudsecuritypodcast.tv.

Thank you for tuning in, and I'll see you in the next episode. Peace.
