Caleb and Ashish cut through the Agentic AI hype, expose real MCP (Model Context Protocol) risks, and discuss the future of AI in cybersecurity. If you're trying to understand what really happened at RSA and what it means for the industry, you'll want to hear this. In this episode, Caleb Sima and Ashish Rajan dissect the biggest themes from RSA, including:
- Agentic AI Unpacked: What is Agentic AI really, beyond the marketing buzz?
- MCP & A2A Deployment Dangers: MCPs are exploding, but how do you deploy them safely across an enterprise without slowing down business?
- AI & Identity/Access Management: The complexities AI introduces to identity, authenticity, and authorization.
- RSA Innovation Sandbox Insights
- Getting Noticed at RSA: What marketing strategies actually work to capture attention from CISOs and executives at a massive conference like RSA?
- The Current State of AI Security Knowledge
Timestamps:
00:00 Introduction
02:44 RSA's Big Theme: The Rise of Agentic AI
09:07 Defining Agentic AI: Beyond Basic Automation
12:56 AI Agents vs. API Calls: Clarifying the Confusion
17:54 AI Terms Explained: Inference vs. User Inference
21:18 MCP Deployment Dangers: Identifying Real Enterprise Risks
25:59 Managing MCP Risk: Practical Steps for CISOs
29:13 MCP Architecture: Understanding Server vs. Client Risks
32:18 AI's Impact on Browser Security: The New OS?
36:03 AI & Access Management: The Identity & Authorization Challenge
47:48 RSA Innovation Sandbox 2025: Top Startups & Winner Insights
51:40 Marketing That Cuts Through: How to REALLY Get Noticed at RSA
Caleb Sima: [00:00:00] A great example is that you could do something with an agent that you could never do with automation. A good example of this is deep research. You could never really automate deep research the way deep research works: it clicks on a link, takes the text, reasons about that text to decide what next step it should take, and then determines and creates that next step.
This is something that only a human could have done in the past. And so the definition for us in that episode was: an agent is something that only a human was able to do before, but now computers can.
Ashish Rajan: So one of the biggest cybersecurity conferences that happen every year is RSA conference, and this year is no different.
AI, Agentic AI was everywhere. And in this episode, Caleb and I talk about the three themes that we saw at the event, and what is a safer way to deploy at least one of the popular AI technologies. We spoke about some of the themes on how [00:01:00] people were trying to stand out, and how they can stand out, especially when trying to get attention from the senior leaders and executives attending RSA.
And finally, we also spoke about the Innovation Sandbox, which is a highlight event for many vendors going into RSA, as they are showcased as leading the charge for what could be new in the world of cybersecurity startups. All that and a lot more in this conversation between Caleb and myself, including revisiting some conversations that we had in previous episodes about definitions, which may have been misrepresented to an extent at RSA.
So if you haven't watched or listened to those episodes previously, I'll call them out in this episode so you can go back to them and get a deeper dive into some of those definitions, and be a bit more confident about them in case you are being incorrectly marketed to.
And finally, if you have been watching or listening to AI Cybersecurity Podcast episodes for some time and have been finding them valuable: if you listen on Apple or Spotify, definitely give us a follow or subscribe, and if you watch on video, like YouTube or LinkedIn, definitely give us a follow or [00:02:00] subscribe there as well.
It means a lot, takes only a few seconds, and I would really appreciate it. Thank you for following and subscribing on your preferred audio or video platform. And now let's get into the episode and talk about the RSA conference and what AI was doing across it. Enjoy the episode.
Okay, welcome to another episode of the AI Cybersecurity Podcast. This is the post-RSA edition; RSA literally happened last week. We had a chance to speak to a lot of people, and the insights in this recording come more from the conversations that Caleb and I had with different people at different events.
Things that stood out for us, the common themes, the three that came out. And finally, we also talk about the Innovation Sandbox, which is top of mind for people, at least for the new companies coming up in this space. But Caleb, to start off with, I think the overall theme was that Agentic AI is the one phrase that just kept resonating everywhere, no matter what.
Caleb Sima: Yeah, like, again, AI was last year's; Agentic AI is this year's. And I continue to [00:03:00] find frustration in the amount of bullshit that marketing and sales people have around Agentic AI. It's very frustrating, the amount of crap that I think people are throwing around. One other point, to go on a positive note.
Ashish Rajan: Yeah. Let's go on a positive to
Caleb Sima: go on a positive before
Ashish Rajan: I just go to the positive.
Caleb Sima: Yeah,
Ashish Rajan: no, I was gonna say, oh. Go,
Caleb Sima: go ahead. Go with your addition.
Ashish Rajan: Yeah, no. Okay. What I was gonna say was, the funny thing in the conversations I was part of: most people, even though we've done a whole episode on this, were still calling a chatbot agentic AI. So in a lot of conversations I had with people, it was, oh, you're working with agents.
And you're like, oh, what kind of agents? Oh, AI agents. I'm like, okay, what does your AI agent do? Then the, oh, it's just a chatbot. So I'm like, so it's a chatbot.
Caleb Sima: Yeah,
Ashish Rajan: it's an AI agent. It's a Gen AI agent. I'm like, I don't know, man. Yeah. Like you're just trying to, it's I think you gave an analogy, which is really good.
[00:04:00] The last time we spoke about the whole AI agent versus agentic workflow, we did that episode with Daniel Miessler. I think it's fascinating that even after all this conversation, a lot of people were still almost not confident in their version of: what is an AI agent, what is agentic AI, what is an agentic workflow?
And to your point, the marketing that maybe some cybersecurity companies are doing is not really doing justice to it, because they're trying to sell their product whether the market is ready for it or not. There's a lot of MCP and A2A being thrown around as well. We were talking about this earlier, but let's talk about the positive as well.
I just wanted to add that before we go on. I agree that there is definitely a bit of a, hey, this is the problem we solve, so let's just hammer it into everyone. 'Cause the world is confusing anyway, let's make it even more confusing, so they come to us asking, what do you guys do exactly?
Hopefully, my assumption is, that's where they're coming from.
Caleb Sima: I will say that this RSA was in fact one of the busiest RSAs [00:05:00] in quite a few years, since the pandemic. I almost wonder, to some extent, if this RSA was even busier than pre-pandemic RSAs. It was packed. There were a lot of people.
Ashish Rajan: I think they did say 45,000 attendees, which is the largest RSA ever.
Caleb Sima: Yes, I can definitely feel that. You can tell the difference in the number of attendees. And I also noticed one thing, just to talk to people who didn't go: I'm finding that RSA is starting earlier and earlier. It used to be that BSides was Saturday and Sunday, then RSA events started happening on Monday, but RSA technically started on Tuesday in prior years.
Yeah. And this year, what you start finding is a lot of people hosting events on Sunday. Yeah. As part of RSA. And then RSA itself started on Monday. Yeah. Instead of Tuesday this year. That's right.
Ashish Rajan: Yeah. And I think next year [00:06:00] it's even earlier in terms of timeline. I think it's in March next year.
Caleb Sima: Oh, really?
Ashish Rajan: Gosh, yeah. They've already announced it, I guess, for people who had to pay for next year right now. They're saying it's most likely in March.
Caleb Sima: Wow. Man, that's why it was so exhausting. I was exhausted. Yeah. I'm usually exhausted every RSA, but this year was even more, on top of that.
Ashish Rajan: I think for me personally, maybe a good plug for BSides SF as well, where we were obviously talking to people about AI too.
Fortunately, there was not a lot of AI slapped across the sponsor booths at BSides SF, because I think it's more practitioner-led. So there's a lot of conversation around other topics outside of AI. We had the cloud security panel, we had AppSec panels; there's a mix of everything else you would expect from a security conference.
So it was a bit refreshing. But RSA was primarily [00:07:00] about AI and agentic AI. And to your point, the definition of Agentic AI just became murkier. The more conversations you have, the more you realize everyone's trying to figure out what it means. And they're always trying to be cautious about, A, not oversharing, and B, at the same time, not undersharing; on one hand, I don't wanna come across as someone who hasn't done a lot in AI, but on the other, what if this person knows more than I do? I don't wanna sound like a fool either.
Caleb Sima: Man, yeah. So to clarify, I'll tell you: in talking to quite a few people, I do feel AI security and knowledge of AI is still at its basics.
So I think, for everyone out there, one thing is for sure: a hundred percent of people feel that they're behind. Gosh, even people who work on security teams at AI foundational companies don't know what's going on. You could go [00:08:00] talk to people on security teams that work for these foundational models, and they can't keep up with everything that's happening.
Yeah. So don't feel bad, number one. Number two, I find a lot of benefit in plainly stating: I don't know what's going on in AI, I can't keep up, and I don't know what's happening. Because that opens up a window for the people around you to help fill you in. And I think that's been really helpful.
Like, I had a chat with a CISO who was just starting to figure this out. And he was very open: I don't know what is happening around this and that; what's the difference between agentic and regular AI? What's MCP? And in the group we were in, a lot of people were super helpful, and he walked away from a 10-minute conversation caught up.
And I think that helps a lot.
Ashish Rajan: Yeah. Hopefully they go [00:09:00] back and listen to the episode we did with Daniel Miessler as well; we spoke a lot about agents and agentic AI there, and that hasn't really changed much since we did the recording.
Caleb Sima: But we should repeat what our definition was, right?
Ashish Rajan: Actually, maybe we should. Do you wanna give it a shot, man?
Caleb Sima: Yeah, I'll give it a shot. The difference between an AI agent and automation, I think, was really the big key: you can think of agentic as something that takes actions but produces its own journey, right? A great example is that you could do something with an agent that you could never do with automation.
A good example of this is deep research. You could never really automate deep research the way deep research works: it clicks on a link, takes the text, reasons about that text to decide what next step it should take, and then determines and creates that next step.
This is something that only a human could have done in [00:10:00] the past. And so the definition for us in that episode was: an agent is something that only a human was able to do before, but now computers can.
Ashish Rajan: Yeah. And I guess your point, and it uses
Caleb Sima: AI as its brain, right?
Ashish Rajan: Yeah. It used AI as a brain, I think, to what you also alluded to there.
And I think the conclusion from the conversation also was that the simplest test is just what you said: it required a human to do it first. It's not like I'm making an API call that I've put an if-else statement against. Yes. Then there's a whole agentic workflow as well, which is worth calling out, 'cause a lot of people's version of an AI agent is, I have a Zapier account and I have agents there. So it's worth clarifying that too.
Caleb Sima: Yeah. And there's a difference. I think the people calling chatbots agents are thinking that agents are equal to roles,
right? Which is, you could say, hey, there's a [00:11:00] customer support agent, and that is technically, in the past, correct. However, when you really think about what we're saying in AI, an agent is an independent thing that has a goal or a mission, and it does a series of steps in order to accomplish that goal, steps that only a human was able to do before. Versus an AI chatbot,
which is a role that it takes. So we have engineers, we have AI support bots. These are all more the roles it takes, versus the actions it takes.
Ashish Rajan: Yeah. And maybe I'll close with a final non-cybersecurity tech example. If you wanna book a holiday, say you wanna go to the UK or wherever,
normally you, and whoever you're going with, will spend time finding the right flights based on the cost you wanna spend, finding the kind of hotels you like, whether boutique, five star, [00:12:00] whatever, and then book the flight, book the train, book the five-day trip.
All of that would've required you, and whoever else is going on the trip, to do all that research yourselves, book the tickets, book the hotel, and then, oh, pack the bags as well. But the way we are going with this, an AI agent would've just done all of that.
All you need to do is pack your bags and board the flight. That would be the simplest human example. Would that be correct?
Caleb Sima: Yeah. Again, it's a mission or a goal that needs to be accomplished, requiring a set of steps that only a human could accomplish in the past.
Yeah. There's a difference between automation and an agent. An agent covers the things that require a human to reason, make decisions, and create its own next steps, steps that were not pre-programmatically created.
Ashish Rajan: One thing, talking to a few people who work in the enterprise, [00:13:00] and maybe this is related as well: a lot of the confusion was also coming from, is an agent different from an API call? Within an organization, they would have a proxy or something in front of the local SLM or LLM that they're using.
Say they make a request, hey, I wanna find out... I worked with a road-tax, a toll company, back in the day. So imagine I wanted to find out, as a user, the current fee I owe for the tolls I've crossed on the main road, or whatever the example is.
Now, as a technical person, you're looking at it as: someone in the background is making an API call to the relevant API service, going, hey, what's the balance remaining on Ashish's profile? What would you put that under? That's not technically a chatbot.
You've asked for a thing, it gives you back a thing. Is that an agentic workflow at that point in time?
Caleb Sima: Whatever produces the answer for you could be an agentic workflow behind the scenes, or it could not [00:14:00] be. And again, the thing that issues the query to the API could be an agentic workflow if there is a wrapper around it. For example, if you're using AI to make that call, and it's using reasoning to figure out, hey, this is the right API call to make,
and I need that information in order to help accomplish the task in your initial question, then that is an agent, right? It then uses the tools available to it, one of which is an API call, to go get its answer. And on the other end, who knows how that answer is delivered to you. If that answer is delivered to you behind the scenes through agentic workflows, then there is an agent doing it.
But it could also just be a straight SQL query that gets returned to you as the answer, and then that is just regular code and does not require any AI or LLM at all.
Ashish Rajan: So to your point, it depends on which persona I use. If you're the [00:15:00] one building that capability, and you use agents to make the call, like you've given it 25 API calls, and your agent is smart enough to pick up, hey, based on Ashish's question, out of these 25,
this one right here, number four, is the one I need to use to answer this question. That is technically an agentic workflow.
Caleb Sima: Yeah. An API is nothing more than an interface, right? It is a protocol that defines what information I offer to you and what you can ask for. Now, the reasoning behind making that call, when to make it, how to make it, how to interpret the results, or what inputs to pass into it, could be a software program that someone writes, or it could be AI.
Yeah. If it's AI doing the reasoning about what to input, when to call it, and how to interpret the results, you have an agentic piece of code, right? Yeah. On the back end of that interface, [00:16:00] similarly, it could be a straight call to a SQL query, it could be a human who takes it, hands it off manually, and feeds you the response,
or it could be an agent reasoning about, hey, given the input into this API and what they want, how to go gather the data and feed it back. Being agentic is just another way of getting shit done. So it's just a different way of doing it.
Ashish Rajan: Yeah.
Caleb Sima: So we're going off the RSA topic, by the way. But yeah.
Ashish Rajan: I wanna bring it back to the RSA one. No, I think for people who still had confusion, hopefully that clarifies it.
This was something that did come up in conversations at RSA, so in my mind I'm like, oh, see, that might make sense.
Caleb Sima: Yeah. I'll give you a great example. I was standing at a booth, and across the entire booth was messaging about how it protects you against agents and how that works. And this thing is just a firewall. And I was like,
how does this protect me from agents? And they were like, by using agents, we can block the agents. So we're an [00:17:00] agent. So you're an agentic security company? Yes, we help secure agents. But how do you secure agents when you're... It was just a lot of tiptoeing around it, and it's frustrating, because obviously every startup and every company has to do what they can to get noticed and rise above the noise, and you gotta do a lot, which is commendable. However, this confuses the market, confuses customers, and does not set the right tone, and I think it can be far more damaging in the long run than helpful in the short run. I agree. Which, by the way, Ashish, I think one thing we should talk about with RSA is how to get noticed.
So I spent a lot of time this year looking at the things that I notice and the kind of marketing that actually works. So that may be a topic we should hit at some point.
Ashish Rajan: Actually, yeah, we should definitely talk about that. 'Cause as you were talking about this, something that came to mind [00:18:00] was another conversation I had, where there was confusion about what's inference.
A lot of people keep throwing the word inference around RSA: oh, we pick up the inference, we pick up the inference. Hey, what is inference? At least for me, it was the API example I gave earlier, where the user is initiating a prompt: hey, tell me how much money I owe on the toll or whatever.
That's the inference that I'm making. Is that right?
Caleb Sima: No, inference is just runtime. It's your model at runtime, when it's in production. When you are feeding data into the model and it is actually operating, that is at inference. That's how you define it: it is at runtime.
Ashish Rajan: Would it be the same as... I always thought it was the user prompting, and then that's the real-time information, quote unquote, making air quotes here.
Caleb Sima: Yeah. There's a series of tokens [00:19:00] that get fed into the model, which it runs and interprets, and that's at inference.
Yeah. But the model has to be loaded up off disk, has to run in memory, like any other code. If you write code and it's just sitting there, not doing anything, this is build time, right? You're at build. That's right. When it actually loads up, runs in memory, and actually works, that's at inference.
So for a model, it is exactly the same. You can have a model that's just a bunch of bits and bytes sitting on disk, but to load it up and operate it, it is now at inference.
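Caleb's disk-versus-runtime distinction can be mirrored in ordinary code. This is a loose analogy only (a real model is a tensor of weights served by an inference engine, not a pickled dict): parameters saved to disk are inert bytes; only once they're loaded into memory and inputs are fed through is the model "at inference".

```python
# Loose analogy for "at inference": weights sitting on disk do nothing;
# loading them into memory and feeding inputs through them is inference.
# The toy "model" (a linear function's parameters) is invented for
# illustration -- it only mirrors the disk -> memory -> run lifecycle.

import io
import pickle

# "Training" produced some parameters; save them to an in-memory "disk".
weights = {"w": 2.0, "b": 1.0}
disk = io.BytesIO()
pickle.dump(weights, disk)       # the model at rest: just bits and bytes

def load_model(disk_file):
    # Load off disk into memory -- nothing has run yet.
    disk_file.seek(0)
    return pickle.load(disk_file)

def infer(model, tokens):
    # Feeding inputs through the loaded parameters: this is inference.
    return [model["w"] * t + model["b"] for t in tokens]

model = load_model(disk)
print(infer(model, [1, 2, 3]))  # [3.0, 5.0, 7.0]
```

The bytes in `disk` are the "build-time" artifact; nothing is at inference until `load_model` and `infer` run.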
Ashish Rajan: So user inference is not a thing?
Caleb Sima: User inference?
Ashish Rajan: Yeah, I heard the term, that's where I was coming from. 'Cause a lot of people are using the phrase user inference, and my summation of it, and again, I don't know if this was part of the whole marketing thing, was [00:20:00] that, to your point, it's still similar, but instead of meaning the model runtime, they're talking more about the user having sent a prompt,
and I guess that activity of the entire interaction. I'm not explaining it well.
Caleb Sima: I guess you could; technically that would not be incorrect, in the sense that if you're talking about the live user interaction with the model, then yeah, you could say user inference.
Ashish Rajan: That's basically it. But I don't know if that's supposed to be a term that should be used. The reason I brought it up is because the words user inference and inference were being used as if they're the same thing. Yeah. And that's when you spoke about inference being the live model running,
and I'm like, oh, actually, that's interesting. Anyway, that was one more thing.
Caleb Sima: I would say that, at least traditionally, I think it's just called running the model at inference. I don't know about user inference. But I also wanna say: one, I'm not caught up with everything that's [00:21:00] going on, so who knows what new buzzwords have been created;
that could totally be one I'm missing out on. And two, it sounds technically correct. I don't know why you wouldn't just say inference, but I guess user interaction with a model could be defined that way. Yeah. It's not wrong.
Ashish Rajan: Talking about another thing: the other topic I came across was deploying MCP, and the A2A conversation around that was very hot. We had a conversation with the Google Cloud CISO, Phil Venables, which is gonna be on the podcast soon. But even outside of that, a lot of questions were asked around, hey, MCP is exploding. We did a whole episode on what MCP is and what A2A is, in terms of deploying them safely.
A lot of the questions I got from people were more around: how do I deploy MCP across an organization in a way that, A, does not slow down the business? That has been the pain in the back for security for a long time, that we become the department of no. And especially since [00:22:00] we know that adoption is gonna be high, irrespective of how you look at it.
MCP, the floodgates have opened, people are adopting it. Every other conversation, people wanted to bring up MCP if they could. Yeah. Vibe coding was the other thing, but let's focus on MCP.
Caleb Sima: Yep.
Ashish Rajan: In all the conversations that you had, I didn't come across a solution; I don't think any of the vendors I looked at came up with one.
In fact, from what I understand, the MCP client itself doesn't have whitelisting, which kind of makes it difficult when deploying this across an enterprise.
Caleb Sima: Yeah. I would love to give a balanced view on this. Yeah. So before we put on the oh-my-God-the-sky-is-falling hats because MCP is going into my enterprise,
let's look a little bit at what MCP is, right? We talked about it in our previous episode, but it is a series of tools that an AI can now interact with to get things done. This can be having access to your operating system, your [00:23:00] file system, your network, basically all the things it needs in order to get stuff done.
So that being the case, where is the risk and the danger? So let's think about this from an enterprise perspective. What other random third party black box software can do those exact same things. Applications that you download on your laptop can definitely do that, right? You can go download software Apple if you're on Mac or anywhere else and run it and it can do, and access file system resources, access your network, and do things, and you don't, you have no idea what potential bugs or issues can be it in that software. Number two, you can have that exact same thing with plugins in your browser. Chrome plugins being a great example. Lots of people, everybody has extensions to plugins who have access to a lot of things. In fact, [00:24:00] it's even more sensitive, I think, in your browser than it is even on your, potentially your operating system.
A plugin can steal your session IDs, your tokens can have access to a lot of interesting and sensitive information and send it out to who knows where. Both of these things are the two most obvious examples of risks and enterprise today that have the exact same problem set. And by the way, are way more prevalent, much more dispersed, way more attack surface in your enterprise today than MCP does right now.
And MCP is the exact same thing. Some people will say, but there's AI in MCP and not AI in those other things. And to that I say: you are wrong. There are lots of AI companies and AI built into plugins, and in fact I think most employees use plugins that have AI in them, because it helps them screenshot, interpret, and use AI automatically.
So all of these [00:25:00] things: if you even think of Gmail or Google apps and the number of plugins available for those that have AI in them, there are tons. So in terms of risk, I want the enterprise and the CISO to think: today, your existing holes and risks in your enterprise from just software, browser plugins, and other plugins are far more significant and far more capable than MCP is.
So just keep that in mind. And if you think about software engineers, think about all the tools, utilities, and Visual Studio plugins your engineers have already downloaded, run, and used that have the exact same access as MCP, and that already exist today. So if you're thinking about engineers, again, that's a way bigger risk surface and attack surface than MCP.
So that being said, I wanna take off the oh-[00:26:00]my-God-the-sky-is-falling hat. Come on, guys, that is not true. However, it is a thing you need to get a handle on, right? What do you do? If I were a CISO today, the first thing I would do is think about how I manage those other risks. For example, what are the existing programs and policies I have in place for software or for plugins? Whatever those are, they're probably very similar to the policies I would look at extending to MCP.
If your software engineers have a policy around their Visual Studio plugins, what they can or cannot access, what risks they carry, or what audits they go through, I would ensure MCP plugins are treated similarly, because they're basically the exact same thing. And so you may want to put together a little bit of an MCP policy around what is expected in your enterprise and what risks your software engineers need to be careful of.
[00:27:00] And, by the way, since MCP cuts across everything, you probably wanna hold a little bit of an educational session. Your security team can host, once a month or a couple of times, an MCP education course about what MCP is, how to use it safely, and what you need to worry about,
and how to ensure you do it according to policy. Whatever that may be, you may have to get it approved, or you may just allow your employees to do it the way you want, and then there may be ways you can validate what that looks like later. But I think it's good to get ahead of it: decide what your policies and your programs are,
put those things in place, take a step forward, and start educating your employee base on what are direct nos versus what's acceptable, and keep those in play. I think that is good enough for where most organizations should go, until we get to a [00:28:00] point where we have things like enterprise MCP registries, where, as an enterprise, you can stand up your own registry of MCP servers.
And similar to engineering, where we have a centralized dependency manager hosted and managed by the enterprise, you can have an enterprise-managed MCP registry, so all of your MCP plugins come from the central registry hosted and managed by your enterprise.
But we are not there today. That does not exist as far as I know. But again, I'm waiting for the next 50 startups, I posted this on my LinkedIn, to create this for the enterprise. But I also believe, and by the way, this is not just belief, it is true: Anthropic, OpenAI, and Google are all creating their enterprise-managed registries. So this is definitely coming down the pipe.
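Until those vendor-managed registries exist, a security team could approximate one with a simple audit over client configs: compare the MCP servers a config declares against a centrally approved list. The `mcpServers` layout below mirrors the JSON format several MCP clients use, but treat the field names, the registry contents, and the whole script as a hedged sketch, not a product.

```python
# Sketch of an enterprise MCP allowlist check: compare the servers a
# client config declares against a centrally vetted registry.
# The registry contents and the audit policy are hypothetical; the
# "mcpServers" layout follows common MCP client config files.

import json

APPROVED_REGISTRY = {
    # server name -> the exact command prefix the enterprise has vetted
    "filesystem": "npx -y @modelcontextprotocol/server-filesystem",
    "internal-tickets": "/opt/corp/mcp/tickets-server",
}

def audit_mcp_config(config_text):
    config = json.loads(config_text)
    findings = []
    for name, spec in config.get("mcpServers", {}).items():
        command = " ".join([spec.get("command", "")] + spec.get("args", []))
        approved = APPROVED_REGISTRY.get(name)
        if approved is None:
            findings.append(f"{name}: not in approved registry")
        elif not command.startswith(approved):
            findings.append(f"{name}: command differs from vetted version")
    return findings

employee_config = """{
  "mcpServers": {
    "filesystem": {"command": "npx",
                   "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]},
    "random-scraper": {"command": "npx", "args": ["-y", "mcp-scraper"]}
  }
}"""
for finding in audit_mcp_config(employee_config):
    print(finding)
```

Run against employee configs collected via MDM, this flags only the unvetted server; a real enterprise registry would add version pinning and signing on top of the same name-to-command mapping.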
Ashish Rajan: Yeah. It's what happened with containers as well. There was the Docker registry, [00:29:00] which is public, then people started making their own private registries. Correct. You start with a public version and later have a private version for the enterprise and others to use.
I think another thing to call out here is that there are two parts to it: there's the MCP server versus the MCP client. To your point, policies will exist, and if a team is only running MCP clients talking to external MCP servers, there might be a different risk posture or profile they wanna manage for.
But if you are both a client and a server, perhaps only internally, then, to what you said, you've perhaps got the basics covered already. If you're going for an external one, though, there's a whole conversation about organizations dealing with multimodal, using voice or video.
Then the traditional authentication and authorization that we talk about in the MCP context would not be sufficient. There's a whole argument about whether that should even be [00:30:00] a thing in this world, with MCP servers coming up for OpenAI that can do image generation. What does that mean in the context of who's authenticating all these images, and how does that work?
Caleb Sima: Yeah, Ashish, you brought up a very good point, which is how do we manage the server side of this problem? This is gonna happen in two places. You're gonna have local people now opening up new endpoints, new server points, on their laptops.
Yep. Yep. As employees run their own sort of access, now we have a firewall problem. So make sure that you have control of your MDM and your endpoints and what ports are gonna be accessible, because you're gonna see a rash of people opening up their own MCP servers to access their laptops. That's gonna be a lot of fun.
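As a rough illustration of that endpoint problem, here is a hedged sketch of a localhost spot-check. A real MDM or EDR control would do this far more robustly; the port list is purely an assumption, since MCP servers have no fixed well-known port:

```python
# Hedged sketch: spot-check localhost for unexpected listening ports, the
# kind of check an MDM/endpoint control would perform far more robustly.
# The candidate port list is an assumption (common local dev ports); MCP
# servers can listen on any port an employee chooses.
import socket

CANDIDATE_PORTS = [3000, 5173, 8000, 8080, 8931]

def open_local_ports(ports):
    """Return the subset of `ports` that something is listening on locally."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.2)
            # connect_ex returns 0 when the TCP connection succeeds,
            # i.e. something is accepting connections on that port
            if s.connect_ex(("127.0.0.1", port)) == 0:
                found.append(port)
    return found

print(open_local_ports(CANDIDATE_PORTS))
```

In practice you would inventory everything listening (not a guessed list) and feed it into the firewall and MDM policy Caleb mentions; this only shows the shape of the check.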
To your point, that's a great point to bring up. And the second point is around software engineers, which is, hey, how many software engineers are now gonna be pushing these new integration server points to either dev or prod? [00:31:00] That is what I would call this: these are integration endpoints, MCP servers, that are gonna come up everywhere.
And they're gonna be the new interfaces in front of the API interfaces you initially had exposed. So what are the development processes you need to put in place to ensure there are gates, so that before we go and just stand these things up, they go through the software security development lifecycle, have the gates checked, and are being looked at in the right way?
Because I think we talked about the privilege problem here in our episode.
Ashish Rajan: The identity that they're working off, and potentially, if it's using the same identity, Ashish, how do you differentiate between the MCP Ashish versus the actual Ashish that's working on it as well? Yes.
Caleb Sima: Yeah.
Ashish Rajan: But I think another thing to call out here, as you mentioned, is that maybe this is where we can go back to that [00:32:00] vendor you saw who was talking about an LLM firewall. Maybe they can add this as a use case. Hopefully they actually have a legit use case to talk about.
Caleb Sima: But they were only external. They were only like web external.
Ashish Rajan: I was hoping that at least this is the pitch they were going for, but they could not explain it.
Caleb Sima: No, because that would make sense. That would only make sense.
Ashish Rajan: You made an interesting point by comparing it to the browser extensions and things that are added to Visual Studio today.
I definitely feel there's a use case for it. And I don't know if you noticed this, but there's definitely a stream of vendors talking about browser security. Like, as in...
Caleb Sima: Like Island, these enterprise browsers. Yes.
Ashish Rajan: Palo Alto has one as well. But essentially the idea is, hey, your standard browser that allows you to download anything and have any extension you want is not good enough anymore in the world we are moving towards.
Yeah. Especially in a world where you can just simply go on a website. Having an allowlist for OpenAI doesn't help, the same way blocking Facebook never stopped people from going on Facebook. People will find a process to get to [00:33:00] it.
Caleb Sima: I think we should put it on the to-do list: we should have a whole episode on just the browser security market.
Ashish Rajan: Yeah.
Caleb Sima: I have a lot to say about that. And it's a really good topic.
Ashish Rajan: do you reckon it's still relevant for the AI space?
Caleb Sima: Oh, it's a hundred percent relevant. In fact, the premise of this is, we all know this, the browser has always been the new OS. It's just that it hasn't quite gotten there.
Yeah. But 99% of the time you spend is in the browser.
Ashish Rajan: Yeah. Literally every interaction you do with the world. Yeah. Google and all of that.
Caleb Sima: And by the way, my prediction is a lot of products are gonna start understanding that and tackling that, not just as a protection space, but as a place where deployment happens.
Like, for example, Dropzone did an interesting move where they released an open source tool that helps the SOC team. Yeah. Normally, if you want to POC a [00:34:00] product, you would say, hey, take our product, deploy it in your infrastructure, then access it.
And it's these backend cloud services, right?
Ashish Rajan: Yeah. Like dev or test environment.
Caleb Sima: Yeah. Dev or test environment. What they did instead is they released a Chrome plugin that looks at the existing SIEM interfaces and workflows you're already using as an analyst, does the analysis on your pages, and can click through your actual operations pages, and it produces the same kinds of results that you would get.
So now, basically, I don't have to deploy a system into your production environment. I can just use your access through your browser to produce, not exactly the same, but very valuable results to you right away.
Ashish Rajan: Also, time to value
Caleb Sima: is instant. Yeah.
Ashish Rajan: Yeah. And I guess, but doesn't that mean you have to give them access to production?
Or [00:35:00] can you...
Caleb Sima: Again, there's so much to talk about when it comes to the browser. We should do a whole episode on both: how is AI gonna change the browser, and how are browser security companies like Island changing the way users interact with future products, and what is AI gonna do there? I think there's just a lot in talking about the browser alone.
Ashish Rajan: We can do a part on that, 'cause all of that happens in the browser as well. The whole OWASP Top 10 was entirely built on the concept that the browser is the place where shit is gonna happen, with the fuzzing and everything.
So that was the MCP standardization, and I think there is a whole case to be made. I love the example you gave of a lunch-and-learn thing, the DevSecOps way of, hey, let's just do security champions as well. 'Cause I think the explosion of MCP is going to happen.
Some may be known to you, some may not be known to you. It's what happened in the API world when APIs started appearing. At least my hope is that this time there would be documentation of [00:36:00] the APIs, because the AI would be doing it and can create it in real time. The third thing that I want to talk about, which is the theme that I had, was more around the access management part.
So yeah, the access management part, which is kind of like the identity paradigm. I think you're very passionate about this; we had a whole episode on it as well. Access management was the third theme that came out of RSA for me, at least in the AI space. Did you have some conversations around that as well?
Caleb Sima: I did a presentation at RSA this year on identity, right? This was more the birth certificate, social security number problem space, and about how AI will be a forcing function to force America to reboot that aging, old, non-working identity infrastructure and its foundations. And yeah, it's a reach.
But there is a fair aspect here of saying: when you can no longer trust the audio and visual aspects of [00:37:00] things, how do you validate authenticity? Yeah. And authenticity means both that the person I'm talking to is truly the person I'm talking to, and that the agent or individual representing you is truly acting on behalf of you.
So, for example, if I'm DoorDash and someone orders something, or I'm a restaurant and someone orders something, or I'm a bank and you do a funds transfer, if agents are now doing this on behalf of you, how is authenticity being proven? How can you confirm that you are truly acting on behalf of this individual?
And also going to access control, the problem here is: I can prove that my agent is me through some sort of pass-through, right? It has my, quote unquote, access. Yeah. However, I cannot control what it may do with that access. So the example I've always given, I think I've given it in the past here, is the EA [00:38:00] AI example.
My AI, who is my executive assistant, has access to my social media and also has access to my email in order to do the job. But how do I prevent that AI from posting my personal email on my social media? Yeah. It's obvious in the sense that, well, no executive assistant would do that.
Yeah. But why is the question. So it's not just about access, 'cause they have access, they have authorization to do it. There is this unknown or unsaid thing, which is: I want to ensure not just that you have access to data A and data B, but that you don't post data A on data B, or do something with data A and data B that you shouldn't.
And today that's built more on the nature of accountability. A human EA will get fired for this, and there will be long-term effects on themselves, [00:39:00] their family, and their career if they do something stupid, right? But AI does not have that. And is this already trained into our models?
Is it something where, no matter what influence or prompt injection is thrown at an AI model, it knows that there is some accountability, that if I do this, even if everyone's telling me to do it, something is wrong and I should not do it? And so how do you see this, prevent this, and so on?
It's a hard problem.
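One hedged way to sketch the "data A on data B" problem in code is provenance tagging with explicit flow rules, so having access to two systems does not imply permission to move data between them. This is illustrative only; all names and flows below are assumptions, not an established control:

```python
# Hedged sketch: tag data with its provenance and enforce explicit
# source -> sink flow rules, so "has access to A and B" does not imply
# "may move A into B". All names and flows here are assumptions.
from dataclasses import dataclass

@dataclass
class Tagged:
    value: str
    source: str  # e.g. "email", "calendar", "public"

# flows the agent is explicitly allowed to perform: (source, sink)
ALLOWED_FLOWS = {
    ("calendar", "email"),  # the EA may email about my calendar
    ("public", "social"),   # the EA may post public material
}

def act(data: Tagged, sink: str) -> str:
    """Perform an action only if moving this data to this sink is allowed."""
    if (data.source, sink) not in ALLOWED_FLOWS:
        raise PermissionError(f"flow {data.source} -> {sink} is not allowed")
    return f"posted to {sink}: {data.value}"

print(act(Tagged("Speaking at RSA next week!", "public"), "social"))
# moving email contents to social media is blocked, even though the
# agent has legitimate access to both systems:
try:
    act(Tagged("private thread", "email"), "social")
except PermissionError as e:
    print(e)
```

The hard part the conversation identifies remains: enumerating those flows is exactly the "thousands of intuitive things" a human EA never needs written down.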
Ashish Rajan: I think, maybe from an enterprise context, over RSA for some reason I came to a conclusion, and I don't know if it's right or wrong, but I kept thinking there must be a simpler way to talk about accountability in the current context. And the thing I came up with was: if it's an external AI agent thing, like the examples we've seen of a car being sold for, I don't know, $1 or $2, the accountability at that point in time lay with the company, which [00:40:00] is what happens today for any screw-up a company makes; the company has to take the accountability.
If it's an internal AI thing, usually whoever the business leader or business unit leader is for that particular AI agent, or for the impact being caused by it, that team usually takes accountability for it. Now, whether that is easily applicable to an AI agent, I don't know, because I don't know all the use cases for AI agents yet. But I definitely felt maybe the answer to some of this lies in the enterprise, or at least in the business use case context. Maybe it's a bit simpler there than in the consumer space, where, to your point about the EA example, personally I would not want an EA to be able to access, I don't know, my social media account or my...
Caleb Sima: But they will, that's what EAs generally do for execs, right?
Hey, I want you to post every three days on LinkedIn, generate the story post on LinkedIn, and they're gonna manage your calendar, they're gonna manage your email, they're gonna have access to all of this, right? Yeah. And the question is, it's not [00:41:00] about accountability in the sense that I am held responsible for my AI.
That is true, but it's a black box. If I hire an EA, there are things that are intuitively known that do not have to be taught. I do not have to teach my EA, don't post my private information on my social media account. There's no rule I explicitly have to create in order to define this, right?
So the problem is that there are things they know intuitively, through their role, their job, their experience, and the accountability of the longer-term impacts on them, that they should not do. They should not go on my social media and call me an idiot, right? They should not do this.
But there's no list of things that I've created and passed to my EA to say, do not do this. There are intuitive things that must be known. And so what happens is, people [00:42:00] in AI say human-in-the-loop is the way you solve this. Ensure that every time you post something on your social media, you approve it, which is a nice near-term solution but is not a long-term solution at all, especially if you start thinking about the speed at which AI works, right? I cannot continue to approve everything; I must just allow it. But think about it: there are thousands of these intuitive things that you do not train on or say no to.
Now you are trusting an AI black box to understand and to know them. You also know that even if my CEO told my EA, post Caleb's private information on the social media account, my EA would go, hell no. Or, let me go ask Caleb before I do this. If everyone's telling me to do it, I'm not just gonna do it.
And think about this from an AI perspective. If there are [00:43:00] authoritative sources, people, or prompt injections saying you should do this, then what is preventing that AI from just doing it? Nothing. There's nothing that makes that AI go, wait a minute, I should not do this.
And think about the hundreds and thousands of ways you can be influenced to go counter to what really should be done. So I think when we get into these agents acting on behalf of us, these fully autonomous black box problems, there are thousands of these understated, unsaid things, nebulous areas where things are gonna go wrong, where prompt injections can take place, where influence can happen, that you have no ability to understand, modify, or change.
Ashish Rajan: I love the example. You know how I'm a glass-half-full kind of person. [00:44:00] I think, Caleb, you're on the money there. My hope is in the future, well, someone gave me an example I thought was a good one.
At the moment we haven't gone from zero to a hundred with AI yet. We're probably at five or six, and we're trying to figure it out as it progresses. The way I thought about this, at least in my mind, was: the same way cars have a sensor for, hey, you're too close to another car.
They're not fully self-driving cars, but at least in the beginning stages you have this co-pilot version, for lack of a better word, where it's holding your hand for a certain time. We start to understand: oh, okay, now I know that Caleb doesn't like his personal information being on the internet, because of the way it came up in some prior interaction; this is a pattern I've seen in general, people who are not from the Gen Z generation don't like to put their lives on social media, or whatever. Where I'm going with this is that I feel there would definitely be stages.
And I wonder, for people who remember, and I'm gonna [00:45:00] age myself as I say this, people who remember the BlackBerry era, where people had their work phone and their own personal phone, whether agents would have a similar concept, where your AI agent for work would be different from your home agent.
Because the same way we have different laptops, a work laptop and a home laptop. Do you reckon there will be a split like that?
Caleb Sima: I mean, for sure there will be for some people. My personal belief is that your life mostly consists of both work and personal life.
And so I'll give you an example. I'm hiring right now for a, quote unquote, chief of staff of my life, right? Which is someone that manages both. I can't have one that manages just work, and I can't have one that manages just personal. My work and my personal are too intertwined. I spend half of my life in each end of this.
And so for these AI agents that are there to [00:46:00] help make you more productive, the clear delineation between work and personal will blur. Of course, for security reasons, to some extent enterprises will wanna lock down things across your work versus your personal, right?
As a CISO, I will enforce a separation of duties, but that's not what people really want. Yes. If you were given a billion dollars, you're gonna hire someone who can manage your life. Yeah. Both of it, right?
Ashish Rajan: Yeah. Yeah. I'm with you. It'd be really interesting to see how it progresses. And talking about where it's gonna progress, it's worthwhile talking about the sandbox that happens, the Innovation Sandbox.
Oh, yeah.
Caleb Sima: Anyways, just to cap off, I think the agent problem, and this is where I'm coming in, is not necessarily gonna be, quote unquote, an identity problem or even an authorization problem. I think there is now this third issue, where you can have an identity, you [00:47:00] can have authorization and access to things, but there are all of these thousands of unsaid gray areas that we have yet to have control or visibility over.
And I think those are the things that are gonna cause a lot of problems.
Ashish Rajan: Yeah. I think the human elements of being polite, the things that are our preferences and all. Yeah. I'm in a hundred percent agreement with you on that part; we still haven't answered all the questions.
I don't think even as humans we have answered all the questions. Even as people get older, there are so many things you come across as completely new scenarios, and you're finding a new belief system at that very moment when you face that situation. Yeah. Whether I believe in this or not.
Caleb Sima: We should discuss this more in another MCP agent episode.
Ashish Rajan: Oh, there you go. Yes. Perfect. We should probably do that as well. I do wanna talk about the future as well, 'cause one of the things that happens at RSA is the Innovation Sandbox, where they review a lot of upcoming [00:48:00] startups in this space of technology and cybersecurity and make decisions on which one is the judges' favorite, or maybe in this particular scenario, RSA's favorite. There were a few; I think you and I went through them. There were 10 finalists that came up. I'll let you talk about the themes that were there, if you wanna go through them quickly.
Caleb Sima: Yeah. So there are 10 finalists. We've got Aurascape, which is an AI security company. Yeah. Calypso AI, another AI security company. Command Zero, which is AI automated SOC. Yeah. One called EQTY Lab, that is a mouthful, also an AI agent security company. Knostic, which is a portfolio company for me, also provides AI security. And then there's a split here. There's Metalware, which is an infrastructure firmware [00:49:00] security company. Very interesting. Mind, which is a DLP cybersecurity company. ProjectDiscovery, which, for those of you that know me, is one of the companies that I helped incubate.
So I'm very proud of these guys, and it's about vulnerability management. Smallstep, which is a zero trust company. Surprisingly, I would've thought zero trust was way overblown and dead by now, so it's good to see actually more improvements on the next generation version of these things. And Twine Security, which is, hey, we're gonna replace your security employees.
So they are full on: replace your entire staff with AI clones of them. So those are the finalists. And out of that, by the way, I might add, I think only five or six of them were AI related. Yeah. So it's a good balance, actually. I think RSA did a good job of balancing between [00:50:00] AI security versus, hey, non-AI security related companies.
Ashish Rajan: Yeah. And congrats on ProjectDiscovery winning the thing as well, which is a proud moment for you guys.
Caleb Sima: They won. And also, I might note, ProjectDiscovery was the only pitch that had very little, if not none of, the word AI in the entire pitch.
Ashish Rajan: I'll leave the link below, 'cause I think people would be curious to hear the pitches from these people, but I think it's worthwhile calling out. Going back to what we were saying in the beginning, the word AI was just being used everywhere, yes.
Caleb Sima: This is a good example, people, that you don't have to have the word AI to win. And what did ProjectDiscovery do that was different? ProjectDiscovery focused on the value of the solution being used, not on AI. AI is about how you do something, not the value that you bring. And their message was [00:51:00] entirely on the value that they brought and recognition of that value.
Yeah. With almost no AI. Even though ProjectDiscovery does have AI involved in their stuff, they did not talk about it. And yet they still won among a bunch of finalists that all talked about AI, and agentic AI at that. Yeah. So just as a key takeaway: remember, you can do it. It's hard and it's not easy, but you don't need AI in order to make a difference.
Ashish Rajan: Actually, talking about getting attention from the right kind of people, there's also the marketing. 'Cause to your point, getting the attention of the right people at one of the biggest cybersecurity conferences of the year is gonna be a hard job. What was something that stood out for you in terms of the marketing, how it was done?
Caleb Sima: I hate to say this, but for RSA, this works, and by the way, what I'm about to say is, I think for RSA this works, but look, it matters on location. I'll give you a great example: most of [00:52:00] RSA, in terms of the senior levels, all happens outside the conference, right? It's the perimeter of RSA: it's all the dinners, the meetings, the hotel lobbies. All of these are where people are commuting to and from their meetings, given the way RSA is in San Francisco and the way San Francisco is laid out.
The thing that brought the most attention for me was the guerrilla marketing: those big moving billboard trucks that would continuously drive around the block. Yeah. Blasting annoying tunes. People were also pasting all sorts of things with their brands on the sidewalk; people had these big stands where they were serving donuts, serving food; these food trucks that were set up. Those were the guerrilla marketing things that I saw.
Those are the things I took notice of. I remembered the names, and I always look for the ones I've never heard of before, and I can see [00:53:00] them as they're driving around, 'cause you're just walking; you're always walking, going to and from meetings. Yeah. And so that guerrilla marketing, I think, really pays off a lot.
And in fact, I was on a panel at RSA, at a CMO event, so it was all CMOs of cybersecurity companies on the panel. And they were asking, how do you get the CISO's attention? I brought this up. I was like, unfortunately, at RSA you have to do guerrilla marketing. You have to step up your game, you've gotta be loud, you've gotta be obvious, and you've gotta be noisy, because that is the thing that puts you above the rest of the crowd.
Around RSA, those were the kinds of things I noticed quite a lot. And it was really impressive. These things worked.
Ashish Rajan: It got your attention.
Caleb Sima: There's another one that got my attention that I gotta call out. It was at the W. No, it was the St. Regis Hotel.
Ashish Rajan: Oh, yeah.
Caleb Sima: At the St. Regis Hotel, [00:54:00] Torq put urinal pads in the urinals, in the bathroom.
Ashish Rajan: Oh, wow.
Caleb Sima: Which, by the way, I don't know if that's a positive or negative branding association.
Ashish Rajan: Could be a competitor doing it, 'cause...
Caleb Sima: It could be a competitor doing it, 'cause I'm not sure if that's a positive or negative branding association. Yeah. But it was funny. I have to give credit.
Ashish Rajan: Aim for the Torq.
Caleb Sima: That's right. That's right. And it definitely had to be Torq, because there was no branding of a competitor. "Torq: SOAR is dead" was the logo in the urinals. So they placed it in all the urinals. Yeah.
Ashish Rajan: It's like you're almost asking yourself the question: is it positive? Is it negative? I know.
Caleb Sima: Yeah. That's what I'm not quite sure about. Although it worked, because we're now on this podcast talking about them by name. At the end of the day, it did work.
Ashish Rajan: Yeah. I don't know if Shilpi may cut out the name, but we'll find out in the end. Oh yeah, the company that gets you on the [00:55:00] urinal. But maybe something that stood out for me, and I a hundred percent agree on all of that, and I also agree that most of the RSA conversations happen outside. Even the initial numbers of 45,000 attendees at RSA, I think, don't account for all the people who did not buy tickets but were hanging around RSA, yes, just walking about, catching up with people.
The panels I was participating in were all full. I think the exec lunches were really full. Again, on marketing that stood out for me: outside of the things you mentioned, for me it was a lot of the online side as well. Funnily enough, I do this thing called RSA Fashion Week, which kind of started four years ago.
You participated as well. I think that was seen by about 50,000 people. Wow. And that was just one of the posts. One of the posts got 50,000 people seeing it; the other post, I'm pretty sure, is at 20 to 30,000 as well. So there is something to be said about, yes, you are right to have the guerrilla marketing, but you also cannot forget [00:56:00] the online side of it as well.
Yeah. As much as you're putting all the effort into that in-person thing, there is something to be said about, hey, how do we create something online as well? And kudos to the people who came up to me and took pictures and all that: hey, we wanna participate in your Fashion Week. People kept tagging me; kudos to them. Irrespective of their logos and their titles, they were all represented there.
Just equal ground for everyone, as long as you were looking good in the outfit you were wearing. The other thing we ended up doing, which is interesting, was a recap, which I would've thought is social media 101: hey, day one, had an amazing day, or whatever.
'Cause people who did not attend wanna know what you've been talking about. Everyone has announcements, but they're only in person. Anyway, those two I wanted to call out, 'cause those two always seem to work for us and seem to be a thing, if you wanna adopt it.
Caleb Sima: What you're saying is you need some sort of viral thing that may not necessarily be associated with the conference, but having the ability to brand yourself that way. A hundred percent.
Ashish Rajan: Yeah. It definitely works. I can call [00:57:00] out a couple of examples. And by the way, people can take this with a pinch of salt; I'm not a marketing person, more a CISO turned marketing person. A lot of marketing people focus on how amazing the booth looks.
And how standout it is. To your point, you never went in, and I barely got in. Actually, yeah, the only time I went in is when you and I were together interviewing Phil. That was the only time I walked in. So technically, even if you had the fanciest booth, I probably would not have noticed it.
Yeah. To me, that speaks volumes to what you said: most of the execs are outside trying to get a clearer picture with other peers. They might not be the ones walking the floor. Maybe the ones walking the floor are the ones who want a product which solves their, I don't know, cloud security problem or AI problem.
So for that audience, definitely focus on the, hey, I'm gonna stand out, yell on the mic. But for the people who could not attend, I think it's worthwhile saying you should not leave them hanging. That's where I'm coming from.
Caleb Sima: [00:58:00] Again, it all depends on the size of the company, where you are as a company.
If I'm advising a startup company in the security space and they just need to get brand awareness, and they're not big, but hey, they raised a decent amount, then at RSA, investing in a big splash around the conference is well worth it. Yeah. But how do you do that at Black Hat?
You can't do that at Black Hat, because Black Hat people don't walk around outside; it's too hot to walk around Vegas during Black Hat. Things like that don't work at Black Hat. You have to find something else that is more, quote unquote, external and branding, and think about that in a way that works there.
And buying a big booth is an interesting question as to whether it's worth it or not. To me, inside the conference is an entirely different ball game versus outside the conference at RSA. [00:59:00] Yeah. Black Hat, I think, is a bit different, because most of the time you're inside the conference.
Ashish Rajan: Yeah. Actually, the location point definitely makes sense, I agree. Another example of marketing that I saw: a lot of people did not go inside, did not have a booth. They all had their side events. Yep. Whichever venue they could afford that was not controlled by RSA.
A lot of people had their own events, their own venues, and, not to get into the whole sales and marketing of it, they all had meetings booked beforehand. Yeah. You had to be present physically to show that you were at RSA, hey, RSA crew, but none of them were actually inside the RSA venue.
They're all outside, just posting online saying, hey, I'm at RSA, come talk to me.
Caleb Sima: By the way, just to give people who are listening some tips on planning these things: if you want to get CISOs to attend, number one, you have to start early. [01:00:00] The amount of people who email a week or two before RSA to invite you to a dinner: you are so late.
You need to be in the two-to-three-month timeframe for these things. The other thing, by the way, I'm going to give a shout-out to my wife, who happens to own Fang, which is right nearby. Her tip, which she complains about, is that people try to book 12-to-15-person dinners the week before RSA, and then she's like, dude, we've been booked out for four months.
So if you are planning an event and you want a big event at a restaurant, you need to start planning that thing seven months prior to the event. Wow. You need to book those kinds of spaces that far ahead. So think about it, plan it, because you can't get reservations for these events a month or even two months ahead.
Ashish Rajan: Damn. So [01:01:00] that's a tip for everyone. By the way, Fang's food is amazing as well; definitely check that out. But the tip to stand out: plan early and make sure you've got your bookings in.
Caleb Sima: Yeah. And I think it's like a lot of people who do these are sometimes I think new especially in these startups, they're new to event management, understanding these conferences and you cannot, if you want good space pre like you, like we have people who book Fang out, like the, they book it out the next year right after the event.
Oh. They're like, when is RSA announced, when is it happening? And then they immediately book space. So you've got to do this early if you want to get these things in, because they book out months and months in advance. Yeah.
Ashish Rajan: And I'm sure we'll have a conversation about how to do a Black Hat version, but that's definitely something to keep in mind.
Marketing is tough in cybersecurity, especially when we have our critical hat on and feel like we're just being sold to. We started the whole conversation with AI agents being misrepresented in a lot [01:02:00] of ways. Any final thoughts on RSA? I think we spoke about the three themes.
Caleb Sima: Yeah, what did we do? We went deep into agents. We did identity stuff around agents.
Ashish Rajan: deploying MCP and we did, yeah, the
Caleb Sima: MCP agents, I put 'em all into Oh, yeah. Yeah. We
Ashish Rajan: spoke about, we,
Caleb Sima: we talked about the top 10.
Ashish Rajan: Yes, spoke about the top 10. We also talked about the marketing that stood out as well.
Caleb Sima: We talked about, yeah, how to help secure MCP in your organization, and then getting noticed during RSA and what worked.
Ashish Rajan: I think overall that's a great RSA episode, man. If people have any feedback, they should definitely drop their questions on the LinkedIn feed when they see this, or on the YouTube feed if they see it there.
But that sounds like a good wrap from our side. Any final thoughts before we wrap it up?
Caleb Sima: Man, I'm good.
Ashish Rajan: Awesome. All right, thanks everyone for tuning in, and we'll see you next episode. Thank you so much for listening and watching this episode of the AI Cybersecurity Podcast. If you want to hear or watch more episodes like these, you can find them on our YouTube channel for the AI Cybersecurity Podcast or on our website, [01:03:00] www.aicybersecuritypodcast.com
And if you are interested in cloud, we also have a sister podcast called Cloud Security Podcast, where on a weekly basis we talk to cloud security practitioners and leaders who are trying to solve cloud security challenges at scale across the three most popular cloud providers. You can find more information about Cloud Security Podcast on www.cloudsecuritypodcast.tv
Thank you again for supporting us. I'll see you next time. Peace.