The Zero-Click AI Hack: How to Contain the Blast Radius of Autonomous Agents

View Show Notes and Transcript

Is an AI agent's identity a workload or an action? Ashish spoke to Elie Bursztein, Distinguished Research Scientist and co-author of Google SAIF (Secure AI Framework) about how it is neither and that is exactly why our traditional security models no longer apply to the AI era

In this episode, Ashish sits down with Elie to explore the evolution of AI from a passive "brain in a jar" to an active agent that takes actions on your behalf . Elie breaks down the reality of Indirect Prompt Injection, sharing a recent zero-click exploit where simply sending a malicious Google Calendar invite caused an AI agent to execute unauthorized commands .  

If your organization is building agentic workflows, this conversation provides aroadmap. Learn why you must treat agents like contractors with a verifiable "mandate," why the order of tool execution matters (never let an agent access private banking data and then browse the open internet), and how the industry is moving toward "semantic firewalls" to contain the AI blast radius .


Questions asked:
00:00 Introduction
02:50 Elie Bursztein’s Background & Creating Google SAIF
07:50 Defining AI Agents: The "Brain in a Jar" vs. Real-World Action
11:00 Agent Identity: Is it a Workload or an Action?
13:30 The Concept of an AI "Mandate" (The Contractor Analogy)
19:30 Translating Natural Language into Verifiable Smart Contracts
24:50 The Missing Semantic Layer in AI Observability
25:30 What’s Next: Agent Identity and AI Privacy
27:30 Indirect Prompt Injection: The Zero-Click Google Calendar Hack
30:00 Containing the AI Blast Radius & Tool Execution Order
33:30 Building a Semantic Firewall
36:00 The #1 Rule for Safely Deploying AI Agents (Start Small)
40:30 Hobbies: Writing a Book on Innovation & The Playing Card Heritage Foundation
44:50 Favorite Food: Yakiniku (Japanese BBQ)

Elie Bursztein: [00:00:00] Can we trust the AI to perform an action on our behalf? And that is the only problem we have. The agent only can interact with the world with tools. So our job is to contain the Blast Radius to access private data. And then if you call the internet, that is not good because you have an next field point.

Ashish Rajan: Is the identity a workload or is it a action?

Elie Bursztein: So the question is to realize that none of our traditional security model framework. Do apply. Yeah. It's something else. We believe that you should not get a car before you get a driving license. You are getting the genie out of the bottle. Yeah. And you're giving it access to the world.

Ashish Rajan: If you haven't been keeping tab on the whole AI space, you probably have heard the word of SAIF, specifically AI safety. But did you know there's something called SAIF as well, which was released by Google some time ago, and I was fortunate enough to speak to Elie, who was part of that team that created SAIF in Google.

At the Munich MCSC conference recently, and we spoke about what is the current challenge with AI that they are seeing, that [00:01:00] the community is experiencing, the companies that work with experience. They also spoke about what the present challenges are, specifically agent identity, which is clearly exploding everywhere.

How you could approach agent identity and what's required to be able to manage that at scale. We also touched on what the future is like different stages where what people were talking about, say up until now, who had already started a journey pretty much straight after Gemini was released or open, AI was released to agent identity, which is at the second stage of where the growing problem is coming from and where the future would be heading towards.

All that in a lot more in this episode with Elie from Google DeepMind. He is a distinguished research scientist at Google DeepMind. So this conversation was really insightful on the scale of how some of these things work and how you can actually learn something and perhaps you can use some of that for your organization as well.

I hope you enjoy this episode, and if you know someone who's working on the. AI security space and wants to understand how does something like Google SAIF comes into play and how you can use AI safely. What are some of the fundamentals you should be looking at? This is the episode for that, and [00:02:00] do share this with someone who's also working on this particular idea as well.

And as always, if you have been finding AI security podcast episodes helpful and if you. Probably here for the third or fourth time. I would really appreciate if you could take a quick second to drop us a follow, subscribe, whichever platform you listener watch this on, whether it's Apple, Spotify, LinkedIn, YouTube, where everywhere you wanna learn from podcast interviews like these, it only takes a few seconds.

It's free for you and it means a lot because it helps us to reach more people. So thank you so much for taking that. Second, I hope you enjoy this episode and I'll talk to you soon. Peace. Hello. Welcome to another episode, thanks for coming on the show.

Elie Bursztein: My pleasure.

Ashish Rajan: Maybe to set things up for motion.

If you can share a bit about yourself. What are you doing, where you are at, what was your journey into cybersecurity or AI security?

Elie Bursztein: Sure. So I work at Google.

Ashish Rajan: Yeah.

Elie Bursztein: I'm a research scientist and I've been doing AI for before it was cool. So it's like 20 years at least. Both in the context of security.

Yeah, very briefly. I work on a bunch of things like the Gmail Spam classifier, Google Antiviruses. [00:03:00] We most, we have done something like password checkup, nothing to do with AI on password bridge stuff. And then recently, I mean, in the last few years where, you know, AI exploded. So really, really focused on that.

Yeah. And uh, we, in the context of AI security, I'm one of the author of SAIF Yeah. Which is a secure AI framework by Google, which was. I think one of the first, when people start to really ask the question of what should we do with our AI workload? How do I control them? Yeah. What is the security implication?

I think it was, and then we evolved it. So that's what I do.

Ashish Rajan: Yeah. Yeah.

Elie Bursztein: I do a bunch of research on other topic not as relevant to the conversation.

Ashish Rajan: Yeah. Awesome. And I think, uh, maybe just to give, uh. A short version of SAIF because, uh, as much as people who may be deep into the AI security space may understand it, but what is SAIF for people who probably have not heard of it?

And what does it mean for the general security audience? Yeah.

Elie Bursztein: So I think SAIF is a safe intent.

Ashish Rajan: Yeah.

Elie Bursztein: When we write, rotate was first an internal thing.

Ashish Rajan: [00:04:00] Mm-hmm.

Elie Bursztein: So you have the AI moment, I would say, I mean the LLM moment. Yeah. Where people got very excited about, it GPT and the time Google's product, which was Bob

Ashish Rajan: Yeah.

Elie Bursztein: Which wake up Gemini and a lot of internal team come to the security team, right. And say, okay, I want to use ai. What is the guideline?

Ashish Rajan: Oh, right, okay.

Elie Bursztein: Right. We at Google, we have this, you have security review.

Ashish Rajan: Yeah.

Elie Bursztein: Obviously for the product. That's, and so people are like, okay, what am I supposed to look for?

Ashish Rajan: Yeah.

Elie Bursztein: And so we. We took a small group of people who had experience in running those AI workload and we say, okay, what is new? And also what is the same?

Ashish Rajan: Okay.

Elie Bursztein: And surprisingly, a lot of the principle, and there was a bunch of principle which were released, and then we wrote a larger set of document and we had control and we have risk.

And if people are really interested. The whole thing is on my website. I have slides.

Ashish Rajan: Oh, right. Yeah. Perfect.

Elie Bursztein: I have all the [00:05:00] slide public, uh, people can look at them. Yeah. Yeah. Google have publish all the things. So really it's just a facing, it became some of a, let's answer the question of what, what are you supposed to do?

And to be honest, we didn't really know. Right. Okay. Fair.

Ashish Rajan: Fair. I mean, when you start from something for the first time. Yeah.

Elie Bursztein: But, but we have some ideas and I think those high, those ideas have helped pretty well.

Ashish Rajan: Yeah.

Elie Bursztein: Okay. Like, for example, don't forget the basics, it's like. You should make sure that access control is in place and sandboxing.

Yeah. And you know, those are things that are not new.

Ashish Rajan: Yeah.

Elie Bursztein: And I think a lot of people are for very good reason, very much focused on the new thing, like prompt injection, which I'm sure have been discussed to death.

Ashish Rajan: Yes. All kinds of it as well.

Elie Bursztein: All kinds of it. And people were very concerned about the model behaving and people were con, which is a really, really important concern.

From a, an a personal standpoint.

Ashish Rajan: Yeah.

Elie Bursztein: Yes. And we also need to make sure that, well, you don't have your model which are publicly available [00:06:00] on the, on the cloud bucket. You need to make sure that you have DDoS protection. You need to make sure you do proper authentication, logging, traceability. That might sound boring, but that's what security is, right?

Yeah,

Ashish Rajan: yeah, yeah. I mean, it's all, it's all the boring stuff that keeps it safe as well. Right,

Elie Bursztein: exactly. And I think that was more like, Hey. Keep the good stuff you already been doing, add a little bit. And by doing that you can probably manage the first wave. And that was the initial intent of SAIF. And of course, nothing survive contact with reality.

So, you know, three years out it's a little bit has to evolve and it moved to, which is, uh, an alliances. Uh, we donated the standards, we become an industry thing. And you know, I think as a community, we still figure out what to do, right? Yeah. It's not like it contained. The answer. Yeah. There's no the answer.

But it was our, let's tell the world how we do it internally, where we think we are.

Ashish Rajan: Yeah.

Elie Bursztein: So that we can have a conversation and maybe you find it useful and

Ashish Rajan: Right.

Elie Bursztein: People find it useful. And then that's what [00:07:00] it,

Ashish Rajan: I'm, I'll definitely dig a bit more into this because I think a lot of people would want to know, to exactly what you said.

How do I use AI safely? And there's a lot of those questions. What I would say is the. One of the popular things that keep coming up is the agent identity, which is, and maybe the whole agentic model as well. 'cause I think agent, agent and age agent, you and I obviously are here in Munich at the moment.

We were part of that startup night last night, and it was very, in general, people have an assumption of what age agent is. And Mark is as, I think someone in their pitch even called it out that, Hey, my marketing told me to put agentic and sells more. And I think if you can lay from your understanding, how do you define agentic?

We'll start there and then we can get per word deeper.

Elie Bursztein: Yeah. So I have, and that's my own definition. I tend to try to be very practical.

Ashish Rajan: Yeah. And

Elie Bursztein: so for me, a model, the best analogy is. A brain in a jar. So you [00:08:00] talk to the genie and the genie talk with you. Yeah. That's what the first thing. It was like you talk to it and then you close the page from your perspective.

And behind the scenes the model context is offloaded. Yeah. And we don't,

Ashish Rajan: yep.

Elie Bursztein: Right. And there is no interaction with the world. That's why I create in a jar. And so you're getting answers, but then, and we, we should come back to that. I think that the best, most useful analogy, the blast radius is zero. Yeah, I mean you, that's fine.

You, I mean, it's not zero from the risk of safety. So there is problem with mental health, and that's a very different topic, but from a point, from a security point of view, the model has zero ability because it's just outputting text.

Ashish Rajan: Yeah.

Elie Bursztein: Modular sun boxing. And the model would put in code and you copy passcode.

So there is

Ashish Rajan: Oh yeah, yeah. There's a lot more layers to it as well.

Elie Bursztein: Yeah. But the point is, basic model is you talk to the model. And whatever you do with the model output is on you.

Ashish Rajan: Yeah.

Elie Bursztein: And so the security of that is a little bit different from [00:09:00] where agent is.

Ashish Rajan: Yeah.

Elie Bursztein: For me, an agent is, and that's, I think that's going to be very interesting when we talk about what we want to do for identity.

To go back to that definition because it helps anchors the discussion in practicality is. You want the AI to perform a action

Or multiple action on your behalf.

Ashish Rajan: Yeah.

Elie Bursztein: And the whole problem, and the only problem we have is can we trust the AI to perform an action on our behalf?

And that is the only problem we have.

All of, we have many, many sub problems, but fundamentally, every time people I talk about agent, it's like, do you trust the agent to take care of your health? Take care of your finance. Take care of your security. Take care of whatever you want.

Ashish Rajan: Yeah.

Elie Bursztein: And I think this is very important of this notion of the agent is acting on your behalf.

Ashish Rajan: Yeah.

Elie Bursztein: And taking action. And action have consequences.

Ashish Rajan: Yeah.

Elie Bursztein: Whereas the previous model is like, well, you talk to it well. The risk is follower. And I think that's, that's how Right. It's a good mental model [00:10:00] for security.

Ashish Rajan: Yeah. Yeah. And to your point, the difference to what you're saying is the initial version of AI was that brain and a jar.

The genie that you keep talking to respond back to you. Obviously there are a lot more layers to it. Agen is now, this brain can now take action that has hands now. It can touch things, it can reach out to your GitHub, it can reach out to your Google Cloud buckets. Can do a lot more. And I think that's where, to what you said about the trust piece, which, and the foundation too of that is identity as well to a lot of ways.

And you had this interesting thing, we, you and I were talking about how to even see identity in this ENT world. And you asked me an interesting question. I'm like, I'm I, the question was more around the fact that is, is the identity of workload. Or is it a action?

Elie Bursztein: Yeah,

Ashish Rajan: And to me and I, I think it was definitely something that I'm like, oh, actually that's an interesting one.

And for people who probably have been on red forms, there is a similar question on the, Hey, is a hamburger, sorry, is a hot dog a sandwich? Is it kind of a question where can we, depending on [00:11:00] how you look at it. So I'm curious, how do you respond to the agent identity question and where does that, how do, do you see it as a workload or do you see it as an action?

Where do you sit on that?

Elie Bursztein: So I think so. For me, the question is a so provoking one to showcase. Actually S neither. I think it's like It's

Ashish Rajan: neither.

Elie Bursztein: Yeah. And that's the interesting thing about, and actually the question is not for me, it's from Anton who exposed me to it. So credit is due to the person who Sure, sure.

The one. But I think it's interesting when we ask people where is it a workload like. Is it a process or is it a user? Like, do do we, do we show a full credential? People are like, well, not really. And so for me it's more of a then the current model doesn't work. Right. And I think the goal of the question is not to have an answer.

Everyone have a different answer.

Ashish Rajan: Yeah. Yeah.

Elie Bursztein: The what? The question is to realize that none of our traditional security model. My mental model of framework do apply. It is something else.

Ashish Rajan: Yeah.

Elie Bursztein: So that [00:12:00] is how this question is useful. I, everything's answer is interesting, you know? Yeah. You get interesting answer from it.

That's

Ashish Rajan: right. Yeah.

Elie Bursztein: And you get different thought and we get different line of reasoning.

Ashish Rajan: Yeah.

Elie Bursztein: But that really, the question don't have like a true answer. Answer is probably something else.

Ashish Rajan: Or

Elie Bursztein: in between or,

Ashish Rajan: yeah. And I guess to, uh, you, you touched on something I'd love for you. Double click on the whole aspect of, um, people look at.

I agent. How agent identity and I started my career and I am right, so I, and I'll, I'll confess. I used to be someone who could not get, could not wait to get outta identity access management. I said, this is the most boring field in the world. How many username, passwords, how many single sign-ons can I actually solve in this, in this world?

Right now, obviously this has followed me through cloud. It's following me through ai. I in my naive brain agent ai for me in the beginning was simple as, so me as a she, as a workforce employee, however you call it, I have my username, password, my quote unquote corporate identity. And if I enable and I [00:13:00] an agent, I either have a system user identity that I assigned to it, or I gave it my credentials.

Elie Bursztein: Yeah.

Ashish Rajan: Uh, I could not think of a third one, but, and you can go deeper into, it's an API credential, username, password, whatever you want to. That's how simple I kept

Elie Bursztein: it. So it's a process or it's a it's a user, right?

Ashish Rajan: Yeah, yeah, yeah. So I think that's how I describe it. But then in to what you said, uh, the, the context is the, is the missing part where, what was the context that I'm using it for?

And that then you go, go into a, a bit more deeper go, oh, what would access control look like? Would I trust everything here? So how do you, because a lot of people already have defined identity in their organization. The example that I spoke about where every Ashish in their organization has a username password.

Elie Bursztein: Sure.

Ashish Rajan: They may not have dealt with what agentic identity would be like. So how do you kind of approach it and what, how do you want others to approach it?

Elie Bursztein: Sure. So I think it's, we are not far.

Ashish Rajan: Okay. 'cause

Elie Bursztein: for, I think we just need to think about it. Into a different way. Okay. And if we, [00:14:00] if we change our mindset a little bit, I think it's, it become easier to understand what we need to do.

Ashish Rajan: Yeah.

Elie Bursztein: Doesn't mean it's easy. But I think it's simpler to think Right. Simpler doesn't mean in any shape here the way you would think about it. And I give a simple example than we can talk about more on the enterprise setting, how you would translate that. But the simple thing I like to do is an. An agent is not a, you can do it as a contractor if you like.

Your metaphor of enterprise when you bring in a contractor, you bring a contractor to help you doing slide.

Ashish Rajan: Yeah.

Elie Bursztein: Doing accounting, doing a website or whatever you want. But the contractor has what I call a mandate.

Ashish Rajan: Yeah.

Elie Bursztein: Yeah. And so same thing. Apply to many shape of life of like lawyers have a mandate from you to.

But from some legal action on your behalf. But they are not the same. If you are, if you go to your, to, to your banker

Ashish Rajan: Yeah.

Elie Bursztein: He has maybe a mandate from you to, I dunno, buy stocks.

Ashish Rajan: Yeah. Yeah.

Elie Bursztein: Maybe take

Ashish Rajan: care of your money. [00:15:00] Don't lose the money. Sure,

Elie Bursztein: sure. Yeah. But you say, well, please buy stock from me.

Ashish Rajan: Yeah.

Elie Bursztein: It doesn't mean you have give him permission or her

Ashish Rajan: Yeah.

Elie Bursztein: To Jose to have a loan and issue a million dollar loan on your loan. This is not Yeah,

Ashish Rajan: yeah, yeah.

Elie Bursztein: You ask for a, you don't ask for b. And I think what people have is kind of not tease apart just yet is when you write a prompt.

Ashish Rajan: Yeah.

Elie Bursztein: You asking the agent to perform something so air go, you are trying to get it.

You give it a mandate. And the notion we don't have is what is a mandate of the given.

And so my favorite example is if I want. And I would love to have that if I have an agent and say, Hey, go buy me a spindrift. I love those drinks, uh, on Amazon, I expect the agent to go to Amazon. Yeah. Or to Walmart or to, I don't know VU on.

Ashish Rajan: Yeah. Yeah.

Elie Bursztein: And buy spindrift for [00:16:00] reasonable amount of money. So maybe less than a hundred dollars instead of a luxury bag.

Ashish Rajan: Yeah. Yeah.

Elie Bursztein: It's not the same. Right? Yeah. Yeah. And so my mandate is very much scope to that shop, that product. Deliver to my address.

Ashish Rajan: That's right. Yeah.

Elie Bursztein: Yeah. And so it needs to be verifiable.

You need to be able as an enterprise.

Ashish Rajan: Yeah.

Elie Bursztein: But also your third party needs to be also be able to verify that Yes, Ashish ask ish agent to buy. And then you can see, I can verify the, the principle and we know how to do crypto. The crypto is not the hard part. Right?

Ashish Rajan: Yeah.

Elie Bursztein: Uh, otherwise if you try to solve a crypto problem, you are solving the wrong problem.

Or maybe accept Bitcoin, but

Ashish Rajan: Oh, yeah, yeah, yeah. We have for another conversation. Yeah,

Elie Bursztein: well, sure of that. The crypto stuff, uh, the, the crypto money stuff. If you think about the technology, the technology is a how is not a what or why.

Ashish Rajan: Yeah.

Elie Bursztein: And I think that's where the debate goes a little bit. Not what needs to be from, from a, uh.

Executive or from a ciso, even from a mindset [00:17:00] perspective, which is we need to assert

Ashish Rajan: Yeah.

Elie Bursztein: That the user really has the agent that, and the agent is exactly performing that thing and that's the mandate.

Ashish Rajan: Interesting. But then can that scale like, 'cause I you, I mean, because obviously we are talking about one Ashish

Elie Bursztein: Yeah.

Ashish Rajan: But then in an organization, say 400, or even Google for them thousands of employees, right? How have you, is that something that is a. Can Is that something that is a mandate can be applied throughout an organization easily?

Elie Bursztein: That's going to have to be a standard. It has to be something that every company understand.

Ashish Rajan: So top down, then it should come from right to the top.

Elie Bursztein: Okay. Yeah. We'll have to do standardization. I don't know where.

Ashish Rajan: Yeah,

Elie Bursztein: I don't know who.

Ashish Rajan: Yeah.

Elie Bursztein: But we'll have to agree on a format.

Ashish Rajan: Yeah.

Elie Bursztein: Which everyone can verify. And then every phone and every TPM or you know, wherever you put your crypto material should be able to.

Sign a mandate. Yeah. With your identity.

Ashish Rajan: Yeah.

Elie Bursztein: And then we can have all this supply chain and traceability. But at the end of the day, [00:18:00] even from an enterprise point of view, let's say you have employee A using agent B.

Ashish Rajan: Yeah.

Elie Bursztein: Performing agent, uh, action C.

Ashish Rajan: Yeah.

Elie Bursztein: For traceability or the ability auditability, compliance and security, you need to have this kind of supply chain.

Ashish Rajan: Yeah.

Elie Bursztein: And so that's the notion of mandate where everyone can sign them and say, yes, I verify. Yes. I verify. Other, while we don't have all the auditability, we don't have control.

Ashish Rajan: Yeah.

Elie Bursztein: And so there, this is not, this is a missing piece.

Ashish Rajan: Yeah. Yeah.

Elie Bursztein: Now that is very complicated.

Right. Give you an example. We talk about spin drift.

Ashish Rajan: Yeah.

Elie Bursztein: So it's probably like a product name, but you don't want to be too precise because the agent needs to have like this flexibility that we love.

Ashish Rajan: Yeah.

Elie Bursztein: And then we have probably a specific store, but then what happen is they don't have it in the store you want. And then what price viewpoint really have like 1299 because you know, so you want.

Less than, I dunno, $25. Yeah.

Ashish Rajan: Or you only want to buy when there's a sale.

Elie Bursztein: Or something like that. Right. And then it has to be delivered to your address. Yeah. And then maybe there's other constraints we haven't [00:19:00] sold of.

Ashish Rajan: Yeah.

Elie Bursztein: And so the cons, the constraint are very malleable.

And the user is writing a prompt.

So now we have a problem of natural language to something. Something formal and verifiable.

Ashish Rajan: Yeah.

Elie Bursztein: Which is expressive enough and to migrate demise. And I don't know how to do it. Right. To start with, I don't know.

Ashish Rajan: Yeah.

Elie Bursztein: I think. I, we sort of converging on the problem definition, which is the most important part.

But like the technical part, I don't really know how we're going to do that.

Mm.

Elie Bursztein: The thing which is the closest, and I hate to say that, is smart contract.

Ashish Rajan: Interesting,

Elie Bursztein: right?

Ashish Rajan: We are back to the crypto world now,

Elie Bursztein: sadly.

Ashish Rajan: Yeah.

Elie Bursztein: No, no. Sadly

Ashish Rajan: no. But to, 'cause you know how a lot of people talk about LM as a decision maker and all of that as well, where the policy as a code is natural language driven and we use.

The LLM decide because obviously this is a many few people have, far, few people have actually gone to that extent. A lot of people are still trying to, hey limiting the number of agent identities, how people are approaching it, [00:20:00] because it's hard to kind of say, Hey, now that I have a's AI agent identity.

His access controls are gonna be the same forever. It is hard to mandate that, to your point about building a standard, a lot of people spoke about the fact that because the agent is allowed to be, to what you were saying, be a bit more flexible. How do you have your governance policy, which is, or your mandate in your example, which is still a bit flexible, so that if I were to, in your example, if I were to say, Hey, um, my mandate is that as an organization.

I should only have encryption in things that are external facing, maybe not internal facing, but, and the policy should be on, quote unquote, mandate is still applicable for, but it's different for the external people, internal people. A lot of people went down the path of using some kind of an LLM model to do that because that allows you to describe natural language.

You can share what you want to and at that point in time. Do you see this as a possible option [00:21:00] for it to be a thing where you can use it so as a, as a, as a possible option?

Elie Bursztein: It it's possible. So we could imagine a world

Ashish Rajan: Yeah.

Elie Bursztein: Where you describe the mandate in natural language.

Ashish Rajan: Yeah.

Elie Bursztein: And we have LLM as a judge.

Yeah. Whereas LM review the mandate and decide if the mandate is correct.

Ashish Rajan: Yeah. Yeah.

Elie Bursztein: But in a way, that's what we do in life, right? So for example. If you order something at the restaurant,

Ashish Rajan: yeah.

Elie Bursztein: You pick from a list, which is always different and you order whatever you want. And sometime, you know, you would,

Ashish Rajan: you get what you want.

Sometimes you just

Elie Bursztein: want, yeah. You say, well sauce on the side, or No picker, or whatever that is. Right. And then it's taken by the waiter who pass it to, to the cook who pass it back to the waiter who pass it back to you.

Ashish Rajan: Yeah.

Elie Bursztein: So back to everyone that has a job.

Ashish Rajan: Yeah.

Elie Bursztein: And then you get your food.

Ashish Rajan: Yeah.

Elie Bursztein: But then. You hope that what you order is what you get.

Ashish Rajan: [00:22:00] Yeah.

Elie Bursztein: And if there's a problem is the order was wrong is the food was not done correctly. Yeah. And then usually we prepare your food. But you want to be able to debug the chain.

Ashish Rajan: Yeah.

Elie Bursztein: So even for, and so everything in life is this kind of multi-agent.

Ashish Rajan: Yeah. '

Elie Bursztein: cause you know, the waiter can be viewed as an agent. The crew can be viewed as an agent. Yeah. Yeah. And so. It's like, yes, we can order in plain English. Yeah,

Ashish Rajan: that's

Elie Bursztein: fine. But at some point we need some sort of traceability. That's That's

Ashish Rajan: right.

Elie Bursztein: And we need some sort of verification at the end that the me you ask is no, it's the same thing.

So can we do it in a way which is fully checkable? Can we translate natural language to like a fully checkable language or a smart contract? Maybe is it enough to sign it with your credential and everyone going along the chain to sign it and say, yes, I verified.

Ashish Rajan: Yeah.

Elie Bursztein: Maybe. I don't know. That's why I said I don't know what will be

Ashish Rajan: in

Elie Bursztein: practice, what's going to emerge.

Ashish Rajan: Yeah.

Elie Bursztein: But the concept is [00:23:00] required because if a bad action happen

Ashish Rajan: Yeah.

Elie Bursztein: And you have to report to the board. Yeah. And or you have to tell the world. You have to know what happened.

Ashish Rajan: Yeah. Yeah.

Elie Bursztein: And you can't afford, it's not a. It's like a log, right? Yeah, yeah. You know, we, we talk I know you started with, uh, doing all those cloud stuff, right?

Yeah. And the big thing in cloud for many years was observability. Yeah. How can you, uh, how can you observe a single station across multiple service for debugging, right?

Ashish Rajan: Yeah, yeah.

Elie Bursztein: And people, and we back to the safe idea of where the love thing we have done in the past. Are useful today. It's just we have to recontextualize into the agent world.

Ashish Rajan: Yeah.

Elie Bursztein: That is true. Observability. Observability is not, oh, I recall all the tool call. Sure. But fundamentally, we need to also understand what was the purpose of all those tool call and all those execution.

Ashish Rajan: Yeah.

Elie Bursztein: So that we have this notion of observability.

Ashish Rajan: Yeah.

Elie Bursztein: And we need this SE player because we are the [00:24:00] top of the stack where.

We ask semantic task.

Ashish Rajan: Yeah.

Elie Bursztein: So the ability needs to also be semantic to match the intent of the task.

Ashish Rajan: Yeah.

Elie Bursztein: Right. And so that's, that's, I think that's where we're going, right?

Ashish Rajan: Uh, yeah. Yeah. And I think, I'm glad you brought this up as well. So, uh, I was telling you this earlier 'cause I'm writing a book on how ai, security, engineering, and semantic layer keeps coming up right.

In all the conversations and all the work that I'm doing. But what I find in most organizations, a lot of people have the AI system. Which, whatever log it would provide, that becomes so observability and people move on. But to what you said we are not giving it a logical, what's the word for it? It's not a technical thing we're giving.

So I can use a signature that I can see clearly in a log. It's, it is a semantics of am I asking for the password or am I just saying I just want burgers for lunch today? There's a difference. But as a human, I would understand it because I have the context, I have the se, I understand the semantics. But their logging needs to be able to identify that at a, as a natural [00:25:00] language, quote unquote, which is where a lot of people are not building a semantic layer.

They're just going from AI system, whatever log the AI system provides, and I'll figure out the instant response, the security requirement later. What's the how do you see the semantic layer as a, as a key part of securing AI system moving forward?

Elie Bursztein: No, I think you know, this is a bleeding edge of thinking.

Ashish Rajan: Yeah,

Elie Bursztein: yeah. At least for me, right? This is a thing which is top of the mind at the moment.

Ashish Rajan: Yeah. Yeah.

Elie Bursztein: I think agent identity is a big topic, right? Yeah, yeah. Yeah. So it's two big topic.

Ashish Rajan: Yeah.

Elie Bursztein: At the moment, this is the one where we try to figure out what to do.

The one after, the one after is. Privacy, which will be a completely different ball game.

Ashish Rajan: Mm-hmm.

Elie Bursztein: So my mental model that I always try to understand is what are we doing now?

Ashish Rajan: Yeah.

Elie Bursztein: What is emerging?

Ashish Rajan: Yeah.

Elie Bursztein: And what's next? So what we do now is agent security, and we can talk about that.

Ashish Rajan: Yeah.

Elie Bursztein: I think this one, the agent identity is the emerging one, which we need to be solved next.

Ashish Rajan: [00:26:00] Yeah.

Elie Bursztein: And the one right after that, with the rise of personalization and private data into model privacy is going to be coming focus. That's my best guess.

Ashish Rajan: Yeah.

Elie Bursztein: Again.

Ashish Rajan: I mean, we can't see the future, so yeah.

Elie Bursztein: Can't see the future disclaimers just,

Ashish Rajan: But we can just record you on, on the call and just be like, Hey, he said privacy would be the future,

Elie Bursztein: I think.

Yeah. Privacy is going to be,

Ashish Rajan: oh, yeah. Yeah, a hundred percent. It's just a, it's a, and it's a multi-layer problem as well, because we go into ownership. Who's responsible for it. There's so much, and I mean, I'm not a privacy expert. I hear these things and I'm like, yeah. As a, as a. Person who has data on the internet with all these providers, I would be concerned what happens and if my agent is doing my order instead of me, and there's a whole thing there.

I'm gonna take a step back onto that first current problem that we were talking about. Agent security.

Elie Bursztein: Yeah.

Ashish Rajan: How do you see agent security today? And is that maturing or has already matured? Is that why it's a present situation today? Like how do you see agent security today?

Elie Bursztein: I think it's, it's [00:27:00] maturing. I think some, I think we found we finding the right abstraction.

Ashish Rajan: Yeah.

Elie Bursztein: And we're finding the set of tool if I take a step back to, to what we said at the beginning, we said, well, the, the initial generation was brand a jar, and then we moved to agent, which is performing action on my behalf. And so we need to change the threat model. And the threat model for the model was don't get prompt injected.

Ashish Rajan: Yeah.

Elie Bursztein: Is it still a problem? Absolutely. If you move to an agent, two things emerge. So first one is indirect prompt injection, which is this idea that, well, you are processing interested data that you get from the tool.

Ashish Rajan: Yeah.

Elie Bursztein: So maybe for the viewer who don't know, the way we have an agent is essentially the model is say given a set of tools by describing them and then it does a tool code.

So we decide, yeah. What function to call, which, which argument? And then get data [00:28:00] back.

Ashish Rajan: Yeah.

Elie Bursztein: And the data is mixed with your instruction, and then you hope that the thing is going to the right thing. Yeah. That's, that's the basic of it, right?

Ashish Rajan: Yeah.

Elie Bursztein: Yeah. And so now we have indirect pump injection where because the agent is interacting with the world,

Ashish Rajan: yeah,

Elie Bursztein: they are.

There is a surface attack surface, which is for far more complex. One example from this week, and again, it's not to single out anyone because. You know, every week is a different week. Yeah. Of who, who have problems. It's more interesting to learn about them. There is this, uh, remote, this this remote compromise on Claude where they figure out that if you interact with Google Calendar

Ashish Rajan: mm-hmm.

Elie Bursztein: If you put instruction into the calendar, event Claude will just execute it.

Ashish Rajan: Oh

Elie Bursztein: yeah. You do work for, you say, Hey, process my counter to say yes or no. Right. Something like that. Yeah. Yeah. And then the content of the event. Is processed by the context. Yeah. And say for example, visit that website and the model is like, maybe I should learn more about this event, so let me go to the website.

And obviously that's not what you want.

Ashish Rajan: Yeah.

Elie Bursztein: And [00:29:00] there you go. Zero click because you just have to send a Google event.

Ashish Rajan: Yeah.

Elie Bursztein: And it's not that Google event is insecure, it's not that crude is by itself. A problem is just that we need some sort of prompt injection defense. And that's an example of indirect prompt.

Ashish Rajan: Yeah. Yeah.

Elie Bursztein: Right. And so it's like. Exercise type two if we want to think on the web framework. Yeah. Exercise type one on the brain of the jar and know S type two is a problem.

Ashish Rajan: Yeah.

Elie Bursztein: And they might be persistent, right? You might put them into storage. And then So we're really into the exercise type two world.

Ashish Rajan: Yeah. Yeah.

Elie Bursztein: Right. And so, so reason I mentioned that is because it's very important to always uncover in what we know.

Ashish Rajan: Yeah.

Elie Bursztein: So we understand what is the delta, what we need to do. Yeah. It's not like we need to invent everything because otherwise. It's overwhelming.

Ashish Rajan: Yeah.

Elie Bursztein: At least for me it's too much. Right?

Ashish Rajan: Yeah. Yeah.

Elie Bursztein: Being very concrete. And so if we're in that world, we know that exercise, for example, can only be contained. And so we have a emerging good mountain model, [00:30:00] which is a blast radius. I don't know about who come up with it. I use it,

Ashish Rajan: so I think a military term. Yeah,

Elie Bursztein: yeah. Military term, but I don't know who brought it to the agent.

Well,

Ashish Rajan: oh yeah,

I think it's been clean, the cloud word, but Yeah. But people understand what Blast Radius

Elie Bursztein: is. Yeah. Yeah. So you have a blast radius and so the agent only can interact with the world, with tools.

Ashish Rajan: Yeah.

Elie Bursztein: So our job is to contain the blast radius.

Ashish Rajan: That's right.

Elie Bursztein: Right. That's all there is to, again, simple, not easy.

Yeah. But like, if we simplify the model, we, we keep the eye on what's important.

Ashish Rajan: Yeah.

Elie Bursztein: And so depending on the tool the agent has,

Ashish Rajan: yeah.

Elie Bursztein: You are going to have a different blast ratios. Yeah. And I'm going to give a few example.

Ashish Rajan: Yeah.

Elie Bursztein: Before I do that, I think where the model is still matching

Ashish Rajan: Yeah.

Elie Bursztein: Is people are treating the tool as a list, as a static list.

It is not the case.

Ashish Rajan: Yeah.

Elie Bursztein: The question is in which order are you, you calling the tool? Mm-hmm. And I'm going to give a concrete example so to, you know, to make it very concrete, which is if [00:31:00] you access private data, the chat support, that is fine. And then if you call the internet, that is not good because you have an xFi point.

Ashish Rajan: Yeah.

Elie Bursztein: Right?

Ashish Rajan: Yeah.

Elie Bursztein: Now, if you do the reverse, which is like you ask for, I don't know, a iPhone model.

Ashish Rajan: Yeah.

Elie Bursztein: Right? So you, you say, Hey, I want to buy this iPhone. Totally fine. You get the webpage from Apple, you're good. And then you fax the bank information to see if I can pay for it.

Ashish Rajan: Oh,

Elie Bursztein: perfectly reasonable.

One is SAIF.

Ashish Rajan: Yeah.

Elie Bursztein: The other one where I faced first my bank account and then I go to the internet, not so much. Right? Yeah. Yeah. And so we're talking about two type of tools.

Ashish Rajan: Yeah.

Elie Bursztein: And so I think that's very powerful. Yeah. It is. In MCPI see the T-shirt.

Ashish Rajan: Yeah, yeah, yeah.

Elie Bursztein: So it's in the MCP 3.0 specification.

I think now they landed, which is annotations. And we have an notation like is it open world? Meaning it interact with, you don't know if it's recording or not. Does it have PII. And did it have right capability? Yeah. And did it have [00:32:00] destructive capability?

And I think by those type of capabilities, we can reason about it is fine to have PII but can't be destructive.

Yeah. Or it's fine to have open world but can't be destructive. Yeah. And but the other matter, right? You can decide to mutate or record and then access the world.

Ashish Rajan: Mm.

Elie Bursztein: But you can't access the world. And then you see, and there's a reverse from. World to privacy.

Ashish Rajan: Yeah.

Elie Bursztein: So other matters a lot. I think that is still not the case.

I think a lot of, a lot of security system still do per tool call instead of reasoning of that. And then the next step that fill out, which is more like of the research on a deep mine and elsewhere and Google and everywhere else Yeah. Is uh, dynamic policy, which is you could imagine a world and that seems to be very promising up to a point.

Right. Nothing is perfect where you would. Look at the intent. Yeah. So back to semantic, and you say, well, here's my prompt. And then a reference model or security model will say, okay, here's a tool you can use.

Ashish Rajan: Yeah.

Elie Bursztein: And here's the order you can use them. [00:33:00] And if you don't do this, there is a question.

Why do you need that tool? Like back to the mandate question. See, everything is in interconnected. Really?

Ashish Rajan: Yeah.

Elie Bursztein: Is if you want to buy on Amazon, you should access the Amazon API.

Ashish Rajan: Yeah.

Elie Bursztein: You have no business accessing the Bank of America API.

Ashish Rajan: Yeah.

Elie Bursztein: Why would you do that?

Ashish Rajan: Yeah. Yeah.

Elie Bursztein: And so, you know, that's, that's kind of the, the granularity we don't have yet.

Ashish Rajan: Yeah.

Elie Bursztein: Agent is like you have an MCP. Yeah. And it's, or disabled.

Ashish Rajan: Yeah.

Elie Bursztein: Like, it probably has to be per session. Yeah. And they are control. And most frontier model have, you know, the ability to disable and enable different tool for each session.

Ashish Rajan: Yeah.

Elie Bursztein: But the user has to do it.

Ashish Rajan: Yeah. Okay.

Elie Bursztein: And you, you probably want something, whereas the, back to your idea of like.

So a model deciding what tool you should use for your session, right?

Ashish Rajan: Yeah,

Elie Bursztein: yeah, yeah. And so we have this more like of a, I won't use a bad word, but like a semantic firewall. I'm sure marketing will come up with a or semantic gateway. But yeah,

Ashish Rajan: we call M Firewall as well is a good one. People have been using it.

Yeah, yeah.

Elie Bursztein: Something like that. And I think it's really this question, so I think that's, that's really [00:34:00] what we try to do today.

Ashish Rajan: Yeah.

Elie Bursztein: Understand better the tool, better annotation, better traceability, and better reasoning on the execution of tool. So that doesn't solve the intent problem.

Ashish Rajan: Yeah.

Elie Bursztein: But that avoid if people are okay with the restriction and that we can talk about the restriction, then that limit to blast Radiuss.

Ashish Rajan: Yeah.

Elie Bursztein: And at the very least in a, uh, enterprise settings,

Ashish Rajan: yeah.

Elie Bursztein: You can probably have this conversation of like, well, those system are off limit for the agent. This, this system are fine. This system needs confirmation. So you could basically contain your blast radius.

And I think that is today mandate and 90 is big emergency.

And then when we have better graphs on both things, we'll move to privacy.

Ashish Rajan: Sounds good. Yeah.

Elie Bursztein: I think that's the another I have.

Ashish Rajan: Yeah, I agree. I, I'm glad we kind of spoke about the present, where it's going and potential future as well. The only thing I have left is people, obviously, CISOs and security leaders out there are trying to implement and get their head around this.

[00:35:00] Some people have just started their journey. They've just done. AI productivity. They're not even at that agent stage yet. Any thoughts on how people should start, and I, I think you kind of answered this with your SAIF framework that you give, that you already have a lot of things that you can use, reuse.

Is there a mental model that you feel like, Hey, if you look after all these top three things, you're kind of okay, rest, everything you've done in the past so far for security, that covers you for most base. But if you look after these three or two, or whatever the number is in your mind. That you think that, hey, these are usually the blind spots that people end up having when they are coming from zero to suddenly.

I have agents everywhere. Are there any thoughts or top two that, or top three that come to mind?

Elie Bursztein: I think I only have one.

Ashish Rajan: Oh, perfect. What's the top

Elie Bursztein: one? I have one. You know, I think, I think it's, I like thinking in framework. I think it's good to have a rule of thumb because every company is different, every user is different, every is different.

So it's really hard to have rules.

Ashish Rajan: Yeah.

Elie Bursztein: But a mental model [00:36:00] I think help. And I think for me. One of the most important thing we have in security for I long as security existed, is least privilege. Right? Yeah. We, we believe that you should not get a car before you get a driving license, and you should not have access to everything before you know how to use a

Ashish Rajan: security.

Yeah. Yeah. That's right.

Elie Bursztein: I think when we talk about agent,

Ashish Rajan: yeah.

Elie Bursztein: Remember what we talk about is you are getting the genie out of the bottle.

Ashish Rajan: Yeah.

Elie Bursztein: And you're giving it access to the world. One conversation you can have is. What is the first tool am I going to give?

What is the first thing that my agent needs?

Yeah. To perform. My first idea don't be, don't, don't give 20 tools to the agent and then figure out what you want to do. Have a conversation. And even security. It is very hard, right? Today.

Ashish Rajan: Yeah.

Elie Bursztein: Today was very hard for security. Security team. Everywhere in the world is everyone is moving to ai.

And to be frank, there is no reason to not move to ai. And yes, it has to be fast because people are very excited about it. So that [00:37:00] is a reality we live in. We can't change the reality. It's not like, Hey, give it a pause. We're going to fix it and then you can use it. That's never going to work. Right?

Yeah. So we have to, as security practitioner, be very cognizant of the need of speed.

Ashish Rajan: Yeah.

Elie Bursztein: And the need of, of accepting this big change and do what we can with what we have.

Ashish Rajan: Yeah.

Elie Bursztein: Knowing it's not enough, but that's what we have. And be very much like containing the blast Radiuss. Yeah. So I think the conversation needs to be about yes.

First, the most important is yes.

Ashish Rajan: Yeah.

Elie Bursztein: Not blocking you. Yes, you can do it. What do you need to try? Don't throw the whole key thing. Don't start to integrate all the a p at once and see, because I, the agent needs everything to be able to your task. Like that is rarely the case.

Ashish Rajan: Yeah,

Elie Bursztein: I mean. I can't think of a workflow where you need a bank information and internal system and ware this assembly capability.

That's a little bit too much. Right?

Ashish Rajan: Yeah.

Elie Bursztein: And in real life, when you go to a shop [00:38:00] or when you do an activity as a human, you, you don't have access to everything at once. That is not the thing, right?

Ashish Rajan: Yeah.

Elie Bursztein: Yeah. So the conversation is more than discussed about, okay, what is the thing you need?

Ashish Rajan: Yeah.

Elie Bursztein: And people were like, well, I need to, like I say, fine.

Put it into a vm. Here's your VM with your agent. So we contain the blast reduced to the vm and you know the VM die Well, the VM die.

Ashish Rajan: Yeah.

Elie Bursztein: They don't want vm. You're like, fine. Can I rema your, your, your laptop?

Ashish Rajan: Yeah.

Elie Bursztein: Fine. So let's make sure there is nothing on the laptop that is of value. Because there is a risk, right?

Yeah. So like, fine, you want to do it, here's a guideline. Backup up everything you have on your laptop. Yeah. Remove things. So we, we reduce the blast Radiuss. And then let's decide which of the system you want to use first.

Ashish Rajan: Yeah.

Elie Bursztein: It's not, you cannot use everyone's thing else after it's, let's start and talk carefully about what is the minimal thing you need to get started.

We will help you. And I think that's a more positive attitude. And I think that is a key linchpin. It's is Azure security team. We need to [00:39:00] learn to grow with the team.

Ashish Rajan: Yeah.

Elie Bursztein: Whereas for the last 10 years it has been, we've been parachuted in the existing system. Right. And it's a very, very different world.

Ashish Rajan: Yeah.

Elie Bursztein: You're not parted in existing. You are need to accomp, accompany the business need and be on the journey with people and be a partner.

Ashish Rajan: Yeah.

Elie Bursztein: But a positive one.

Ashish Rajan: Yeah.

Elie Bursztein: Right. You can only own business care.

Ashish Rajan: Yeah.

Elie Bursztein: You have to. And so I think that that's a very big mortality shift.

Ashish Rajan: Yeah.

Elie Bursztein: And it's very, very hard for physically, because we are very cautious people.

That's right. And so we see always what can go wrong. And we've been asked to, to stretch, right? Mm-hmm. And I think that's the mountain model. So I think that's the only thing I, I think is the most protest thing people can do is on Bray Yes. And model and try to figure out how to make it a happen while trying to reduce the blast reduce.

And I think if you do that, you'd be fine.

Ashish Rajan: Yeah. Awesome. No, that's a great advice as well. Those are obviously the technical questions I had. I've got three fun questions as well.

Elie Bursztein: Three fun questions.

Ashish Rajan: Okay. Yeah. First one being, what is it that you spend time on most when you're not trying to [00:40:00] solve the SAIF challenges or AI security problems of the world?

Elie Bursztein: Okay, so I do two things. One is I'm writing a book on innovation.

Ashish Rajan: Oh.

Elie Bursztein: I'm hopefully having a draft this year, so I'm excited to get our file better reader, if people are interested. That's that. It's really try to to condense. 15 years of learning at Google. We, we went too many project from ideas to, to launch.

Ashish Rajan: Yeah.

Elie Bursztein: I think as I mentioned, one of our biggest project is password checkup, who went two billions of users. Yeah. So, so the whole journey, I'm trying to say, I think there are some interesting principle. I don't believe there is rule, right? Yeah. But you can have, you can, it's

Ashish Rajan: the first principles are always there.

Elie Bursztein: It's useful, right? Yeah. I think it's useful for people and so I'm trying to codify them and I would love to get them into the hand of people. And I love sharing and, you know, getting interaction, I think. So that's, that's one. So the thing which is probably more fun is I have a art foundation.

Ashish Rajan: Oh,

Elie Bursztein: right.

See? Very well,

Ashish Rajan: I mean, I can you have to look for an art foundation as well.

Elie Bursztein: I have to look. Okay. Okay. Well it's, [00:41:00] um, it's about preserving and promoting the heritage of playing cards.

Ashish Rajan: Ah, interesting.

Elie Bursztein: So the idea is we try to get acquired playing card deck from ancient time tower deck.

Ashish Rajan: Interesting.

Normal deck. How old can we go in cards?

What's the oldest?

Elie Bursztein: The oldest, we don't know for sure, but probably like on Egypt. Minus 5,000. Minus 6,000.

Ashish Rajan: Minus 6,000?

Elie Bursztein: Yeah.

Ashish Rajan: People have paid cards back in the day with like solid wooden stuff, I imagine. Or stones.

Elie Bursztein: Yeah, they had a little bit of paper. In India we have plenty of game, like Gen gfa.

Yeah,

Ashish Rajan: so

Elie Bursztein: playing card is very, and we have the modern version. Pokemon car. Yeah,

Ashish Rajan: yeah,

yeah.

Elie Bursztein: Magic, you know, has been for a very long time, part of a humanist story.

Ashish Rajan: Yeah.

Elie Bursztein: And so we try to, to acquire the interesting deck because we can't have all of them.

Ashish Rajan: Yeah.

Elie Bursztein: Interesting. And then try to put 'em online. And then that's called the et a foundation.

And then that's what I try to do. I try to, to get that to the people

Ashish Rajan: that is very unique and, uh, card foundation, like a playing card foundation. I did not even know playing cards have been there for over 5,000 years,

Elie Bursztein: I think. [00:42:00] Something like that. We're not sure for sure. It's called the Manu look.

Ashish Rajan: Okay.

Right,

Elie Bursztein: right. The original one. Yeah. Allegedly from Egypt. You can find them in the Carroll Museum, I believe.

Ashish Rajan: Wow. Wow.

Elie Bursztein: But yeah, we don't have those.

Ashish Rajan: Of course. Yeah. Yeah. So that's, so that's your, so the second love of your life, I guess

Elie Bursztein: it's not the love of my life is I, I really wanted to, to do something to help the world.

Right. Something which is not technical. Yeah. Which should be like, art is kind of more stable than the crazy world we're living, so,

Ashish Rajan: yeah.

Elie Bursztein: And so a little bit something different.

Ashish Rajan: Yeah.

Elie Bursztein: My mom used to be someone who was passionate, but also I grew up with that. And so I wanted to do something and then I realized there was no such a foundation door.

Yeah. Which is kind of the only one maybe that I know of. Of course. There might be something else.

Ashish Rajan: Yeah, yeah. Yeah.

Elie Bursztein: So I thought it was providing value to a community. I like,

Ashish Rajan: yeah.

Elie Bursztein: I think it was an interesting topic. So that's,

Ashish Rajan: yeah. Fair. I mean, and I, I generally meant it because I was like, you do look like an artist the way you dress, in the way you talk as well.

So I was like, yeah, I'm not surprised. You have some kind of an artistic, uh, inkling, whether it's with cards, but at least you have some kind of an artistic inkling, uh, inkling there as well. [00:43:00] Second question that I have is. What is something that you're proud of that is not on your social media?

Elie Bursztein: I don't have much on the social media.

Only post about research. Right. I view myself as a scientist.

Ashish Rajan: Yeah.

Elie Bursztein: And one thing I value about scientist scientist ethos is we are a political and we don't sell products. Yeah. We're just here to try to understand the problem and help solve the problem. And it's about science. So. I'm kind of a reserve on social media, so I have quite a bit.

Ashish Rajan: Yeah.

Elie Bursztein: Something I'm proud of. I think the thing which warms the most, my heart is all the people I talk to through the year and that came back and say, Hey, that's what helpful.

Ashish Rajan: Oh, wow.

Elie Bursztein: And I think that is something I, I really enjoy. I do try to talk to two to three people every year on Friday on whatever they want.

Yeah. And try to help out, bring something ideas together to, to, to have a conversation. Right. It's always interesting to hear what people are looking [00:44:00] at and try to, to get ideas. It's how you, you get creative, express yourself sort of ideas, and you can make triangle. That's the basic of innovation.

Ashish Rajan: Yeah.

Elie Bursztein: And I think it's very, it's very cool to have people. Reaching out like, you know, two or three years after say, Hey, we had this conversation. It really helped me.

Ashish Rajan: Yeah.

Elie Bursztein: I'm like, wow. That's great. Right. I it felt that you investing in people matters.

Ashish Rajan: Yeah. Awesome.

Elie Bursztein: And I love that feeling.

Ashish Rajan: Glad you do that, man.

And uh, final question. What is your favorite food or cuisine you can share with us?

Elie Bursztein: Yeah. Can

Ashish Rajan: say restaurant or cuisine,

Elie Bursztein: Yako.

Ashish Rajan: Ya. Noco?

Elie Bursztein: Yes. Japanese, ya, NOCO.

Ashish Rajan: Is that a type of food or is that a

Elie Bursztein: restaurant? So it's a barbecue. It's like a coin barbecue, Japanese style. So you go and you grill your own meat and your thing and it's super Oh, super convivial.

Ashish Rajan: Yeah, yeah, yeah. Yeah.

Elie Bursztein: So I, I really like going with French because then, you know, you cook food as you cook and make Oh, you're

Ashish Rajan: fair. Fair.

Elie Bursztein: It's very, very nice. There is the, uh, European version, which is where you have your, the

Ashish Rajan: cheese.

Elie Bursztein: Yeah, the cheese,

Ashish Rajan: yeah, yeah, yeah.

Elie Bursztein: And I think this is the type of food I like.

It's like, like [00:45:00] more of this, like everyone do their own thing a little bit. It, make it for a very, uh, warm and happy meal.

Ashish Rajan: Yeah. Yeah. Yeah. Fair. Very

Elie Bursztein: social

Ashish Rajan: fair. I mean, it's, it's funny, I think, uh, so, I used to go to Korean ba quite a bit. One thing I did not get, it's funny 'cause it's funny, but I was like, wait, why am I paying money for me to cook food?

Is that by, that gives you my psychology. I realize that over time to you what you said, it's about the company, it's about your, 'cause you obviously sometimes it's hard to do all that cooking at home and all that. So over time, but Korean barbecue, I did not even know that Japanese barbecue had name. So now I'm gonna check that out, man.

But, uh, okay. Okay. Go. So that's most of the questions I had. Where can people get in touch with you, connect with you and find out more about the work you're doing, link LinkedIn, whatever.

Elie Bursztein: So I have, I have LinkedIn I have

Ashish Rajan: a blog

Elie Bursztein: Twitter.

Ashish Rajan: Yeah.

Elie Bursztein: And I have my personal website called ad.net.

Ashish Rajan: Yep.

Elie Bursztein: I'm sure you can put that on the video.

Ashish Rajan: Yeah, I would put that as well. Yeah. Yeah.

Elie Bursztein: Fact that I, yeah, people should get in touch. I do my best answer to people if I don't, I apologize. Got a lot message every day. I, I really try.

Ashish Rajan: Yeah.

Elie Bursztein: Sometimes they say on, but yes.

Ashish Rajan: Yeah. And I, I think we, [00:46:00] they should check out your blog as well. We have got a research.

A lot of the work you do is also there as well, right?

Elie Bursztein: I try we post, I try to post every talk we make and every research paper we make, I'm a little bit behind. And I've not been as active as I wanted on the blog because I'm writing the book.

Ashish Rajan: Yeah.

Elie Bursztein: And as you know, writing the book is a lot of time.

Ashish Rajan: Yeah, yeah, yeah. And

Elie Bursztein: so I only have so many hours during the week to write outside of work. Right. So that's why I've been a little bit silent.

Ashish Rajan: Yeah.

Elie Bursztein: But that's it. And on LinkedIn, I post every and Twitter.

Ashish Rajan: Oh yeah.

Elie Bursztein: I post every week. Uh, one paper people should read.

Ashish Rajan: Yeah.

Elie Bursztein: The idea here is. I read a ton of paper every week and I try to curate one.

Right. And the deal is one.

Ashish Rajan: Yeah. Okay. Yeah.

Elie Bursztein: One thing is like, this is the one thing you should read during the weekend and here is why.

Ashish Rajan: Yeah.

Elie Bursztein: And you will find hopefully interesting, and I think that's what people want the most for me.

Ashish Rajan: Yeah. Yeah.

Elie Bursztein: Which is a way to keep up with ai.

Ashish Rajan: Yeah.

Elie Bursztein: So that's what I do.

Ashish Rajan: Yeah. Awesome, dude. I'll put those things in there, but thank you so

Elie Bursztein: much for coming. Thank you so much for

Ashish Rajan: Oh, dude. Like, I mean, I, I, I enjoy this conversation, but [00:47:00] whatever else. I'll see you soon. Thank you for watching or listening to that episode of AI Security Podcast. This is brought to you by Tech riot.io.

If you want to hear or watch more episodes of AI security, check that out on ai security podcast.com. And in case you're interested in learning more about cloud security, you should check out a sister podcast called Cloud Security Podcast, which is available on Cloud Security podcast.tv. Thank you for tuning in, and I'll see you in the next episode, episode.

Peace.

No items found.
More Videos