How Lovable Manages 100+ Daily Changes, Vibe Coding & Shadow AI


What does it actually look like to run security inside one of Europe's fastest-growing AI companies? In this episode, recorded live at the Munich Cybersecurity Conference (MCSC), Ashish Rajan sat down with Igor Andriushchenko, Head of Security at Lovable, the AI-native platform that lets anyone build and ship full applications without writing a line of code.

Igor joined Lovable as employee #40. Six months later, the team had grown to 150+. Developers were running multi-agent workflows overnight, PMs were pushing pull requests, and the volume of code changes was hitting numbers that challenged every traditional security process they had. This is the security story nobody talks about in AI-native scale-ups, and Igor lived it.

In this episode, they cover: why your CI/CD pipeline is being load-tested to destruction by AI-generated churn · how to use PAM (Privileged Access Management) as a practical guardrail for AI agents that can't escalate to production secrets · why the allow-list vs deny-list logic is reversed for AI agents compared to traditional security · the overlooked SCA supply chain risk when AI recommends unmaintained or hallucinated packages · why old SAST tools are failing and what the new generation of agentic code scanners does differently · how to identify and manage advanced, intermediate, and basic AI users in your org without killing their productivity · and the practical "crawl, walk, run" approach to building internal AI security tooling that actually sticks.

Igor also shares how Lovable's security team built an incident response AI skill, uses reachability analysis agents to triage SCA findings for enterprise customers, and why the real investment isn't in the AI model, it's in the skills ecosystem and data connections underneath.

Questions asked:
00:00 Introduction: Securing the AI Workforce
03:50 Who is Igor Andriushchenko? (Head of Security, Lovable)
06:10 The Churn of Change: Why AI Will Break Your CI/CD
10:40 The FOMO Problem: Don't Force AI Adoption
11:50 The "Air Pocket" Strategy for Safe AI Experimentation
14:00 The Context Paradox: More Access = Dumber AI
17:40 Managing Agent Sprawl and "Advanced" Users
19:40 Why You Must Treat AI Agents Like Human Developers (PAM Controls)
22:30 The Need for AI Telemetry & Visibility
27:50 Blurring Roles: When PMs Become Developers
31:30 Why You Must Use "Deny Lists" Instead of "Allow Lists" for AI
34:30 AI SAST vs. Traditional SAST: Finding Business Logic Flaws
39:40 Supply Chain Risks: When AI Recommends Dead Libraries
45:40 Building Custom AI Skills for Incident Response
52:50 Fun Questions: Battlefield, Team Culture, and Comfort Food
📱 AI Security Podcast Social Media 📱
🛜 Website - https://aisecuritypodcast.com/
✉️ AI CyberSecurity Newsletter - https://www.aisecuritynewsletter.ai/
LinkedIn: /ai-security-podcast

Igor Andriushchenko: [00:00:00] These old rails that CI/CDs run on, they're just getting load tested and reaching their limits in every organization, right?

Ashish Rajan: Yeah.

Igor Andriushchenko: Me asking AI, okay, what is a good library for PII sanitization in Go, and it gave me a library and it said it's actively maintained, and the last commit was eight years ago.

Ashish Rajan: The more context you give to an AI, the dumber it gets.

Igor Andriushchenko: Are you going all in on speed?

You cannot expect security and quality to stay at the same level. We literally, like, we step on the same problems we've been dealing with for decades. Yeah. But now we speed-run through it.

Ashish Rajan: You may have heard of a company called Lovable, where you can create websites in a matter of seconds. It's one of the most popular AI companies coming out of Europe, and it's being used globally, everywhere, to create websites in a matter of minutes.

Not just a regular website, but you can also see the code as well. Now I had the fortune of talking to the Head of Security at Lovable, Igor, at the Munich Cybersecurity Conference, MCSC, and we spoke about everything that he is dealing with at a scale-up. He joined when he was [00:01:00] employee number 40; now there are 150-plus people, largely developers, AI native.

How does he approach security in an organization which is AI first, and what are some of the things they're seeing, from the security that's required for developer adoption of AI, to project manager adoption of AI, to features coming out in huge volume? What does that look like? I have to say, this is like part one of this conversation because there was so much to talk about, so I'll definitely be bringing back Igor once again.

But this specific conversation was primarily focused on security for AI, especially when used by the workforce, and even how security teams should use this themselves. I've been talking about this on Tech riot.io for some time, on how we as security people need to be more AI fluent, as the word goes, in terms of how we can use it ourselves in our teams, but also how we can create tooling and things that enable developers.

And I mean, overall it was a great conversation. I think if you are someone who's trying to figure out, hey, what is my role as a security person in this world of AI? What is my [00:02:00] ROI for the use of AI? How, as a builder or a leader, can you increase the adoption of AI in your teams, and maybe in your organization as well, while being developer friendly?

This is definitely the conversation that you want to hear. Definitely share this episode with other people who are working on increasing the adoption of AI in their organization. Share it with the CTOs, and I hope you get as much value as I did from this particular episode. We're definitely bringing back Igor again for another episode after this.

But I just wanted to share this while it was fresh in mind, because this is a very hot topic for a lot of people who are being asked by everyone in their organization, hey, we want to increase the AI adoption, but what does that look like? How does security play a role in this without creating a lot of resistance?

This is gonna be epic. I think this will definitely be shared quite a bit. So I just wanted to give you a heads up. Uh, make sure you have your notepads out and definitely share this with other people who should be aware of this as well. And as always, if you have been finding episodes of AI Security Podcast valuable, and if you are here for the second or third time, maybe the fifth time or sixth [00:03:00] time.

I'd really appreciate it if you take a quick second to hit that subscribe or follow button, no matter which platform you're on: Spotify, Apple, LinkedIn, YouTube, we are on all the podcast platforms. It's free for you and it only takes a second, but it helps us reach more people.

Really appreciate the support that you guys have been showing so far, and continue to show at conferences like MCSC and otherwise as well. I hope you enjoy this episode with Igor and I'll talk to you soon. Peace. Hello, and welcome to another episode.

I've got Igor with me. Dude, thanks for coming in, and uh, maybe to start things off, if you can share a bit about yourself, your journey in cybersecurity, and where you are now.

Igor Andriushchenko: Yeah, well. Hi everyone. I'm Igor Andriushchenko. I'm, uh, Head of Security, the first engineer who worked on security at Lovable.

Ashish Rajan: Yeah.

Igor Andriushchenko: Um, you know, Lovable probably needs no introduction, but just in case you, you missed it somehow. Uh, it's, uh, this, uh, product that allows you to create anything with, with AI. Um, we started with websites. So you can create any website, you can host it. You don't have to code at all.

You don't even see code. You can, you can see code, but you don't have to see the code. And then now, you know, you can add videos, audio, you can make it multimodal. People build games with it. So literally, like, you know, your imagination is the ceiling here.

Ashish Rajan: Yeah.

Igor Andriushchenko: Uh, and I run security for it, and we're going through, like, hyper-growth periods.

So I joined, I was like number 40 or something six months ago, and now we're 150 plus.

Ashish Rajan: Yeah.

Igor Andriushchenko: And our engineers discovered AI, and that's been such a wild journey. So probably we'll talk about that. Yeah. And about me, I come from an engineering security background. So I started in DevOps and then moved to DevSecOps, and then, yeah, and then all the, all the security rollercoaster started from there.

Um, I worked in Sweden, Finland, Canada. Um, been at Shopify, Sana, which was recently acquired by Workday, and now, uh, Lovable. So I've seen things. Yeah. Uh, and of course I think now with AI, everything just, uh, exploded in complexity and it's such an interesting [00:05:00] field to be in. I look forward to this hour with you.

Ashish Rajan: Yeah, and I'm looking forward to this conversation. But I think, so maybe, uh, the way I was thinking about this, I can divide this into two conversations. One is obviously coming more from a security for AI angle, where, uh, most people like yourself, who are leaders as well, have had this explosion of developers using coding tools.

You and I are in Munich today. We heard a lot of AI security startups yesterday that we were judging, and there were a few products about code security. We obviously have the AI for security side as well, so there are so many AI things that are happening, for AI, for security. But if I were to just boil it down to the workforce and how the workforce has adopted AI, how do you look at security for that?

Uh, especially in like a hyper scale up kind of environment. 'Cause maybe baby steps. Where do you start? How do you look at it?

Igor Andriushchenko: Yeah, no, uh, I think it's changing. That's the key part about it. It's constantly changing. Since I started at Lovable, it changed a couple of times, uh, because the companies are [00:06:00] changing themselves, how they work, how they are, and AI is changing how people work as well.

Uh, so what I've seen is that, uh, when people adopt more AI agents,

Ashish Rajan: yeah.

Igor Andriushchenko: Then they adopt running AI agents in parallel. Uh, and it's all done for the sake of speed, velocity, and kind of maximizing the performance, maximizing the output, right? Every developer wants to be productive.

Every developer wants to solve the hardest challenges. That of course. And then there is a tooling for it. Yeah. Right.

Ashish Rajan: Yeah.

Igor Andriushchenko: And I feel like we at Lovable, we are living on the edge of it. So we are like at the forefront of it. So we are seeing what others will be seeing in six months, one year from now.

Maybe. Maybe in one month. I dunno, depends on how fast you move. Uh, but of course the complexity of, uh, of usage of AI explodes. The rate of change, the churn of changes also explodes, because now every developer can create maybe five, a thousand times more changes than before.

Ashish Rajan: Yeah.

Igor Andriushchenko: And for security, [00:07:00] change is risk, right?

Because like you, you know, the state of the system, or like, okay, you can be like, okay, the system in this state is reasonably secure because we've done a lot of reviews. We've done, like, you know, there are automated things, there are things with AI you can do, but in general, like, you try to keep it in the secure state.

Yeah. But if the state of it is changing a hundred times per day

Ashish Rajan: Yeah.

Igor Andriushchenko: You need, you need something else. Right. There is not enough humans in the, there is never enough. There would be never enough humans. And like I see that old methodologists, they're just failing.

Ashish Rajan: Yeah.

Igor Andriushchenko: Because of the strain. It's like if you build a small app with, like, you know, Flask or, like, something un-scaled, you know, like some kind of development framework which is, like, for starting out.

Yeah. Then suddenly there's a million of users on it. It doesn't scale. Right. You have to like, okay, now I have to move to something else.

Ashish Rajan: Yeah.

Igor Andriushchenko: So similarly, I feel like these old rails that CI/CDs run on, that developer organizations run on, that companies run on, they're just getting [00:08:00] load tested.

Ashish Rajan: Yeah. Yeah.

Igor Andriushchenko: By the churn of change, and I think we need to adopt that first.

Ashish Rajan: Yeah.

Igor Andriushchenko: We need to understand, okay, how do we support developers enough and other employees? I can cover that later.

Ashish Rajan: Yeah.

Igor Andriushchenko: Yeah. Uh, because it just is being load tested. It is being, uh, weight tested, and you know, like, uh, yeah, it has its limits.

Ashish Rajan: Yeah.

Igor Andriushchenko: And probably it's very quickly reaching its limits in every organization.

Right.

Ashish Rajan: Yeah. And I guess to what you said as well, uh, obviously you're focusing on security for AI, for the workforce, and what does, what does that look like? There's the developers who are adopting, there's the PMs who are adopting, now the roles are blurring as well. The, the PM is also now producing the prototype.

They're also pushing PRs. The developer is now doing PM work, so the access is different as well. Access is evolving from being a developer who could only push to prod, to now, hey, I want to be able to access Figma or whatever else. Like, oh, okay. I feel like the access control conversation is quite fluid as well, but my, my curiosity is coming from

the people who [00:09:00] are looking at this. And to what you said, from the time when you first came in to now, obviously there's been quite a bit of change. A lot of people are seeing the beginnings of this in their organization today, where they have this immense investment from companies investing into their development team, engineering team: hey, we want you to use more AI, we want you to integrate AI into applications.

How do you think about this? And if I just go on that security for AI layer for now, we'll start with the workforce using AI. We'll talk about that first. For a workforce that has to use AI, whether it's the developers, and there's a lot of complexity there, or whether it's the non-developers, how do you approach that?

And in a way that, to what you said, because traditional ones are failing at this point in time, my whatever IPS, IDS has no context of semantics and all of that. So how do you look at that today?

Igor Andriushchenko: It's a very hard problem, and as you mentioned, it's even describing the problem with all the levels.

AI for security, security for [00:10:00] AI, um, there are a lot of embedded problems you cannot solve just at one layer. You have to solve through, through all the layers. And if we think about access, what it looks like today is that everyone needs access to everything.

Ashish Rajan: Yeah.

Igor Andriushchenko: So like, again, if you are a PM who codes, how do you, how do you.

Push code, right? Yeah. If you're a developer who PMs, yeah, how do you update the roadmap? Yeah. Like are you even allowed to do it? Should you be allowed to do it? That's a question, and I think we should go back. Like I, I really want to push back on the narrative of companies pushing, um, sort of, there's this.

I feel like a lot of companies drive their AI adoption because of fear of missing out.

Ashish Rajan: Yeah.

Igor Andriushchenko: They're like, our competitors will get ai, they will get super good. We gotta, we gotta force it down people's throats. We gotta force people to use it. And that's like, nothing good comes out of it. Right? Yeah, yeah.

Resentment, uh, bad mistakes, security breaches. [00:11:00] Absolutely. Uh, so like. I would focus on solving the problem here. Yeah. And if the problem is like, okay, we're optimizing for speed.

Ashish Rajan: Yeah.

Igor Andriushchenko: Or we are optimizing for this much speed and this much quality. Yeah. And security is often seen as part of the quality uh, domain.

Yeah. So you need to be mindful and intentional as an organization, as leadership, about the trade-off.

Ashish Rajan: Yeah.

Igor Andriushchenko: You're like, are you going all in on speed? Are you just like putting the gas pedal down to the maximum that is possible with ai? Absolutely.

Ashish Rajan: Yeah.

Igor Andriushchenko: But you cannot expect security and quality to stay at the same level Yeah.

As it was or as you want it to.

Ashish Rajan: Yeah, yeah, yeah. Ideal situation, that's not the case.

Igor Andriushchenko: Yeah, exactly. But also vice versa. Let's say, I get a lot of questions from banks, like from big, big banks, like thousands or hundreds of thousands of employees. How can we adopt AI? It's so scary. And what I find, well, what I'm advising is finding this air pocket.

So imagine like you are in a cave and the cave is, like, flooded. Yeah. But there are these air pockets where you can breathe and you can do [00:12:00] something there. So, finding those air pockets in your organization where it's safe to use AI to some extent. Yeah. And where it solves some problem.

Ashish Rajan: Mm-hmm.

Igor Andriushchenko: Right. So it could be, okay, let, let's do simple prototyping.

Yeah. No data involved, just prototype something. Do you want a new dashboard for login for, like, your bank app, whatever, or internal tool prototyping? You, you can do it, right. And then you don't connect it to anything to start with. Yeah. And then you get people to like it. And then a lot of people approach it and say yeah.

With the, with... after we started building it, we, we saw more value in it. Yeah. And by then you find more air pockets where to put it. And then suddenly you realize, okay, they're interconnected. Yeah. There is a lot of value to be uncovered. Yeah. Yeah. And one piece of my advice is like, don't try to use AI for the sake of AI.

Ashish Rajan: Yeah.

Igor Andriushchenko: Right. You just need if, if it solves your problem. Incredible. Then you don't have to sell it to your stakeholders in inside the org. Yeah. You can say, Hey, we can 10 x your development. And you don't say like, with ai or like, you can of course, yeah. But if you manage to 10 x your dev, their development [00:13:00] speed or a prototyping speed or something else.

Ashish Rajan: Yeah.

Igor Andriushchenko: Like everyone will just praise you. And you know, as a stakeholder you'll be highly regarded. You have a vision and everything. But if you start like, okay, I want AI and everything that's, that's, that will fail in my opinion.

Ashish Rajan: And I guess to your point, the idea is not to rush into it. Instead, take a moment to understand where and what use cases are ideal.

When to start. To your point about the air pockets, correct, there are always some air pockets that you can find. Like, I think we were talking about this earlier as well with a few people: the idea that I want all the access on day one probably falls apart. I'm sure there's some access required, which requires you to define a use case.

Now, once you define a use case, you know where, where your pockets are as well. I think the, the overwhelming sensation maybe people have is that, no, no, I have to look at everything, so I want to have access to everything for when I get to it. But I think, I dunno if people know this, but the more context you give to an AI, the dumber it gets.

Igor Andriushchenko: I was just about to make that point. Yeah. And same with [00:14:00] people, right? If you have too much, like if you go to the supermarket and there's like 50 kinds of brands

Ashish Rajan: Yeah.

Igor Andriushchenko: You will, you will be stuck. You will be like, oh, what do I choose?

Ashish Rajan: Yeah. And they're all the same as well.

Igor Andriushchenko: I, I, I think there is this behavioral research saying that if people are presented with too many choices, yeah,

they, after making the choice, they're less happy, yeah, than they would have been if there were fewer choices and they made, like, oh, I just made this, I chose one of two, one or another. Same with the access, right? If you, if you sort of, if you have access to everything, you know, your AI just goes astray, just comes up with some weird ways of solving the problem, and it drifts you away from

Actually the value that you're adding.

Ashish Rajan: Yeah.

Igor Andriushchenko: Because if you can do pretty much anything, a lot of people, like, I think a lot of organizations, they lose focus. Yeah. And they start, like, finding these local minima or maxima, depends on how you look at the chart, where it's like, yeah, I mean, that's great, but is that the right solution for the right problem?

Ashish Rajan: Yeah.

Igor Andriushchenko: Um, so yeah. Yeah. I think with ai, of course, that enables a lot of [00:15:00] experimentation. Yeah. And you need experimentation. But also like, I think. Like ideally, and we don't live in the ideal world, but like of course access permissions, like we started with access and workforce. Like you, you should be able to solve problems with the access you are given or request more access.

Yeah, right. But you should not be kind of basing everything on, like, kind of, yeah, I need, I need everything, and then I'll be able to solve everything. Then you solve nothing, because you'll lose focus.

Ashish Rajan: Yeah,

yeah,

Igor Andriushchenko: Yeah. So from my perspective, I think the organization should start by first identifying the problems they want to solve with AI and then understanding, okay, this will require this level of access, uh, for these groups of people.

Ashish Rajan: Yeah.

Igor Andriushchenko: And it might be quite broad. Yeah. Right. But then the organization needs to make that conscious choice. Okay. Speed. Versus quality, right? Yeah. Yeah. Where do we, where do we tune it? Do we put a gate on that? Access some? Does someone have to approve it first or is it auto granted or is just available by default?

Yeah, that's also possible. Yeah. Like if your organization of [00:16:00] has like very low threat profile and like no one will, or you think so, uh, will want to hack you, or like if a data breach happens, you know, there's like, ah, no big deal. We ready for that. I mean, there are organizations like that. I mean, they can move super fast.

Ashish Rajan: Yeah, yeah.

Igor Andriushchenko: Uh, but again, like. It's not about using AI for the sake of ai, it's not about giving access so that everyone can use it.

Ashish Rajan: Yeah.

Igor Andriushchenko: That's just, you know, that's like asking for trouble. Yeah. Where it's more like if you do it intentionally and you know why you give everyone admin access on everything, well maybe that's good for you, but at least, at least you think about it and you do it intentionally.

Ashish Rajan: Yeah. And I guess you've thought, thought through the case as well, but maybe to double-click on this: how does one approach it? So suppose, for example, you define a use case, right? Okay, I've got a use case. That's just one team. I've got 25 teams. Technically, I've got 25 use cases.

Some of them may be where a developer group is now trying to use AI. Some of them could be that I have a SaaS application that's AI and I need to have... so there are obviously both parts to it. As a [00:17:00] security person, if you just take the first use case of the developer using AI for coding, how do you approach that today in terms of the security of it?

While maintaining the speed and quality, a balance of that is important.

Igor Andriushchenko: Yeah, great question. I start with the problems I'm seeing, and more, like, for whoever listens to this, maybe it'll resonate, because there is such, such a big number, such a large number of different AI agents, yeah, one can use, starting from completely YOLO Clawdbot, Moltbot things to solve any problem, you know, to Claude Code, Cursor.

Uh, you can use Lovable, you can use, you can use so much ai. Yeah. Right. And then if you let your people experiment, you'll end up with like, everyone using everything.

Ashish Rajan: Yes.

Igor Andriushchenko: So I find like one of the bigger challenges is like convincing your developers. To narrow down their choices a little bit because like [00:18:00] AI enables experimentation and people get very used to it.

And then you say, okay guys, we can secure these two agents. But the third one is actually weird and it works in a different way and like. It's hard to enforce guardrails on literally every agent out there because they're all different and work differently, right? So then you start like, then people start feeling okay, it's slowing us down there.

There is some general friction unhappiness, but I think this is a conversation that needs to happen. It's like it. If you as an organization, you solve certain, uh, set of problems. It's like, this is what the tools that you solve it with. Yeah. You cannot have all the tools from the market.

Ashish Rajan: Yeah.

Igor Andriushchenko: Because like then people, you know, like, like, you know, you know this tool, you work well with it, but you don't know that one.

And like, it just, it just creates a lot of fragmentation.

Ashish Rajan: Yeah.

Igor Andriushchenko: So, so that's one thing, but more, more broadly on the topic of. AI or security for AI development. So as a developer group, you know, developers usually, in many companies, they have quite a lot of access

Ashish Rajan: right here. [00:19:00]

Igor Andriushchenko: Uh, maybe not directly, but they can request it.

Yeah. And I, I think a good metaphor here when, when trying to threat model or think, okay, what can go wrong in the AI development is thinking of AI agent as a, well, it is an agent. Yeah. A agent is there. So it has agency. Yeah. So it will think that will act on the behalf of developer.

Ashish Rajan: Yeah.

Igor Andriushchenko: So the developer federates

their access, their credentials, their knowledge to, to the AI. Yeah. So you should almost see it as, like, you know, another developer, essentially. Yeah. And if your developer can get access to something and you want to guard it from AI, you need to put something like, uh, kind of human controls in front.

So for instance, you need to go and request like a PAM permit somewhere else to, to access production secrets.

Ashish Rajan: Yeah.

Igor Andriushchenko: Because for instance, let's say you have developer secrets available for development environment for everyone. Like, you know, you can easily update, you can everything, but then production secrets, you need an escalation for that.

Ashish Rajan: Yeah.

Igor Andriushchenko: And then AI will [00:20:00] not be capable of doing that escalation. It'll be like, okay, now you develop or need to go and grab that credential for me.

Ashish Rajan: Yeah.

Igor Andriushchenko: But that will you, that will. Introduce healthy friction. Yeah. And it's all about, you know, we're, you know, I don't have to tell you the idea of having guardrails.

Yeah. The blessed path, the healthy friction, where it's like, the more you move off the blessed paths, the more friction is added. But this is exactly it.

Ashish Rajan: Yeah, yeah,

yeah.

Igor Andriushchenko: So we need the same concepts we, we liked implementing, or we tried implementing, as part of DevSecOps, whatever the name is.

Secure development, um, programs. We need to apply them to AI in our own way and find a good injection point for it. So like I, I personally found, uh, using PAM as a good way to segment it a little bit. That's right. Right. Yeah. So you have different permits for different actions. Yeah. And for all the tasks that people don't need to do

Every hour.

Ashish Rajan: Yeah,

Igor Andriushchenko: you can grab it once a day. You can grab it, like, based on the need, right? But then AI cannot do it for you. And that's [00:21:00] important. Like you make it so that AI cannot go there, it cannot authenticate, it cannot go there and do it. And then because it cannot, it will find, well, another way of solving the problem.

Uh, but also, it'll not make that mistake where, just because it can, yeah, it goes there and it does something completely outrageous.

Ashish Rajan: Yeah. Yeah.

Igor Andriushchenko: Even without being hijacked by malicious attackers or something. Like, you know, rm -rf in your home directory is one of the examples. Um, so probably some controls around it.
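A minimal sketch of the PAM-style gate described above; every name here (the permit service, the CLI command, the environment variables) is a hypothetical placeholder rather than Lovable's actual setup. Dev secrets stay self-serve, while production secrets require a permit only a human can obtain interactively, so a coding agent acting on the developer's behalf hits a wall.

```python
"""Sketch of a PAM-style gate: dev secrets are self-serve, production secrets
require an out-of-band, human-granted permit that an agent cannot mint itself.
All names (PAM endpoint, env vars, CLI command) are hypothetical."""
import json
import os
import urllib.request

PAM_URL = "https://pam.internal.example/api/permits"  # hypothetical PAM service

def get_secret(name: str, environment: str = "dev") -> str:
    if environment == "dev":
        # Development secrets are available to every developer (and their agents).
        return os.environ[f"DEV_{name}"]

    # Production secrets need an active PAM permit. The permit is granted through
    # an interactive flow (SSO plus approval), which a coding agent running on the
    # developer's behalf cannot complete on its own.
    permit = os.environ.get("PAM_PERMIT_TOKEN")
    if not permit:
        raise PermissionError(
            "No active PAM permit. A human must request one first "
            "(e.g. `pam request prod-secrets --reason ...`)."
        )

    req = urllib.request.Request(
        f"{PAM_URL}/secrets/{name}",
        headers={"Authorization": f"Bearer {permit}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["value"]
```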

And another thing is, of course, uh, the visibility of what happens. Imagine that sprawl of agents, sprawl of, uh, development methodologies, how people, how people create the code. All companies will need some sort of telemetry on that. Yeah. Understanding who is using which agents, where do they send data to, what are they creating, which MCP servers they're using, which tools, which plugins they're using, which skills they're using. Because there are now marketplaces of skills, there are, uh, supply chain attacks against skills.

Yeah. From those marketplaces. So [00:22:00] we literally, like, we step on the same problems we've been dealing with for decades.

Ashish Rajan: Yeah.

Igor Andriushchenko: But now we speed run through it. So there is a lot of things that can go wrong. And I think as always, like, you know, if you look at the NIST framework or something, inventory or visibility of things, it's where, where the cybersecurity starts,

Ashish Rajan: that's right

Igor Andriushchenko: you need to know what you're protecting.

Yeah.

Ashish Rajan: Yeah.

Igor Andriushchenko: So the same thing, my advice would be, just like, yeah, go get some solution, build it yourself if you are, like, you know, super powerful, uh, AI users, or, or buy it, so that you can capture how AI agents work on each computer. Yeah. And be ready for that to change constantly. Yeah. Because people will discover, you know, tomorrow a big, uh, provider will release this new coding model.

Yeah. And it outperforms or goes viral and, you know, everyone is on it.
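As a rough illustration of that inventory idea, a script along these lines could run on developer machines and report which agents and MCP servers are configured. The paths and config keys are assumptions based on common defaults today; they change as the tools evolve, so treat this as a sketch, not a finished collector.

```python
"""Sketch: inventory AI coding agents and MCP servers on a developer laptop.
Paths and the "mcpServers" key are illustrative defaults; extend per tool you allow."""
import json
from pathlib import Path

HOME = Path.home()

# Locations that commonly indicate an agent is installed or configured (assumed).
AGENT_MARKERS = {
    "claude-code": HOME / ".claude",
    "cursor": HOME / ".cursor",
    "generic-mcp-config": HOME / ".config" / "mcp" / "servers.json",
}

def collect_inventory() -> dict:
    found = {}
    for agent, path in AGENT_MARKERS.items():
        if path.exists():
            entry = {"path": str(path)}
            # If it's a JSON config, list declared MCP servers so security
            # can see where data might flow.
            if path.suffix == ".json":
                try:
                    cfg = json.loads(path.read_text())
                    entry["mcp_servers"] = sorted(cfg.get("mcpServers", {}).keys())
                except (json.JSONDecodeError, OSError):
                    entry["mcp_servers"] = "unreadable"
            found[agent] = entry
    return found

if __name__ == "__main__":
    # In practice, ship this to your telemetry pipeline instead of printing.
    print(json.dumps(collect_inventory(), indent=2))
```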

Ashish Rajan: Yeah. I, I think I, you said something interesting, right? Because the observability is an interesting one because a lot of times I think last year when I started experimenting with Amazon Bedrock and others were, were. Even Azure, Google Cloud as well.

What we found was sometimes there's not a lot of telemetry [00:23:00] available to apply control around. We just assume that, oh, it's an Amazon thing, or it's a Google thing, or whatever. And so I, I think I'm with you on that. The first step is inventory. Understand what you have, figure out some way to understand the telemetry, whether, even if it's bad AI, shadow AI, whatever the AI may be.

At least find some way, because to what you said as well, I may think that my developer is only using Cursor because that's what we mandated, but they could be going on using Claude Code skills or whatever, Claude Code or whatever, and you're like, oh wait, I did not realize. So having some kind of discovery ability is important.

And maybe if we were to take a step further, 'cause you mentioned advanced users of AI as well, and I realized in all the conversations that I've been having, in the advisory boards that we had, people have different definitions for advanced users. So a lot of people are like, oh, I use ChatGPT.

I mean, personally, I think that's, like, level zero, level one, I don't know, for me personally. But where do you sit on what, what you consider is, like, a moderate user versus an advanced [00:24:00] user? And do security teams need to be at some level of usage for that as well? And what would that look like for security teams?

Igor Andriushchenko: Right. Well, I, I think. If we were to talk about identifying the user group, let's say, in our organization.

Ashish Rajan: Yeah.

Igor Andriushchenko: There is definitely key, key groups there.

Ashish Rajan: Yeah.

Igor Andriushchenko: And they differ by how, how much they do with AI. Yeah. Right. Like what do they connect it to? Because AI on its own just, you know, gives you answers to some questions, or, like, it asks questions itself and then acts on the result.

Right, it does the agent loop, essentially. But, but like, it's more about, like, what, what else comes into the picture? What else do you bring?

Ashish Rajan: Yeah.

Igor Andriushchenko: And in my experience, the more advanced users, they just bring more, they bring more context. They may bring more integrations. They, they have bigger impact with it. And by bigger impact, they mean, okay, let's, let's go from the top, right?

Yeah. Let's say the most advanced are some developers that run ten agents in parallel using some kind [00:25:00] of worktree structure, using super advanced skills. Yeah, like hundreds of skills, hundreds of subagents.

Ashish Rajan: Yeah,

Igor Andriushchenko: like the surface of. Impact for those, for that use is very, very high.

Ashish Rajan: Yeah.

Igor Andriushchenko: The risks are also because the surface is very, very large.

Sorry. Uh, it's, uh, it's multiplied by the surface, right? Yeah. So these are more risky users.

Ashish Rajan: Yeah.

Igor Andriushchenko: But also they're advanced and they probably have bigger impact.

Ashish Rajan: Yeah, yeah.

Igor Andriushchenko: On, on like a product for instance, because usually this, this is some sort of developer there. And then that's going back to the, where we started from.

Where, where as the organization, where do you balance speed and quality and security? Yeah. So if you want them to be able to do it, I mean, fine, because probably each of them will be worth, like, a small team of engineers.

Ashish Rajan: Yeah.

Igor Andriushchenko: Yeah. In the previous, in the past. But of course there are some risks associated.

Yeah. So you should probably have some controls around this group. Yeah. But controls that do not hamper their productivity, if you optimize for speed, right? Yeah. So you can just, like, put some telemetry around, um, anything [00:26:00] external leaving the organization. Okay, let's try to protect against prompt injections.

You know, let's see, uh, let's see, you know, data exfiltration from prompt injection or something. Um, yeah, let's, uh, widely allowlist some, some domains that we know are good and then see how that impacts their performance. Are they complaining a lot? Maybe, yeah, maybe we, we need to scrap this because performance is important.

Yeah. Or if our organization's like, no guys, sorry, we gotta, we gotta have quality and security in place. So like, just let us know all the, all the domains that you allow to, we allow you to send the data out.

Ashish Rajan: Yeah.

Igor Andriushchenko: And that's, I mean, that's a pretty powerful control. Right. And then like, you, you. Build it, but like I think there is like the core here is like they, whatever they work on.

Yeah. Like if there is, like, this box in which they're super productive, yeah, you should be very careful of, like, going into that box or into that space, because this is what makes them the super advanced. So we need to be mindful that advanced users mean more impact.

Ashish Rajan: Yeah.

Igor Andriushchenko: Then of course there's just like other developers that, uh, well [00:27:00] use ai.

They like using it. They, they still review their code. Probably, by default, their code is higher quality and more secure.

Ashish Rajan: Yeah.

Igor Andriushchenko: Just because if, if they, if they don't rush, they don't do as much contact switching. If they, they don't run 10 agents in parallel they have more time to think about it. Yeah.

And that's important. And that's also it's a good, good group of users to have.

Ashish Rajan: Yeah.

Igor Andriushchenko: Yeah. And I would optimize. I think, you know, like I think at Lovable we also optimizing for having. For catering, for needs of both groups. So you have super advanced, super productive ones. And then there are ones that are also very productive, but they're a bit more like, you know, they move slowly on purpose.

Ashish Rajan: Yeah. Yeah.

Igor Andriushchenko: Slower on purpose. And it's still very fast compared to like, traditional development two years ago or three years ago. And we need both groups. Yeah. They just need to focus on different problems.

Ashish Rajan: Yeah. Yeah.

Igor Andriushchenko: And then there is a group of, uh, as you say, there are, uh, PMs using, uh, Cursor or Claude on the web.

Ashish Rajan: Yep.

Igor Andriushchenko: And running web agents, you can connect a GitHub repository. Um, you can let them do it. And how we approach it [00:28:00] is very simple. Uh, we allow people to create pull requests. But to merge them, you need someone else to look, uh,

Ashish Rajan: like Yeah,

Igor Andriushchenko: yeah. Someone, like someone from advanced or less advanced developer group to look at it, at it and then, and merge.

Ashish Rajan: Yeah.

Igor Andriushchenko: So this way, like everyone can create.

Ashish Rajan: Yeah.

Igor Andriushchenko: Uh, but then it's like it stays outside of production. Uh, and then you also, like, they don't, well we prefer, they don't use like CLI tools, right? They don't mess with something that they're less, uh, less, uh. Well, proficient quiz. Yeah. Um, and also something that is more powerful as a tool.

Yeah. But an online agent, you know, you go there, your repository is connected, you can ask for changes. You can see, you know, if you have a good deployment system, you can see the changes live, uh, in some beta deployment.

Ashish Rajan: Yeah.

Igor Andriushchenko: Great. Like that's, that's your surface. Like what's the worst thing that can happen there?

Well, there, of course there are, there are things that can go wrong, but I think this is a good way to enable them to be almost developers, right? [00:29:00] They will become, like, creators. They will bring value, uh, without, without you blocking them all the time.

Ashish Rajan: Yeah.

Igor Andriushchenko: So, yeah. And I mean, these are different security models in each of these groups.
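One hedged way to wire up the "anyone can open a PR, only a developer can merge" control Igor describes is GitHub branch protection that requires an approving review, with CODEOWNERS pointing at the developer group. The owner, repo, and status-check names below are placeholders, and the exact policy shape is an assumption, not a description of Lovable's setup.

```python
"""Sketch: require a developer review before anything merges to main, so PMs
(and their web agents) can open PRs but not land them alone.
Owner/repo/branch/check names are placeholders."""
import json
import os
import urllib.request

OWNER, REPO, BRANCH = "example-org", "example-app", "main"
TOKEN = os.environ["GITHUB_TOKEN"]  # needs admin access to the repo

protection = {
    "required_status_checks": {"strict": True, "contexts": ["ci/security-scan"]},
    "enforce_admins": True,
    "required_pull_request_reviews": {
        "required_approving_review_count": 1,
        # CODEOWNERS maps paths to the developer team that must approve.
        "require_code_owner_reviews": True,
    },
    "restrictions": None,
}

req = urllib.request.Request(
    f"https://api.github.com/repos/{OWNER}/{REPO}/branches/{BRANCH}/protection",
    data=json.dumps(protection).encode(),
    method="PUT",
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Accept": "application/vnd.github+json",
    },
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)  # 200 on success
```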

Ashish Rajan: Yeah. And I guess your point is, as a person who's looking at doing this in their organization, you should identify who those groups are and figure it out, because, to your point, I'm, I'm glad we started with the inventory visibility piece. Now, now that we've established it, the level below that is like, hey, identify all these different groups.

You have what the level of usage is, and based on that you can have some, like, hey, here speed is important, here quality is important, but having the, the informed decision of where a human approval would kind of come in and how that would all go through. Because I guess to, to even add another layer, the subagent piece that you were talking about for advanced users, a lot of them run them autonomously, they just run overnight.

You could, as a developer, go to sleep; next morning you wake up, the subagents have done their job and you're just reviewing the code. You can be really advanced in these kinds of [00:30:00] things as well today. So maybe, though, that's the workforce as your engineering team, that's the production engineering team.

What about security's role in this? 'Cause you started this conversation, and I'm kind of glad you called it out, but the traditional SDLC, which you may have had in the dev cycle for a long time, you would've had the SAST, the DAST, everything going for a long time, SCAs and all that. What is the role of security in this new world that you see?

Igor Andriushchenko: Crucial, absolutely crucial role. Because somebody runs 10 agents in parallel, overnight. That's a lot of changes. Yeah. The changes are being pushed to production. Who reviews them? Well, sometimes humans, sometimes agents. So all of that has to be somehow assessed, or, like, security has to be injected into it.

And there is a bunch of solutions on the market that guarantee or claim to guarantee kinda agent time security. So things that inject guardrails into the coding. So let's say the agents are coding, but then it [00:31:00] calls some tool, you can build a tool pretty easily yourselves, right? It could be an MCP server like some API, which says, okay, follow these guardrails.

Ashish Rajan: Yeah.

Igor Andriushchenko: These are the important things for this context. Uh, and then the agents are creating. And then once it stop, stop creating code. When it has a diff, the diff send back to the tool saying, okay, it doesn't make sense. Do the guarders still make sense?

Ashish Rajan: Yeah.

Igor Andriushchenko: The ones that were in place, right? So something like that is needed in my opinion because like you have to teach agent how to write secure code.

And another thing that we saw working pretty well is just explaining, giving some very general high-level security model context to, like, CLAUDE.md, or having a knowledge file. Like, you know, each agent works a bit differently, but, uh, they're all similar in terms of files, in terms of, like, you can create some rules for, for agents, how to behave.

Yeah. And then this would be like, uh, okay, here's your security context.

Ashish Rajan: Yeah.

Igor Andriushchenko: So just put, like, a high-level business threat model saying, okay, what are the key components of the product? What needs protecting, what needs attention? Where do you need to move slower? And with [00:32:00] that knowledge, the agent knows, well, quite, uh,

Quite a lot. And then, uh, I read a conversation online recently that you need to be actually restrictive in your prompting. You have to say, don't do this because if you say, do that, it optimizes

Ashish Rajan: Yeah, yeah, yeah. Almost like your, your white listing. What is good?

Igor Andriushchenko: This is such a fun thing, because in security, we always prompted, we, we, our internal prompt in our head is to prefer allow lists over deny lists.

Ashish Rajan: That's right. Yeah. But,

Igor Andriushchenko: but for the agents, it's reversed. Yeah. Because if you, if you do on allow list it'll always, always do the focus against that. Yeah. And like the, the, the real world is much, there is so many more passes, but it will always choose the passes that you say you are allowed to do this.

Ashish Rajan: Yeah. And I think, to kind of double down on what you were saying as well, that model used to work for us because we had solutions that were designed for it.

Igor Andriushchenko: Yes.

Ashish Rajan: But in today's world, I mean, 'cause we have, we knew what an XSS [00:33:00] is. We knew what a SQL injection is. Exactly. So there was, there was a pattern for us to go on. There is no pattern here.

Yeah.

Igor Andriushchenko: It was an infinite surface. Or, like, it's like, it feels like we are now dealing with a much larger attack surface and threat surface.

Ashish Rajan: Yeah.

Igor Andriushchenko: And

Ashish Rajan: previously you make one as well. Yeah. Changes evolve.

Igor Andriushchenko: Yeah. But I see, like, allow lists were good. Like if you have a static allow list, that's great.

Yeah. If you have instructions for agents, those, those things work differently. Yeah. Right. So then yeah. Telling them what not to do.

Ashish Rajan: Yeah.

Igor Andriushchenko: Is, uh, is a good practice.

Ashish Rajan: Yeah. A hundred percent. And I Do you find. I'm glad you called it out as well because I think def, I definitely encourage people to actually call it out in there.

This is where the system prompts come in, and we can go into all that as well. But how, so with the security teams that are now working with these developers who are in these different, I guess, usage levels, to what you said, and I think that's what you kind of meant as well, the traditional security is still required, you still need the SCA, the SAST, all that, but the, the volume has, yeah,

Changed quite a bit.

Igor Andriushchenko: [00:34:00] Right.

Ashish Rajan: How, I know you've been DevSecOps before as well. Why? Because a lot of people have DevSecOps teams. They go, and I know so many people who have gone down the path of saying, actually it's getting overwhelming. I do. I They're just raising their hands and going, I have no idea.

would be able to. So how are you guys approaching this volume of change coming through?

Igor Andriushchenko: Right? Mm-hmm. I think one of the key solutions, like kind of key value unlocks here, is getting the right tooling for, for this AI age. So, for instance, in the SAST world, there are two distinct tooling categories.

Ashish Rajan: Yeah.

Igor Andriushchenko: There are these old mastodons that has been there forever that are in every big company's, uh, CICD pipeline. Yeah. Then of course they are AI features. Of course. They say, yeah, we have this ai, Andrea Asia is amazing number here, but try running it and then compare it to the new, like you can find some new SaaS.

Yeah. They don't even say they're SAST because that's a curse word. Yeah. Uh, a new AI security-for-code solution that performs an agentic scan, right, where it tries to [00:35:00] build the application business model, where it gets the code, builds, like, you know, its internal representation. It understands how the data is flowing, what the user flows are.

Then try running it against your code and you'll see very different results. And the second tooling, the, the AI-based tooling, will be much better at finding business logic issues and things that actually are very high on the OWASP Top 10.

Ashish Rajan: Yeah,

Igor Andriushchenko: just normal, the classic Top 10 list.

Misconfigurations, like wrong access patterns, and those are the things that are cursed and that we want to protect against, right? Yeah. XSS, because there are so many good frameworks nowadays that are used, and AI is primed to use the good frameworks. You know, if you write in TypeScript, you know, AI is perfect in TypeScript. Of course it's still possible to get XSS, but it's much harder.

TypeScript. Of course it's still possible to get accessed, but it's much harder.

Ashish Rajan: Yeah.

Igor Andriushchenko: But what would the real problems come from? Business logic. From from access controls and so on. So AI is good at spotting it.

Ashish Rajan: Yeah.

Igor Andriushchenko: [00:36:00] But another thing is what's important, AI is good at. Taking feedback. Right. So you can always, like, if your tool offers you to send feedback back to the ai Yeah.

And not just, you know, the, the old tool will just update some Rex pattern.

Ashish Rajan: Yeah.

Igor Andriushchenko: But in this, in this case, imagine the tool telling you about a business logic issue and you say, no, no, no. It doesn't work like that. And you explain how it works actually in the real life and then it takes it and learns from it and it takes, it puts it into internal memory file and the next time it runs there is no such issue and it better understands the context.

Ashish Rajan: Yeah.

Igor Andriushchenko: And what happens when there is a critical mass of changes of comments like that from developers like that context sourced from this, from the source of truth.

Ashish Rajan: Yeah.

Igor Andriushchenko: Then it starts being very, very good at what it does.

Ashish Rajan: Interesting.

Igor Andriushchenko: Yeah. And we've seen it. We've seen it in action. That was actually working super well.

Ashish Rajan: Yeah.

Igor Andriushchenko: Uh, and again, another thing is because you mentioned that, um, the. The amount of changes is pretty high, is pretty, uh, pretty, pretty. Uh, the

Ashish Rajan: volume is quite [00:37:00] high.

Igor Andriushchenko: The volume is super high. Like the tooling needs to be performant, you know, it needs to complete the scans.

Ashish Rajan: Yeah.

Igor Andriushchenko: And like in very short time.

And then it has to be able to run like with thousands of scans per day.

Ashish Rajan: Yeah.

Igor Andriushchenko: So like if asking that from a young company, like, I think that should be one of the buying criteria, right? Yeah. Like can you reliably put up with our. Change pace that may actually 10 x from the moment we get you to the moment where, you know, it starts working in productions.

Yeah, yeah, yeah. So like, it, it puts high demands on the vendors as well.

Ashish Rajan: Yeah.

Igor Andriushchenko: Because all the change is happening, you know, the, the volume of changes are happening. And then another thing is, I think, you know, you mentioned dust.

Ashish Rajan: Yeah.

Igor Andriushchenko: There is a lot. I mean, who doesn't love DAST? Or, or maybe, who loves DAST?

Did it ever work? That's a good question. Uh, now we talk a lot about, like, automated attack surface management and discovery and so on. But SAST provides you theoretical feedback, right? In theory it should be like this.

Ashish Rajan: Yeah.

Igor Andriushchenko: But we do need some revolution. I feel like in the AI pen testing [00:38:00] or AI surface discovery, uh, space.

I think AI SAST is there.

Ashish Rajan: Yeah.

Igor Andriushchenko: There are solutions that work well.

Ashish Rajan: Yeah.

Igor Andriushchenko: If they have the features that I described, maybe some more features coming soon. You know, it's very quickly evolving, uh, evolving, uh, but AI. Pen testing or AI does, I feel like it's still very generic, but I feel like AI is, could be a right solution for this problem because you need to be able to say that theory or you need to be able to meet theory and practice.

Ashish Rajan: Yeah. Yeah.

Igor Andriushchenko: And then in theory, it's good in practice, oh, it's not so good because the agent Asato solution that deploys 200 agents tries to break your app. Broke it.

Ashish Rajan: Yeah.

Igor Andriushchenko: And then it knows where exactly it broke it and gives you a screenshot and then you can quickly, like follow up on this. Right? So you need something, uh, not just giving you, Hey, there is a vulnerability, but you need a screenshot or you need something to run the code execution on your computer and be like, Hey, here's a screenshot I, I got, I, I I see your credentials, or I see [00:39:00] what you're doing in real time.

Or I got to your server and saw the keys dumped, or something like that, right? Then that's, that's real feedback. Yeah. And then you, you close the loop.

Ashish Rajan: Yeah. 'Cause, so SCA is still relevant, but the volume of SCA increases, I guess. But I guess maybe that may not be that dramatically different, or is that also a volume thing? To your point, the volume of SAST has increased, the volume of SCA is also increasing.

Igor Andriushchenko: I, I, I also think SCA is the most, maybe one of the most overlooked surfaces, especially in the age of AI. Because I just posted on my LinkedIn recently about me asking AI, okay, what is a good library for PII sanitization in Go, and it gave me a library and it said it's actively maintained. Oh.

And then I went, I went, I checked it on GitHub and the last commit was eight years ago, and the library had three commits.

Ashish Rajan: Oh really?

Igor Andriushchenko: Yeah. So it was literally somebody pushing, the first commit is create a repo. Right. So there was two more commits. That's it.

Ashish Rajan: Right. Okay. Wow.

Igor Andriushchenko: Yeah, so, so like, and that's a supply chain problem.

Yeah. Right? How do I [00:40:00] trust that my AI brings the right package into play? There is the whole thing with dependency confusion, right? Two similar package names. Yeah. You just, oh, uh, I know there were some researchers that proved, I think they were at Black Hat recently, uh, that they have a proven method of generating package names.

Ashish Rajan: Yeah.

Igor Andriushchenko: So if you know that your, uh, company, that your target company uses a package name X,

Ashish Rajan: yeah,

Igor Andriushchenko: you can generate a name that in 1% of users or in some low percent, but not 0% of uh, AI invocations AI installs of this library will be confused and they will. They, they exactly. That library that they created and prepared and put on NPM

Ashish Rajan: Yeah.

Igor Andriushchenko: Uh, registry or somewhere else

Ashish Rajan: Yeah.

Igor Andriushchenko: Will be downloaded.

Ashish Rajan: Yeah. Yeah.

Igor Andriushchenko: And that was, that's a scary thought, right? Yeah. So like the whole, what, what happens after like, that's, that's for security teams to lock down and manage, uh, and how to avoid that. So there is, uh, that's, that's a very large surface. And then now it's extended by skills.

Ashish Rajan: Yeah.

Igor Andriushchenko: And are there any other [00:41:00] marketplaces?
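A lightweight pre-install check in the spirit of what Igor ran into: ask the registry whether an AI-suggested package even exists and when it last saw activity before letting it in. The registry choice (npm) and the staleness threshold are assumptions for illustration, not a recommended policy.

```python
"""Sketch: sanity-check an AI-suggested npm package before installing it.
Rejects packages that don't exist (possible hallucination or squatting) or that
haven't seen activity in years. The threshold is an arbitrary example."""
from datetime import datetime, timezone
import json
import sys
import urllib.error
import urllib.request

MAX_AGE_DAYS = 2 * 365  # assumption: flag anything untouched for roughly two years

def check_npm_package(name: str) -> None:
    url = f"https://registry.npmjs.org/{name}"
    try:
        with urllib.request.urlopen(url) as resp:
            meta = json.load(resp)
    except urllib.error.HTTPError as err:
        if err.code == 404:
            sys.exit(f"REJECT: {name} does not exist on npm (hallucinated or squatted?)")
        raise

    latest = meta["dist-tags"]["latest"]
    modified = datetime.fromisoformat(meta["time"]["modified"].replace("Z", "+00:00"))
    age_days = (datetime.now(timezone.utc) - modified).days
    if age_days > MAX_AGE_DAYS:
        sys.exit(f"REJECT: {name}@{latest} last touched {age_days} days ago")
    print(f"OK: {name}@{latest}, last activity {age_days} days ago")

if __name__ == "__main__":
    check_npm_package(sys.argv[1])
```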

Ashish Rajan: Your point is third party as well. Now it's not just the code that you're... so we were talking about product managers, developers and engineers, all of that, using code generation, but it's also SCA from: I am using an AI tool that has a third party, like, to your point, the skills that are used, or I'm using a SaaS service, which is used for observability, that has an AI capability as well.

There's an extension of AI now, not just from an SCA, an open source library being used in my code, but an open source library being used in my third party, which is also AI-capable as well. So there's, like, complexity in that, in that context as well. I'm, I'm curious in terms of approaching, like, so we spoke about SCA, SAST and DAST and the increase in volume that has happened.

How should people who are watching or listening to this, how should they, should they make their DevSecOps team, AI team, start using Claude Code skillsets or that to start addressing the volume? Or do you find that because it's... [00:42:00] I'm trying to think of a way, because most people have the mandate of, hey, just, you know, show me increased AI usage in your team.

And we spoke about the use case part, whether it is the right use case or not. But in this particular scenario where the volume is quite high, is that a use case for people to start considering AI capability in their teams? Like, if I have an AppSec team today, is that maybe the direction, that I should encourage them to start using some of these AI capabilities to do testing?

Or is that the wrong approach?

Igor Andriushchenko: I mean, I think the answer will be the same. I think the approach is slightly wrong. It will be more like, I think you should ask the question. Okay. With with existing tools? Yeah. And humans in the loop or people we have with existing expertise, can we do something about it?

Ashish Rajan: Yeah.

Igor Andriushchenko: If the answer is no, absolutely not. We're getting overwhelmed. It's just like we're drowning in, in alerts and everything. Then you're like, okay, let's look for a solution.

Ashish Rajan: Yeah.

Igor Andriushchenko: And okay. Is the solution to bring in new tool, sometimes there are super good tools for that particular case. Yeah. Or you can, of course, start building your own [00:43:00] thing.

And I, I listened to some of your previous, uh, podcasts, where your guests were telling about the, the tools that they're building, or the agents. I think it was with Caleb, who was saying, okay, for the, um, uh, for the vulnerability management and so on, and then it works and it patches everything. Like, that's a great use case, right? Yeah.

Like that's a great use case, right? Yeah. I would encourage, you know, when you feel like, okay, you're dead end and your management is like, yeah, let's use ai, but you're drowning in alerts and you're like, I have no time for this. Yeah. But also like you put from both ends.

Ashish Rajan: Yeah.

Igor Andriushchenko: Like start experimenting.

Right? What is the most, what takes the most of your time?

Ashish Rajan: Yeah.

Igor Andriushchenko: Is there a way like. Could you describe that as a problem to an AI agent?

Ashish Rajan: Yeah.

Igor Andriushchenko: And ask for, okay, how, how, how would you solve it? Your AI agent? Or is there a way like, write me a, I dunno, Bash script. Let's start with a Bash script. Like go simple. And then you realize, okay, there are ways to solve it.

And then you run against product. Or again, some, maybe not production, but some subset of data and you [00:44:00] realize, okay, it helps me. Yeah. And then you can find your own way. It doesn't have to be the tools way, it doesn't have to be forced AI usage way. I, I really love how recently the conversation changed from, Hey, let's put CPS into everything.

And now it's like, oh, let's not use CPS for everything because there are some super good CLI tools that AI is already very capable of using and they're much more deterministic. Yeah, yeah. Than using CPS that sometimes run, sometimes don't. And you know, there are some troubles with that same thing here.

So like use CLI use Rex or use whatever, you know? You know, you are the skilled one.

Ashish Rajan: Yeah.

Igor Andriushchenko: And AI is just a tool to solve this problem.

Ashish Rajan: That's right. And,

Igor Andriushchenko: and likely at the end of the day it'll converge to you building some AI agent that uses a mix of these things, and probably buying some tool for part of the problem.

Yeah. The part that is too heavy for you to solve yourself.
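As a rough illustration of that "go simple" advice (not Lovable's actual tooling, just a minimal sketch with a made-up alert export), the first script often doesn't need an agent at all. Something like this is enough to see which rules are eating the team's time before deciding where AI is worth adding:

```python
# Hypothetical "start simple" sketch: before reaching for an agent framework,
# a short script over an exported subset of alerts can show where the time goes.
import json
from collections import Counter
from pathlib import Path

def top_noisy_rules(alerts_path: str, top_n: int = 10) -> list[tuple[str, int]]:
    """Count findings per rule in a JSON export (a list of {"rule": ...} objects)."""
    alerts = json.loads(Path(alerts_path).read_text())
    counts = Counter(a.get("rule", "unknown") for a in alerts)
    return counts.most_common(top_n)

if __name__ == "__main__":
    # "alerts_sample.json" is a placeholder for a small export, not production data.
    for rule, count in top_noisy_rules("alerts_sample.json"):
        print(f"{count:5d}  {rule}")
```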

Ashish Rajan: Yeah. Do you actually end up using AI quite a bit yourself as well? Obviously you do, but whether it's for productivity or whatever the case may be. Because there's a lot of thinking around the fact that in the future workforce,[00:45:00]

AI fluency should be high, is the phrase a lot of people use: the team's AI fluency should be high, the leaders' AI fluency should be high.

Igor Andriushchenko: Yeah, yeah. I mean, I think the way to drive AI adoption

Ashish Rajan: Yeah.

Igor Andriushchenko: is actually, of course, not through pushing it down people's throats,

Ashish Rajan: Yeah.

Igor Andriushchenko: but through the tooling, right? Through the ecosystem that you create internally in your organization. So what you need to do is have a lot of good skills, good connections, good primitives that people can use with AI, so when they tell their AI agent, hey, go grab me the logs from my development server, or go grab me logs from our observability solution, it just works.

But it doesn't have to stop there, right? You can create a skill that performs the analysis of the logs, and then you can just say something like, okay, I am investigating this incident, please follow up on this IP. Where do you see it? And tell me more.

Ashish Rajan: Yeah,

Igor Andriushchenko: Tell me more. I need to know more. I need to know the particular [00:46:00] cases of its usage: where and how and what. And then it goes and figures it out for you.

Ashish Rajan: Yeah.

Igor Andriushchenko: So for instance, we do have an incident responder skill, right? When we get some suspicious alert from our SOC or something, there is an easy way to trigger an agent to go and investigate it and bring back some results for you.

Yeah. But the skill needed to be created, the connections, the data connections, needed to be made. The MCP servers sometimes had to be built.
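To make that concrete without claiming anything about Lovable's internals: one primitive such an incident responder skill might wrap is "trace this IP across the logs we've already collected." A hedged sketch, with the log directory and log format as pure assumptions:

```python
# Hypothetical primitive an incident-responder skill could call: trace one IP
# across locally collected log files and summarise what it touched.
# The directory and the access-log-style format are assumptions, not a real setup.
from pathlib import Path
from collections import Counter

def trace_ip(ip: str, log_dir: str = "./collected_logs") -> dict:
    hits: list[str] = []
    paths: Counter = Counter()
    for log_file in Path(log_dir).rglob("*.log"):
        for line in log_file.read_text(errors="ignore").splitlines():
            if ip in line:
                hits.append(f"{log_file.name}: {line.strip()}")
                # Rough heuristic: pull the quoted request if the line looks like an access log.
                parts = line.split('"')
                if len(parts) > 1:
                    paths[parts[1]] += 1
    return {
        "occurrences": len(hits),
        "top_requests": paths.most_common(5),
        "sample_lines": hits[:3],
    }

if __name__ == "__main__":
    print(trace_ip("203.0.113.7"))  # RFC 5737 documentation IP, used as a placeholder
```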

Ashish Rajan: Yeah. And

Igor Andriushchenko: I think that's the overlooked part. That's the non-sexy work

Ashish Rajan: Yeah.

Igor Andriushchenko: that organizations usually skip; they're like, yeah, let's just put Copilot or something into everyone's environment.

Yeah. But then what? What is it connected to? What can it do? How do people use it? And another thing is educating people: having those knowledge sharing sessions where you say, okay, here are the skills, here's how you can use a skill, and here's what you can extract out of it, because it's much less deterministic.

Ashish Rajan: Yeah.

Igor Andriushchenko: Because AI uses the skill. Yeah. And then you realize, oh, the combination of these three skills allows you to actually build something almost like AI incident [00:47:00] response, right?

Ashish Rajan: Yeah.

Igor Andriushchenko: Similar with any other skill. So it's like ingredients that you mix, and then you get a nice new dish, and you're like, okay, I want the recipe for that dish.

And then you write it down: okay, this is one of the team's recipes. Right? So I think it's a nice way to think about it.

Ashish Rajan: And that's the new evolution of the whole multi-agent world as well. Because you start with the smallest problem: I just want the agent to be able to go talk to, say, AWS and get the latest audit trail information for this particular incident.

It could be a simple use case like that, and then you go, oh, okay, so it can do that. Let me add another layer: I want you to tell me about S3 buckets as well, or a Google Cloud bucket, or whatever. Yeah. And you start building this little ingredient list slowly. Yeah. And you start seeing a pattern: oh wait, this is technically a response. If there was an incident in AWS, I would want it to do this recon

and give me the information. I think, to your point, that's where the sub-agent people, the "I'm running multi-agent, blah, blah, blah" crowd, come in. Would that be fair?
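For the "smallest problem" in that example, pulling the latest audit trail for one principal, a minimal sketch might look like the following. It assumes boto3, AWS credentials with cloudtrail:LookupEvents, and a placeholder username; it illustrates the starting ingredient, not anyone's production agent.

```python
# Minimal sketch: fetch recent CloudTrail events for one principal as the
# first "ingredient" an incident-recon agent could use. Username is a placeholder.
from datetime import datetime, timedelta, timezone
import boto3

def recent_events_for_user(username: str, hours: int = 24) -> list[dict]:
    client = boto3.client("cloudtrail")
    end = datetime.now(timezone.utc)
    resp = client.lookup_events(
        LookupAttributes=[{"AttributeName": "Username", "AttributeValue": username}],
        StartTime=end - timedelta(hours=hours),
        EndTime=end,
        MaxResults=50,
    )
    return [
        {"time": str(e["EventTime"]), "name": e["EventName"], "source": e["EventSource"]}
        for e in resp.get("Events", [])
    ]

if __name__ == "__main__":
    for event in recent_events_for_user("suspicious-user"):
        print(event)
```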

Igor Andriushchenko: Yeah, absolutely. But there is a common pitfall here. [00:48:00] As we are all engineers, or many of us are, we sometimes tend to like the more complex solutions because they're more technically beautiful.

You're like, I'm solving the entire class of problems, I want my AI agents running super autonomously, solving all of it. And you try to do it, and then you spend a lot of time on something that you maybe don't have to spend time on. Yeah. And I hear about a lot of people jumping from using no AI to having something completely autonomous solving everything.

But in between there are so many niches, those air pockets. Yeah. For those air pockets within your organization, there are so many niche use cases for AI. Yeah. You can use it securely, solving the problem, bringing the value. Yeah. And I think we should, what is it,

crawl, walk, run. Yeah. And not just go straight to run and then fall, and then it's all a miserable story, and you're like, okay, let's get rid of the AI, let's go back to how we were before.

Ashish Rajan: I'm glad you said this, because maybe one final question on this. [00:49:00] If you were to start, and you gave some good examples of where people can think about using this,

there are obviously hundreds of AI tools out there. Have you found any favorites at the moment? Because I imagine, similar to the developers and product managers, there are similar skill-set tiers in security as well. There will be advanced AI users, medium AI users, and probably some people who are just at that beginning stage of, oh,

someone else is going to build the skill, I'm just going to use the skills, and that will make my life easier. They're still productive, and that's still a good thing. Do you find that there are AI tools, or AI skills, or use cases that people can start experimenting with? One or two use cases, like incident response as a use case. Because there is the governance aspect as well for the people who are there, there is AppSec as well, there is security, there's so much more. Are there any other pockets of security where you found that AI could be useful, and any skills [00:50:00] that you think were helpful to build for that?

Igor Andriushchenko: Yeah, I mean, I think in all the areas, because each team works differently. Yeah. Right. If your team has challenges with addressing SAST findings, let's say you have AI SAST. Yeah. But you still have to verify the findings.

Ashish Rajan: Yeah.

Igor Andriushchenko: So I would create a skill, or a set of skills, that pretty much helps you verify the findings.

Yeah. And what you can do is actually go to the GitHub commit, fetch that commit.

Ashish Rajan: Yeah.

Igor Andriushchenko: Then build the app and try to exploit it, or try to confirm that it's a real problem. Yeah. Right. Yeah. So the same thing a real security engineer would do. Yeah. They would say, okay, in my test environment,

yeah, okay, does that work? If the tool says, I need this permission, I need this user, I need to perform this request here, then AI can do it. Mm. If you equip it with skills to send API requests to your test environment, read the logs, yeah, and so on. You see, the complexity is not in using AI, not in adding AI.

The complexity is in creating that skills ecosystem and, yeah, all the [00:51:00] AI infrastructure that is needed to actually reliably confirm or perform that action. And we just looked into one pocket there; this is just confirming SAST findings. Investigating and responding to incidents is another.
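As a hedged sketch of one building block in that verification flow (repo, commit SHA, token, and the test URL are all placeholders, and a real verification loop needs far more guardrails), the agent's skills could be as small as "fetch the commit that triggered the finding" and "send a benign probe to the test environment":

```python
# Hypothetical building blocks for verifying a finding. Everything here points at
# placeholder names; only ever probe a disposable test environment.
import requests

def fetch_commit_diff(owner: str, repo: str, sha: str, token: str) -> str:
    """Fetch the unified diff of the commit that introduced the flagged code."""
    resp = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}/commits/{sha}",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github.diff",
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.text

def probe_test_env(url: str, payload: dict) -> int:
    """Send one request to the test environment and return the status code."""
    return requests.post(url, json=payload, timeout=10).status_code

if __name__ == "__main__":
    diff = fetch_commit_diff("example-org", "example-app", "abc1234", token="<test-token>")
    print(diff[:400])
    print(probe_test_env("https://test-env.example.internal/api/orders", {"id": "1 OR 1=1"}))
```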

Surely. We just recently had a case where we sent an SBOM. Yes, that's the thing: we sent an SBOM to a bank.

Ashish Rajan: Oh yeah.

Igor Andriushchenko: And then they came back: okay guys, this vulnerability, we want it fixed. And we were like, okay, yeah, we'll do it, no problem. I mean, there were some issues they wanted addressed, and we had a skill for AI to run the reachability analysis.

Yeah. And then we were like, okay, it's not critical severity or anything, but it's good to take care of it, right? It's just good maintenance effort. So that reachability analysis is a good example. And you can have something like that running autonomously eventually, right?

You can get all the reports from SCA and do reachability analysis in some cloud VM where you can even try to [00:52:00] exploit things. You can go as far as possible. Yeah. But think about AI as just the tip of the iceberg, and everything below it is the infrastructure. Yeah. And the ability to give it access, not all-around access but good access, to use tools, MCPs, you name it, right? Is it allowed to use Bash on your computer, or maybe not? That depends on the scale. You need all of that, and then it becomes, okay, this is our superpower.

Mm-hmm. But before that, it's just populist thinking: here is a simple solution to a complex problem, what could go wrong?
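To illustrate the flavor of that reachability first pass (again a sketch under assumptions, not how Lovable actually runs it), the cheapest question an SCA-triage skill can answer is whether the flagged package is even imported anywhere, before any deeper call-graph or exploit work:

```python
# Rough first pass of reachability triage for a Python codebase: find where a
# flagged package is imported. Package name and source root are placeholders;
# a fuller check would trace whether the vulnerable function itself is called.
import re
from pathlib import Path

def import_sites(package: str, src_root: str = "./src") -> list[str]:
    pattern = re.compile(rf"^\s*(?:import|from)\s+{re.escape(package)}\b")
    sites = []
    for py_file in Path(src_root).rglob("*.py"):
        for lineno, line in enumerate(py_file.read_text(errors="ignore").splitlines(), 1):
            if pattern.match(line):
                sites.append(f"{py_file}:{lineno}: {line.strip()}")
    return sites

if __name__ == "__main__":
    hits = import_sites("vulnerable_pkg")
    print("likely reachable" if hits else "no direct imports found")
    print("\n".join(hits))
```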

Ashish Rajan: Yeah. I've got so much more to ask you, but I think I might have to do a part two of this, because I've just about run out of time and I know you need to get back as well.

Maybe I'll ask the three fun questions now. But I definitely promise to do a part two, because I think it's definitely required. We've covered a whole set of conversation about security for AI, but we haven't done the AI-for-security side, so we need to come back to it.

But the three fun questions. First one: [00:53:00] what do you spend most time on when you are not trying to solve the AI security problems of the world?

Igor Andriushchenko: Like outside of work?

Ashish Rajan: Could be outside of work. Yeah.

Igor Andriushchenko: Well, most time on? Well, I spend most of my time at work, unfortunately, or fortunately. Fortunately, because I think a lot of people who work in these AI-first companies would agree that we work a lot not because we are forced to,

yeah, yeah, but because it's so much fun. It's like, oh my God, you see the world changing at your feet and you also have the ability to influence it. Yeah. And that's the most incredible thing. So I almost don't want to do much outside of it. But also, you know, we are all human beings.

Yeah. So when I'm tired, for instance, or when I'm like, okay, I cannot go anymore, I need something else, I really like doing something that has absolutely zero intellectual value. Turning my brain off. And for that I play some online shooters. I just pull up my PlayStation and play some Battlefield.

So, [00:54:00] yeah. And I'm not good, so if you see some bad Battlefield player, that could be me.

Ashish Rajan: Yeah. Fair enough. Well then, second question: what is something that you're proud of that is not on your social media?

Igor Andriushchenko: Wow, that's a hard question. I'm not super active on social media, and I would love to be more active there.

Yeah. But it's like, as you said...

Ashish Rajan: The focus is on something you're proud of; it doesn't have to be on social media either.

Igor Andriushchenko: Yeah. Proud, proud. I mean, I think I'm proud of my team. I know it sounds cheesy, but the team we've got, the people I get to work with every day in the security domain at Lovable, they're just the most inspiring and crazy

people ever, such powerhouses, all of them in their own unique way. So I think this field, and being in the molten core of this field, attracts really, really interesting individuals. So I'm just soaking up all [00:55:00] that experience and trying to learn as much as possible, both in technology and in just the human way, you know, and I'm getting inspired every day with every interaction.

I mean, that's just something crazy.

Ashish Rajan: Yeah. Awesome. And final question: what's your favorite restaurant or cuisine you can share with us?

Igor Andriushchenko: Wow. I actually really love that question. I love all the food, actually. When there are company events or something, I say I eat everything.

Ashish Rajan: Yeah.

Igor Andriushchenko: So yeah, I love it. It depends. Uh,

Ashish Rajan: Current favorite cuisine?

Igor Andriushchenko: Current favorite? I mean, my favorite cuisine is unhealthy food. Everything that's fried, you know, deep fried.

Ashish Rajan: You mean comfort food? Is that

Igor Andriushchenko: Yeah, the comfort food. I love burgers, a good burger, good shawarma, good Indian food.

I mean, there is more healthy and less healthy Indian food, but the good stuff is greasy and, you know, all about the taste. I always get butter chicken or something, not just chicken masala: butter chicken, give me the butter. [00:56:00] Yeah, just going all in. Of course, not doing it too often.

Ashish Rajan: Yeah. Yeah.

Igor Andriushchenko: And yeah. But yeah.

Ashish Rajan: Awesome. And where can people find and connect with you if they want to know more about the work you're doing and everything else? LinkedIn, whatever it may be. I can obviously put the links up, but where do you normally hang out and where can people connect with you?

Igor Andriushchenko: Right. I've promised myself to do a bit more LinkedIn posting, because I think there are some

things I'm seeing in the field that are worth mentioning to others, like, okay, are you guys also seeing this, or am I the crazy one here? So I will be more active there. And yeah, I think that's a good place to follow me.

Ashish Rajan: Awesome. I'll put that in the show notes. Thank you so much for coming on the show, man.

Igor Andriushchenko: Yeah, thanks for having me. Thank you. And thanks to everyone tuning in as well. See you next time.

Ashish Rajan: Thank you for watching or listening to that episode of AI Security Podcast. This was brought to you by Tech riot.io. If you want to hear or watch more episodes of AI Security Podcast, check them out on aisecuritypodcast.com.

And in case you're interested in learning more about cloud security, you should check out our sister podcast, Cloud Security Podcast, which is available on [00:57:00] cloudsecuritypodcast.tv. Thank you for tuning in and I'll see you in the next episode. Peace.
