AI in Cybersecurity: Phil Venables (Formerly Google Cloud CISO) on Agentic AI & CISO Strategy


Dive deep into the evolving landscape of AI in Cybersecurity with Phil Venables, former Chief Information Security Officer at Google Cloud and a cybersecurity veteran with over 30 years of experience. Recorded at RSA, this episode explores the critical shifts and future trends shaping our industry. Caleb, Ashish and Phil speak about:

  • The journey from predictive AI to the forefront of Agentic AI in enterprise environments.
  • How organizations are transitioning AI from experimental prototypes to impactful production applications.
  • The three essential pillars of AI control for CISOs: software lifecycle risk, data governance, and operational risk management.
  • Current adversarial uses of AI and the surprising realities versus the hype.
  • Leveraging AI to combat workforce skill shortages and boost productivity within security teams.
  • The rise of "Vibe Coding" and how AI is transforming software development and security.
  • The expanding role of the CISO towards becoming a Chief Digital Risk Officer.
  • Practical advice for security teams on adopting AI for security operations automation and beyond.

Questions asked:
00:00 - Intro: AI's Future in Cybersecurity with Phil Venables
00:55 - Meet Phil Venables: Ex-Google Cloud CISO & Cyber Veteran
02:59 - AI Security Now: Navigating Predictive, Generative & Agentic AI
04:44 - AI: Beyond the Hype? Real Enterprise Adoption & Value
05:49 - Top CISO Concerns: Securing AI in Production Environments
07:02 - AI Security for All: Advice for Smaller Organizations (Hint: Platforms!)
09:04 - CISOs' AI Worries: Data Leakage, Prompt Injection & Deepfakes?
12:53 - AI Maturity: Beyond Terminator Fears to Practical Guardrails
14:45 - Agentic AI in Action: Real-World Enterprise Deployments & Use Cases
15:56 - Securing Agentic AI: Building Guardrails & Control Planes (Early Days)
22:57 - Future-Proof Your Security Program for AI: Key Considerations
25:13 - LLM Strategy: Single vs. Multiple Models for AI Applications
28:26 - "Vibe Coding": How AI is Revolutionizing Software Development for Leaders
32:21 - Security Implications of AI-Generated Code & "Shift Downward"
37:22 - Frontier Models & Shared Responsibility: Who Secures What?
39:07 - AI Adoption Hotbeds: Which Security Teams Are Leading the Way? (SecOps First!)
40:20 - AI App Sprawl: Managing Risk in a World of Custom, AI-Generated Apps

Ashish Rajan: [00:00:00] What are some of the enterprise versions of agentic AI, if you have seen any? And is that hype, or are we

Phil Venables: No, no. You see a lot of it, though it's very much in the prototyping stage for many organizations. But it's across the full spectrum of activities. So there's the classic customer support example, where there's an agent that interfaces with a customer and directs activity on a backend.

Yeah. There are cases where you may have a personal agent acting on your behalf that does travel booking, makes appointments, schedules parts of your life. Yeah. There are agents that you can dispatch to go research information for you and then come back with recommendations for certain things. It could be, again, part of running aspects of your life.

Yeah. It could be aspects of running your business. There are agents that are providing more like professional services activity: analyzing contracts, analyzing information. So anything that is autonomous and goal-directed and tool-using is what you might think of as an agent.

Ashish Rajan: Welcome to AI Cybersecurity Podcast. We've got Phil Venables today at RSA. Phil, welcome [00:01:00] to the show. To start off with, could you just share a bit about yourself, where you are today? How did you get stuck in cybersecurity?

Phil Venables: Yeah like I've been in cybersecurity for the longest time, probably well over 30 years.

I've been at Google as Google Cloud's Chief Information Security Officer for about four and a half years, and just transitioned into more of a strategic advisor role for the company, as well as some other things outside of Google. Really great to be here this week.

Ashish Rajan: Awesome. You've had quite a bit of experience across different industries as well. Financial sector, tech now.

Caleb Sima: We're talking about your origin story.

Phil Venables: Origin story. Oh, no, look, if you're gonna go back to the origin story. So I started off as a software engineer. Oh, wow. Building in a variety of industries. So coal mining, oil, defense.

I was one of the original kind of people developing industrial control systems, oh wow, years and years ago. And I first got into security by building cryptographic software for banks, high-value payment systems. And then, showing my age a little bit, in 1995, I built one of the world's first [00:02:00] internet-facing banks.

Ashish Rajan: And internet banking as, yeah.

Caleb Sima: So what was it called?

Phil Venables: It was, so this was for a British bank called Standard Chartered. Oh yeah. It's still there. Working with, some of your audience may remember these things, Netscape? Yeah. Some Check Point firewalls, all that stuff from back in the middle of the nineties when we were first bringing banks online.

And then after that, I spent the time up until Google working in financial services, and spent the longest part of my career as the first Chief Information Security Officer for Goldman Sachs. Yep. And then came to Google.

Ashish Rajan: Yep. That's, I, that's when I first met you as well.

And now we are in this AI world. This is the AI Cybersecurity Podcast. Just to level the playing field first, how do you describe AI security to people today? A lot of conversations I've had on the floor, at least for the past couple of days, have been more around people still trying to figure out what agentic AI is, which I find funny because we've been talking about agentic AI for a long time on the podcast, but it seems like people are still confused about what that is.

So in the conversations you've been having about AI, I guess, what does the landscape look like for you, and how do you describe that to [00:03:00] other people today?

Phil Venables: Yeah. I think it's important, before talking about kind of new forms of AI, just to keep remembering that we're still getting a massive amount of value out of existing deep learning and machine learning technologies, what we call predictive AI.

So that's still an immense amount of what's going on in the environment. Moving on to generative AI, though, we're seeing a lot of applications clearly across all different types of business, but also inside security, whether it's automating activities, vulnerability discovery, automating the generation of secure configurations, a whole array of activities.

So there's been a lot of value seen in that. Yeah. And then also now with agentic AI, it's taken it to the next level of being able to automate tool use and have agents orchestrate security activities to deliver those outcomes. So it's across the full spectrum of activities, but also, we've obviously gotta remember that most security teams, as well as using AI for their own purposes are busily supporting their businesses and their organizations in the wider, [00:04:00] safe, secure, and compliant adoption of AI for their own business purposes as well.

Caleb Sima: Yeah, so when you look around at the enterprises, everyone obviously is talking about AI, but there's been this struggle that we've seen where every security company is touting all these security capabilities, this is how we're gonna do secure AI, but when you actually talk to the business, they're still, even today, trying to figure out how they even use AI in the business and get it into production.

I heard a stat quoted to me just recently that less than 5% are really using AI in production environments. What's your view on it? Do you feel that's happening? Do you think this is a hype train that will come crashing down, or, hey, there's a lot of legs and capability here and we're just not there yet?

Phil Venables: No, I think the exact opposite, but again, I've got like an observation bias in that we're a very large cloud provider and a very large AI company. And so with the use cases we're working on, there are literally thousands of companies running on our platform with AI tooling, [00:05:00] delivering a whole array of different business value, whether it's customer support automation, or even things like using AI for new pharmaceutical discovery, materials discovery.

It's all sorts of things to do with language translation, data analytics. It's a whole array of things. So we saw last year a lot of prototypes and pilots. And organizations really experimenting broadly, and that has narrowed in to what they're focusing on, their actual deployments, where they're getting significant value.

So I think a lot of that is because the experimentation phase is drawing to an end, and the actual production application is really hitting hard now. So last year there was a lot of prototyping. This year there's a lot of implementation and real value being achieved across all different types of businesses.

So I'm very much seeing huge amounts of value being unleashed by the use of AI.

Caleb Sima: As a CISO now, and at Google, when you're looking at these things shifting into production, what are, like, the top three things where you're like, okay, this [00:06:00] worries me. How are we gonna get a grip on this and how do we manage it?

Phil Venables: So, coincidentally, the three big things we focus on are really three pillars of controls. So we spend a lot of time working with organizations on how they shift from that prototype to production phase, and really helping them think through the controls. So there are really three pillars.

One is software lifecycle risk, and some of the work you've been doing has echoed that, around how do you maintain and manage the end-to-end software environment that delivers AI, whether it's the training, the serving, the inference, the fine-tuning. So that's one pillar. The second pillar is data governance, so you've gotta manage the lineage of the training data, the fine-tuning data, the test data; that whole data governance approach is critical.

And then the third and final piece is the operational risk management of deployment: deploying AI with appropriate guardrails, appropriate checks, input and output filters. Yeah. And so organizations that are doing well on getting their AI deployed safely and in control really focus on those three pillars: software, data governance, [00:07:00] and the operational risk management of the deployment.
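To make that third pillar a bit more concrete, here is a minimal sketch of the kind of input and output filtering Phil mentions. The deny patterns, the size limit, and the `guarded_call` wrapper are illustrative assumptions, not any particular platform's controls.

```python
import re

# Minimal sketch of input/output guardrails around a model call.
# The deny patterns and size limit are illustrative, not a complete control set.
DENY_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),               # SSN-like strings
    re.compile(r"(?i)aws_secret_access_key"),           # credential markers
    re.compile(r"-----BEGIN (RSA )?PRIVATE KEY-----"),   # private key blocks
]

def filter_model_output(response: str) -> str:
    """Return the response if it passes the output filter, else a safe refusal."""
    for pattern in DENY_PATTERNS:
        if pattern.search(response):
            return "[response withheld: output guardrail triggered]"
    return response

def guarded_call(prompt: str, model_call) -> str:
    """Wrap any model call (a function: prompt -> text) with input and output checks."""
    if len(prompt) > 8000:                               # crude input-size check
        raise ValueError("prompt exceeds allowed size")
    return filter_model_output(model_call(prompt))

# Usage with a stand-in model function:
print(guarded_call("summarise our refund policy", lambda p: "Refunds take 5 days."))
```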

Caleb Sima: I know that you operate at a large scale, but what advice would you give to people who have small teams and limited resources? If they can't do all three of those, where do you think the biggest challenges and biggest risks are in those three phases?

Phil Venables: So I think the big opportunity for all of them, and this is a slight sales pitch, I have to do it, is you've gotta buy a platform.

So one of the things, again, there are other great tech companies out there, but what we've done at Google in building our Vertex AI platform, oh yeah, is that it has all of those controls built into the platform itself. So if you are a small company, or even a large company, you just want to get value out of the technology.

You wanna bring your data to the controlled platform. You want to have the flexibility to use many different models, not just the models that we might provide, and have that as one integrated platform. You inherit those controls as part of the use of a platform, and I think that's one of the most important things.

Yeah, there are lots of small companies out there that give you a very particular bit of technology, right? But they may not give you the testing, the operational [00:08:00] controls, the data governance controls. That's why it's important to have an end-to-end platform for utilizing and managing your AI technology.

Caleb Sima: Okay, so we know: go to Google, use Vertex. But even then, as, say, a CISO, what areas of those should you focus on at the start of your program?

Phil Venables: So again there's two sides of it. One is making sure as the CISO team, essentially CISOs are becoming more like chief digital risk officers for their businesses.

So they're helping businesses think about not just the security of AI, but the controls, the data lifecycle, the compliance, the privacy, the safety of deployment, all of that kinda stuff. So the CISOs are very focused on that for their organizations. Then the second piece is CISOs, as a business unit in itself, adopting AI to transform the security program, whether it's vulnerability discovery or security operations automation. It's a whole array of things. So we see CISO teams doing those things: concentrating on supporting their organization's safe adoption of AI [00:09:00] and using it themselves to transform their own operations.

And I think that's a dual-track approach.

Ashish Rajan: I was gonna say, out of curiosity, with the Google Next event, you guys held an event especially for CISOs. I'm curious to hear what was top of mind, the things people were asking about. We've obviously been talking about the three pillars you mentioned, and people are adopting on both sides, whether it's security for AI or AI for security, whether at the current 2025 RSA or before that at Google Cloud Next. From the conversations you had with the CISOs in that room, I'm curious what was top of mind for them in terms of what they see as risk? You don't have to be specific in terms of company names, more the general theme you found that was top of mind: hey, this is what I see as a risk.

Caleb Sima: So I'm curious, I'll give examples. Like, I hear it all the time that people are like, oh, data leakage and my private data getting out are huge concerns. People saying, oh, prompt injection is my biggest concern. Or people saying, oh, agents, how are they gonna control agents? Yeah. Or deepfakes, the deepfakes. Yeah. Yeah.

Ashish Rajan: Email being a thing as well. Like phishing emails [00:10:00] being more specific. Curious.

Phil Venables: So there are a few things. One, though: people were very curious on our position around adversarial use of AI. For example, our Google Threat Intelligence team has published some great work with Mandiant.

Yeah. The Google TAG and Mandiant combination, we call Google Threat Intelligence. So they've done some great work analyzing what attackers are actually using AI for, and it's interesting because they're not using it for that many sophisticated use cases yet. We've seen and profiled Russia, China, North Korea, Iran, organized criminal groups.

They're using it to generate better phishing emails. Yeah. Which is not surprising, but the mitigation for that is phishing-resistant, strong cryptographic authentication. Yeah. You don't need AI to mitigate that. No. They're using it for fraudulent activity, constructing fake voice, video and images to dupe and defraud people.

Okay. So again, that's why it's important to have a broader set of controls against that. But we're not really seeing attackers using it yet, though they probably will, [00:11:00] for kind of advanced vulnerability discovery. Yeah. Now, I'm both happy and sad about that, because it means that attackers are not having to go to that advanced level yet.

Yeah. Which means they're still finding pay dirt with existing attacks. They'll figure it out, and that's where they'll go. But generally speaking, what adversaries are using AI for is what a lot of people are using AI for, which is to automate their activities, improve their workflow, enhance their productivity.

Yeah. Yeah. So there was a lot of curiosity about that. Yeah. Again, to your point, many organizations are figuring out how they improve their identity verification in the face of deepfakes. There are a number of companies out there, GetReal Labs and others, that can help with that. Yeah.

There's plenty of other threats to their identity and business processes that they're interested in mitigating. Yeah. So that was one set of topics. The second thing, a big focus of a lot of the security teams, is how do they deal with their workforce skills challenges? Oh yeah. So how do they use AI to amplify productivity?

Yeah. To democratize talent, to increase the expertise level of teams by [00:12:00] augmenting people with AI. So a lot of focus there. Yeah.

Caleb Sima: AI for security.

Phil Venables: Exactly. And then the final thing is just how do they organize to support their businesses in their safe adoption of AI, and being very focused on things like you mentioned, which is not using AI in a way that would leak their data to other customers of that same AI platform.

Yeah, and again, one of the reasons we spend a lot of time at Google putting in kind of isolation layers between our customer environments is to make sure that customers can use the models. Yeah. Fine tune it with their proprietary data in highly controlled ways. Yeah. Without there being any risk of that data leaking through the platform to other environments.

So we've invested a huge amount, again, in that platform to do that customer data isolation. And that was a big thing that a lot of the CISOs wanted to learn more about, and they expect more from the AI and cloud providers to do that. And so we spend a lot of time delivering on it.

Ashish Rajan: A couple of years ago, everyone started the AI security conversation with "the Terminator situation has arrived", threat to humanity. Yeah. [00:13:00] Yeah. Today, after almost two years of people using all kinds of LLMs, building their own, there seems to be a lot more maturity. And that's what I'm hearing from you as well: conversations about deepfakes, better phishing emails, identity, and how people can use it to enhance their productivity.

There seems to be a lot more maturity in how people understand this.

Phil Venables: I think people have become more familiar with the technology. Yeah. They're more familiar with building the guardrails in their business to regulate for themselves how AI is being used to do things. So for example, financial institutions building AI that can interface with customers and then make changes to backend systems.

They've built in guardrails and circuit breakers and checks to make sure that even if that particular use of AI goes off the beaten path, there's a surrounding check to make sure that's controlled. I think over the past 18 months, more organizations have learned how to build those guardrails in, and so they get more comfortable with the places they can deploy these [00:14:00] things.
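As an illustration of the guardrails and circuit breakers described here, the sketch below shows one way a surrounding check might cap what an AI-driven workflow can change on a backend. The limits and the `CircuitBreaker` class are hypothetical; real deployments would tie this to approval workflows and audit systems.

```python
from dataclasses import dataclass

@dataclass
class CircuitBreaker:
    """Surrounding check for an AI-driven workflow: caps how much change the
    agent can make before a human has to step in. Limits are illustrative."""
    max_actions_per_session: int = 20
    max_amount_per_action: float = 500.0
    actions_taken: int = 0

    def allow(self, action: str, amount: float = 0.0) -> bool:
        if self.actions_taken >= self.max_actions_per_session:
            return False                  # too many changes: trip the breaker
        if amount > self.max_amount_per_action:
            return False                  # high-value change: needs human approval
        self.actions_taken += 1
        return True

# Usage: the agent proposes an action, the breaker decides whether it may run.
breaker = CircuitBreaker()
proposed = {"action": "issue_refund", "amount": 120.0}
if breaker.allow(proposed["action"], proposed["amount"]):
    print("executing", proposed)
else:
    print("escalating to a human reviewer:", proposed)
```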

Now, of course, there's a lot of interest as we get into kind of agentic AI, where we're gonna have more agents acting autonomously to achieve goals. That's where we're spending a lot of time with organizations, figuring out how to put the checks and the controls into agents. Yeah. So the agents operate within certain parameters to achieve an outcome.

This is now threat to workforce. Threat to workforce.

Ashish Rajan: Yeah. I'm glad you mentioned agentic AI as the autonomous thing. I've had so many conversations over the past couple of days where every time I talk about agentic AI and ask, so what is it doing? It always boils down to a chatbot.

Caleb Sima: And I think I had a debate about this just last night. It's exactly that: no, agentic is defined like this. And I'm like, I don't know, agentic should be defined that way.

Ashish Rajan: Yeah, but the reason I bring that up is out of curiosity. There are only a handful of examples of true agentic AI that at least all of us seem to believe in.

Have you seen examples in organizations? You don't have to call out client names or whatever, but I'm [00:15:00] curious as to what are some of the enterprise versions of agentic AI, if you have seen any, and is that hype, or are we

Phil Venables: No, no. You see a lot of it, though it's very much in the prototyping stage for many organizations.

But it's across a full spectrum of activities. So there's the classic customer support example, where there's an agent that interfaces with a customer and directs activity on a backend. Yeah. There are cases where you may have a personal agent acting on your behalf that does travel booking, makes appointments, schedules parts of your life.

Yeah. There are agents that you can dispatch to go research information for you and then come back with recommendations for certain things. It could be, again, part of running aspects of your life. It could be aspects of running your business. There are agents that are providing more like professional services activity: analyzing contracts, analyzing information.

So anything that can be autonomous and goal-directed and tool-using, yeah, is what you might think of as an agent. And that can be quite broad.

Caleb Sima: So I'd love to get into the details. One of the [00:16:00] things you mentioned, which is the big thing, is, hey, these agents are doing things, right?

Yeah. On a consumer level, they're gonna go order your DoorDash for you, or look up and message a friend. But on the enterprise level, they're gonna be automatically coding, checking in code, reviewing what you're doing, managing your Jira tickets. And they're all gonna be acting on someone's behalf. So you also mentioned that at Google, you guys are really focused on what are the barriers, what are the judges you're gonna be implementing to ensure that these agents are acting on behalf of someone?

How do you even think about that? Can you wrap it up for the audience? What does that look like? How do you think security is gonna play out?

Phil Venables: So there's multiple levels to this, and again, we're at the early stage of a lot of this. There's a lot of work going on in the standards communities.

We're doing a lot of work in the Coalition for Secure AI, and with the Cloud Security Alliance, to think about what that control stack for agents looks like. And again, not just the base level of security, things like authentication, authorization, delegated permissioning, but what does a [00:17:00] control plane look like?

To control the behavior of an aggregation of agents that are orchestrated to deliver an outcome. We're at very early days of what that looks like. But when you start going up the stack, so for example, we have MCP, the Model Context Protocol, there's a lot of work going on to improve and add security capabilities for how AI will use tools and to make sure that can be done in a safe way.

And then, at Google Next a few weeks ago, we announced the agent-to-agent protocol in partnership with about 50 other companies. So this is all about how your agent communicates with other agents, and doing that with authentication, permissioning and authorization built in.

So we're building and with a lot of the other tech companies that stack of control to permit the safe, secure, and controlled use of agents. But we're very much at the early stage of all of this.

Caleb Sima: One of the things that has always been a situation I've been worried about, that I feel like I don't have or haven't seen answers to, is that it's not a permission problem on agents.

A great [00:18:00] example is I might have an EA agent, and my EA agent does have to have permission to my email and may have to have permission to my social media, but how do I prevent that agent from posting my email to my social media? And so there's almost like this intent problem.

Yep. Whereas I know if I hire a real person, there is accountability for that individual: they will get fired. Yeah. Their family and, likely, their salary will go away. Yeah. It could impact their references, and so they have a long-term problem, so you know that this kind of thing shouldn't be happening, right?

Yeah.

Phil Venables: And that's exactly the kind of thing that needs to be built into these agents as guardrails, to enforce that kind of control in the way it's used. Plus, how do you not just delegate permission to an agent, but delegate the permission in a way that the agent obeys your instructions about how it would further delegate those permissions to other agents it may use to achieve the outcome,

Caleb Sima: and somehow keep it from going off track in the odd ways where this would occur, in these scenarios, right?
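One way to picture the delegation constraint Phil and Caleb are circling is a capability object that records what an agent may do and how far it may pass those rights on. This is a toy sketch with made-up names; it addresses the permission-scoping piece, not the full intent problem.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Delegation:
    """A capability handed to an agent: what it may do, and whether/how far
    it may hand that capability on to other agents it enlists."""
    principal: str              # who granted it
    agent: str                  # who holds it
    allowed_actions: frozenset
    max_redelegation_depth: int

    def can(self, action: str) -> bool:
        return action in self.allowed_actions

    def delegate(self, sub_agent: str, actions: set) -> "Delegation":
        if self.max_redelegation_depth <= 0:
            raise PermissionError("further delegation not permitted")
        if not actions <= self.allowed_actions:
            raise PermissionError("cannot delegate actions the agent does not hold")
        return Delegation(self.agent, sub_agent, frozenset(actions),
                          self.max_redelegation_depth - 1)

# The EA agent may read email and draft posts, but can only pass the narrower
# "read_email" capability to a sub-agent, and only one hop deep.
ea = Delegation("caleb", "ea-agent",
                frozenset({"read_email", "draft_post"}), max_redelegation_depth=1)
research = ea.delegate("research-agent", {"read_email"})
assert not research.can("draft_post")   # the boundary survives the hand-off
```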

Phil Venables: But again, [00:19:00] on the organizations that build agents to do these things comes a lot of responsibility to architect those business control principles in as part of the platform.

Now, clearly over time we'll see more standards, more approaches. We'll probably see more products that help organizations control and manage that. But again, we're at the early stage of figuring out how to build these agents. I think the other thing we have to be careful of, across an entire industry, is to think about what's the behavior in the future of trillions of agents in the environment.

So you can imagine an agentic flash crash: some organization offers up a cheaper price and all of a sudden billions of agents are descending on it to grab that deal. Yeah. This is something that we've had to deal with as part of regular e-commerce over the years.

Caleb Sima: Yeah. When a Nike shoe or tickets come on sale, they do that, that's right. Yeah, scalpers are automatically,

Phil Venables: But the good news in all of this is, at some level, while it's all new technology and people are figuring it out, it's the same good old business [00:20:00] principles of how do we provide that control.

Exactly, to your point. And again, if you think about the future, everybody will be a manager. You just might be a manager of agents rather than people. But everybody's gonna have that responsibility to figure these things out. Yeah. Yeah.

Caleb Sima: So continuing on the future of agents and how we're thinking about that. One of the big areas is, I've heard some comments about MCP obviously being the new enablement of agents with tools, and now maybe products in the future are no longer gonna offer user experiences. It's just gonna be headless with their own MCPs that are gonna be integrated into a master chat or a dashboard, and you can generate your own

That's right. Controls and dashboards, without ever actually, really, the product itself just becomes the usefulness that provides the interface to that.

Phil Venables: That's a really good point because I think, again, in the future businesses and all organizations, you have to think about the user and customer experience.

Over the past few years we've thought about API experiences, right? The developers building on your [00:21:00] platform. And now we have to think about tool experiences, for AI agents to be using those tools. And I think successful businesses in the future are gonna think about all of that. It's programmatic interfaces, it's AI interfaces, and

Caleb Sima: it's another abstraction on top of that API that is the new abstraction that you started using.

Phil Venables: And then not just that, not just the agent-to-tool use through MCP, but the agent-to-agent work: having your agent be part of a bigger orchestrated framework of other agents, using them to deliver an outcome. So it's gonna be a tremendously interesting time over the next few years.

Ashish Rajan: Where do you stand on the whole open sourcing? 'Cause you used MCP as an example. I've been having a lot of conversations about the balance. I won't answer the question myself, but I'm curious to know from your perspective about the open source world that has opened up here. Yeah. Now, the technology that we have used has always been proprietary.

It's always had an open source root somewhere. Yep. But with the AI space, is there currently a set standard for how things should be?

Phil Venables: No, it's interesting. There's a lot of work going on. So for example, you know, the reason we co-created the Coalition for Secure [00:22:00] AI, yeah, with a number of organizations, which is an open foundation under OASIS, is it's all about developing standards, frameworks, tooling to help manage the security of AI. We also have the Frontier Model Forum, which is an organization of all of the big AI labs to provide for controls and safety around the development of foundation models. And then there's ML Commons, which has been providing a lot of open toolkits for how you manage machine learning and generative AI and agentic AI.

Yeah. And that is increasingly working on security standards. So the good news about all of this: things like MCP are published protocols, and A2A, the agent-to-agent protocol, is published and open. So the great thing about this space is there's a lot of stuff being done in open source that everybody can contribute to.

Yeah. And in fact, the reason we set up the Coalition for Secure AI is exactly to encourage that kind of customer participation, not just tech company participation.

Ashish Rajan: So then, for someone who's trying to build a security program, or [00:23:00] maybe, in most enterprise cases, uplift a security program today. We've spoken so far about, I guess, the theme of things people are worried about, right?

For people who are uplifting a program today, where they need to think about AI from both a security for AI and an AI for security lens. Especially when you don't have a clear picture of where this is going. It could be autonomous, it could not be autonomous. We could be seeing human approval.

What should they consider putting in their security program? And it could be going back to the basics as well. What do you recommend to the CISOs who are watching or listening to this: hey, it's 2025, you're walking away from RSA, what should you be thinking about in your security program for the next, let's just say, six months?

Because AGI may be right around the corner.

Phil Venables: So for security teams that want to improve their risk management of the adoption of AI in their organization, I think the CISOs and their teams are gonna have to keep taking a bigger leadership position. So again, as I mentioned before, I'm seeing a lot of CISOs becoming more like chief digital risk officers.

Yeah. [00:24:00] So rather than approaching AI just as security and not thinking about the broader safety, trust, reliability, quality, compliance, and privacy arrangements, the CISOs are having to take on those things for their organizations. There are a few exceptions, like large banks, which typically have risk and compliance teams, and they spread that responsibility.

But I think a lot of security teams are realizing they're gonna have to move beyond just security to think about the broader control requirements for AI. Then on the adoption of AI by the security teams, I think the main message we have for people is just get going on some prototypes.

Because, as with any other technology change, you learn what you really need by experimenting and doing. Yeah. I think a lot of organizations are finding the low-hanging fruit of security operations automation. Yeah. There's a lot of low-hanging fruit around using AI to generate secure configurations and generate the configuration-as-code environment.

To set up secure environments, there's a lot of good stuff happening there. Yeah. And we're [00:25:00] seeing the early stages of software vulnerability discovery as part of the development lifecycle. But I think the main thing is teams just need to get going and experiment, 'cause you're not really gonna know where the value is until you prototype and experiment.

Ashish Rajan: Something that has come up as I've been talking to a lot more people is, I don't know if a bias comes in when people who have a lot of experience in cybersecurity start learning AI. I was talking to someone about whether this is a smartphone moment. Like when I first gave my parents a smartphone. For me personally, it was very intuitive, like, oh yeah, this works. But for my parents, the first time they saw a smartphone, it was, why can't I press the button? And I always bring that lens to the AI space. When I try to learn as a security person who has years of experience in cybersecurity, I'm looking at AI and trying to understand prompting, whether I'm asking the right question.

When is it good enough to, like, hey, stick with one AI model, or keep experimenting with all the AI models? 'Cause in the smartphone case it was easy, I only had an Apple or a Google, but in the AI case, we're in the

Caleb Sima: tech AI confi, oh, use this model for that.

This [00:26:00] model, you have to make this prompt this way, that part of this way.

Phil Venables: There are a few ways of looking at this. So again, when you're building applications, we're finding, if you go back 12, 18 months, you would find companies would build an application and use one model. They're now building to use multiple models.

That's right. Yeah. And so it may be that they'll build an application and it will use an open model with a low per-token cost that is not a particularly sophisticated model, but it's good enough for most cases. And their application knows when to refer certain things to a much more high-end, sophisticated model that might have a,

Caleb Sima: The CEO model, is what I'm saying.

Phil Venables: Yeah. So typically a lot of applications now are using multiple models, and they determine which model to use for a particular task. So it's getting more sophisticated in terms of application construction. Now, in terms of your broader question, I think the other way of looking at this is it's not so much about the model you use, it's about the product and application that you use that surrounds that model.
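A rough sketch of the routing pattern Phil describes, where an application picks between a cheap model and a frontier model per request. The model names and the complexity heuristic are invented for illustration.

```python
# Hypothetical model names and a deliberately naive complexity heuristic.
CHEAP_MODEL = "open-model-small"        # low per-token cost, good enough for most requests
FRONTIER_MODEL = "frontier-model-xl"    # expensive, reserved for hard requests

def estimate_complexity(prompt: str) -> float:
    """Toy heuristic: longer prompts and analysis-style keywords look 'harder'."""
    hard_markers = ("analyze", "compare", "multi-step", "legal", "contract")
    score = min(len(prompt) / 2000, 1.0)
    score += 0.5 * sum(marker in prompt.lower() for marker in hard_markers)
    return score

def route(prompt: str) -> str:
    """Pick which model the application should call for this request."""
    return FRONTIER_MODEL if estimate_complexity(prompt) > 0.6 else CHEAP_MODEL

print(route("What is our refund policy?"))                        # -> open-model-small
print(route("Analyze this 40-page contract for indemnity risk"))  # -> frontier-model-xl
```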

So I'll give you an example. We've seen massive take-up of the Google [00:27:00] tool NotebookLM. Yeah. So notebooklm.google.com

Ashish Rajan: makes the podcast.

Phil Venables: Yeah. Is, you can,

Caleb Sima: it eliminates this. We wouldn't have a future.

Phil Venables: We should upload this podcast to see if the virtual podcast host,

Caleb Sima: it'll make it better. It'll make it better.

Phil Venables: That's an example where you see people that are not even really contemplating the fact that they're using a particular AI model; they're using NotebookLM as a tool to upload large amounts of information, to then ask it questions and get some analysis. And similarly, you just look across an array of products.

Yeah. And the model itself is blending into the background, underneath an actual product that makes it usable for anybody. Not just people that are the early adopters.

Caleb Sima: To your point, no one really cares about what language the code is written in. That's right. Yeah. Yeah.

It's about the usefulness. What do you, what's the value you produce? Yeah.

Ashish Rajan: What do you want to use it for? Yeah.

Caleb Sima: And with AI, since we're all geeky about the models behind the scenes, but really, at the end of the day, it doesn't matter. What can you produce?

Phil Venables: Yeah. And look, it's the same with software development.

You bring up coding. That's where you see developers [00:28:00] adopting AI aggressively, not because they're necessarily always using it to generate all their code, but they're able to use it to generate the scaffolding of an application, the data access libraries, the test coverage. That leaves them to focus on the high-end developer activity of building the business logic and the architecture.

And everybody finds a unique way of using these tools. But again, that's the exact example of the model itself blends into the background under a great tool.

Caleb Sima: So this is a great bridge into vibe coding. Alright, so Phil, one of the many things that I've talked to people about around vibe coding is this.

Obviously vibe coding in its current state is pretty interesting and is phenomenal at writing these little one-time applications. But when I actually talk to others who are, like you, technical at their core, and I'm also technical, they are CTOs, VPs of engineering, CISOs who are now vibe coding, and they're so ecstatic because it makes them feel like they're back, like they get this, I can code again.[00:29:00]

When before it was like, oh, that was way outta my league at the time, and now I can go reconnect and build cool things. What's your feeling?

Phil Venables: So I'm back doing some coding again. Not necessarily because it's helping with the generation of the software; it's the fact that many modern application environments require so much setup.

And the time to Hello World is quite long for many modern application frameworks. But the great thing, again, about what I use AI for is to just say, generate me an environment in which I can actually write the software. Yeah. And it's reducing the time to Hello World.

Again, what we're finding with developers in many organizations that we work with is they're using it for the same thing. They're not necessarily using it to generate entire software stacks, but they're using it to set up an environment. They're using it for configuration. They're using it for the basics, yeah.

You've got a spot of code, yeah, the building blocks, generating their IDL for the data access. And what we're seeing is a lot of organizations use it to develop [00:30:00] tests and improve their test coverage, which is a great thing for security teams. Yeah. 'Cause we always want developers to have more regression tests and test coverage.

AI helps a lot with that, and so we think it helps people that are less skilled at coding build more code. But I think one of the things that's probably going under-reported is just how much it's amplifying the productivity of professional software engineers, in removing a lot of the toil associated with software engineering so they can maximize their focus on building software.

Caleb Sima: So this is a great way of getting into the future of coding. Today, the way you think about it is these guys are at, like, very junior software engineering levels. When you look at vibe coding today, to your point, for a senior engineer it's great because I can, like,

okay, give you this file, give you this file. Okay, I have a very specific problem, help me figure out the scaffolding. And okay, it'll help me think. Boom, boom. For the others, they're, like, auto-generating this code and it's actually doing quite well, even at that. Where do you think that is going? Do you think that's gonna peter out, or is that just gonna continue, to the point where we're now starting to see junior software engineers having trouble getting jobs? Is this where it starts eating software engineering as a whole?

Phil Venables: I think things are just gonna change. I think this, like a lot of these uses of AI, ultimately democratizes people's ability to do things. Yes.

Which will still create more appetite for more of that software to be delivered. So again, when we've talked about this for years, with kind of no code, low code type environments, right? This is just an even further enhancement of that. So if you look across an organization, everybody can be a developer.

Yeah. Now security teams and IT teams need to provide the platforms and frameworks for that to happen safely with the right testing. But it's great ultimately that everybody can experiment at all levels of organization on building software for the organization.

Caleb Sima: So a product manager can now be an engineering manager and build the products.

Phil Venables: Well, exactly. It might be that, yeah, a product manager may be able [00:32:00] to generate the prototype. Yeah. And when it's been determined that's something they want to build, then all levels of developers, from junior to senior, can take that and then production-harden that initial set of code to be fully production ready. And again, this is more about how developers, both junior and senior, use the tools to deliver an outcome.

Caleb Sima: So let me paint a scenario for you. In a future world, we are now at a point where the actual coding itself has been abstracted to this AI machine.

They create these black boxes, and the way you now code software is through requirements: engineering requirements, product requirements. What sort of risks do you see creating more security issues? And where do you see it actually maybe solving or remediating older security issues?

Phil Venables: So I think it's gonna, it's gonna keep being very important for security teams to have even more of their traditional approach for software vulnerability identification for identifying structural security and design flaws in software.

So that's never gonna go away. [00:33:00] But on the flip side though, I think there's more opportunity where having the AI generated code be generated in a way that is pre hardened, that is using standard security libraries, that is automating a way, the toil of building some of these things in. Yeah. So I think there's more opportunity there, but again organizations have to do this with a control mindset. Yeah. And so for example, there was some discussions recently about AI generating hallucinated names of packages. There's that risk, so it splatting a slot. Slot squatting slot squatting. That's right. And so the answer to that is not necessarily that the AI needs to improve and not hallucinate those things, although that would be nice.

Yeah. But the real answer is you put a guardrail around that environment to say, I'm gonna cross-check my code to make sure it's only using registered packages. And that's a software quality check, just like you would do for anything else. Yeah. And so I think, at one level, we should have high expectations of this tooling to deliver transformational benefit, but at the same time, we should also not expect it to be totally magical.

And [00:34:00] we should keep the controls built around it.

Caleb Sima: There are still gonna be these engineering-focused harnesses that you will always have to pay attention to, that contain and manage the AI in whatever it's creating.

Phil Venables: That's right. That's right. Yeah. Look, if you think about an AI agent

As almost human-like.

Yeah.

You would put the same controls around that as you would put around humans. It's quality checks, it's perhaps even later training, fine-tuning what you want them to do in accordance with enterprise policies and standards. Yeah, it's a very similar analogy.

Caleb Sima: I was just gonna say that I'm generally the pessimistic kind, but in this scenario I am actually pretty optimistic, because I actually feel that if we get to that world, code itself can then become more standardized. It can be consistent. Yep. Yep. And that's pretty amazing, because then you can really define what consistency in standards means for me as an organization.

That's right. Yeah. And have it be boilerplated. Yeah. On any, so then it's not just up to an engineer to be a good engineer or a bad engineer. It actually really just becomes a boilerplate thing. That's a, that's amazing.

Ashish Rajan: I think it'll [00:35:00] be a transition period though. I think it's like what happened with most organizations, where

there were people who loved Java, people who loved Node.js. There were people who loved different kinds of offerings. I imagine there'll be people who love different kinds of coding assistant software, where people in security would have to take a step back and go, okay, we are defining a standard for what kind of coding assistant can be used, and how much of it can be used.

Where can it be used? And what environments could we deploy them in? But on the gist of what you were saying, what's going through my head was a PM being able to write code. Every technical leader watching this, normally they were like, oh, how long would this take? And I'm like, I don't know, man.

It'd be one month at least. And now the product manager goes to his AI agent, types it up: took me like five minutes. What are you doing?

Caleb Sima: So yeah, you can build any prototype in five minutes.

Phil Venables: Yeah. No, but the hard part, that's the key step, is to not immediately put the prototype in production.

Yeah. But that's the thing.

Ashish Rajan: That's another risk that I am seeing as more people who have traditionally not been coders try to become coders. There is, I guess, for lack of a better word, an unknown risk at [00:36:00] this point in time. Where do they fit? 'Cause they will probably get the same level of AI access as anyone else, right?

Because you don't want them to not be productive. If that means I can have a prototype ready, I can give it to a senior developer or senior tech leader, going, hey, this is what I'm trying to build. How much of it can be productionized? What is practical?

Phil Venables: So this is where it's also gonna be important for more and more things to be put into underlying technical platforms.

So again, if you think about the difference between somebody generating some code that's production ready or not production ready. Yeah. It's all of the qualities of reliability, load balancing, security, a whole array of things. Now you can train the AI to do more of that, but also I think this is what we're calling, we've talked about moving from shift left, so shifting security and reliability earlier in the design lifecycle.

We now have to think more about shift downward. Oh, so going from shift left to shift downward where more of the controls are baked into the underlying technical platforms. So no matter what code you are building for business logic, yeah, it's [00:37:00] inheriting security, reliability, resilience, a whole array of controls from the technical platforms like the cloud platform it's running on.

That's not necessarily a, an AI derived thing. That's just shifting downward into the platform.

Caleb Sima: It's like building an Android app: I don't need to worry about the security of the platform or anything else, I just build my functionality. Yep. The OS manages all the rest of that. So how does an enterprise do that?

Phil Venables: No, that's right. So I think this whole notion of moving to not just shift left, but shift downward, is gonna be important.
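A toy illustration of "shift downward": the application handler carries only business logic, while a hypothetical platform layer enforces authentication, input limits, and audit logging for anything deployed through it.

```python
# Business logic written by any developer (or generated by AI):
def handle_refund(request: dict) -> dict:
    return {"status": "refunded", "amount": request["amount"]}

def platform_deploy(handler):
    """Hypothetical platform layer: every workload deployed through it inherits
    authentication, input limits, and audit logging, however the code was written."""
    def wrapped(request: dict) -> dict:
        if not request.get("authenticated"):
            raise PermissionError("platform enforces authentication")
        if len(str(request)) > 10_000:
            raise ValueError("platform enforces input size limits")
        response = handler(request)
        print("audit:", handler.__name__, request.get("user"))  # platform-side logging
        return response
    return wrapped

app = platform_deploy(handle_refund)
print(app({"authenticated": True, "user": "pm-prototype", "amount": 42}))
```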

Ashish Rajan: But would you say then the frontier models would have to take more responsibility? Like, for example, what happened in the cloud world, the Google Cloud world, where a lot of the responsibility is shared responsibility, which is the same case with frontier models as well.

I may use a service, but at the end of the day, I'm still responsible for the data that I'm putting into it, whether it's sensitive or not sensitive, whichever direction it may be going. If we're shifting down, do you see a future where probably all of that security is baked into the frontier models or the large LLM providers?

Phil Venables: So the frontier models, in particular models that are fine-tuned on [00:38:00] security data, yeah, naturally exhibit better properties of generating more secure code and other types of usage. But again, building more controls into the underlying platforms, so that the software written on those platforms inherits that control, is not necessarily something that has to be done with AI.

That's just a part of shifting down into the platform itself.

Ashish Rajan: Or I guess where I was going with this was that if it becomes default, to your point, they would already come with default security, right? Because they're learning on that data. Oh, yeah.

Phil Venables: So they may need to fine tune on how to make the right calls into the platform to use the secure element.

Ashish Rajan: Yeah, which ones do I need to use for my software versus which ones I can skip. Like, I hate calling out compliance standards, but not everything listed in a compliance standard is applicable for every company; you pick the ones that are relevant for you.

Phil Venables: This is why fine tuning becomes important.

So a lot of organizations that are adopting AI to generate larger portions of their software are using the base capabilities of the foundation models, but they're also fine-tuning on their own code, yeah, [00:39:00] their own standards, their own libraries, so that those foundation models that are used for this have also got the capability of generating organization-specific, nuanced code.

Ashish Rajan: Yeah. I was gonna ask, in the security teams that you've been talking to at Google Next and otherwise, which vertical in security, whether it's AppSec or security operations or whatever, do you see adopting AI more? As we're talking about the future, which ones do you see being first cab off the rank for either

Caleb Sima: most impactful for using AI to improve security?

Ashish Rajan: That's right.

Phil Venables: Yeah. The big one at the moment is security operations, right? And I think that's largely because people can imagine how to do it. It's a lot of data as well. Historically, quite a toilsome activity that people can imagine and want to automate. And a lot of organizations in their security operations teams have moved from ops analyst roles to having more security engineers in the security detection and response team.

So they're more equipped and able to adopt automation technologies based on AI. So it's the security operations environment. Then [00:40:00] we actually see a lot of the risk and compliance teams automating a lot of the kind of compliance attestations, oh, and other types of analysis. And then we're also starting to see early adoption in some of the software penetration testing and vulnerability discovery teams.

Yep. Yep. But I think first off the rank was security operations. Definitely.

Caleb Sima: I was just gonna talk a little bit more on the app side. One of the key things, when we're talking about all these people generating applications, is I look back at what cloud did to servers in IT, where it allowed you to one-button-click and deploy an instance on a server, and it created this whole cloud sprawl problem.

Are we seeing that same evolution now from an applications perspective, coming in the future, where AI can almost one-click generate custom, unique applications? And we know this, even as engineers: they're gonna say they love building things. So do we see fewer commercial versions being bought, less buying of outside software to solve things, and enterprises using their engineers to build these one-off, unique applications? Yeah. Then it'll create a sprawl inside.

Phil Venables: No. So I think there is more of an ability now than there used to be for more organizations to build their own custom applications. Yes. Yeah. So I think what we're seeing is early experimentation, but I still think it's gonna be the same: any organization, small, medium or large, is gonna take a portfolio approach to this.

So there'll be some places where they experiment and prototype and use their own code. There'll be a lot of places where they still acquire third-party software, because that just makes sense for them. And I think it's gonna be more of a hybrid, and the organization of the future will be lots of different agents, lots of different components.

And the IT team, with the businesses and the security team, are gonna have to take almost like a portfolio management approach: is all of that hanging together in the right way? Is it conforming to business policies, security policies, and is it being effective in mitigating the risks to that environment?

[00:42:00] And I think that's gonna be a portfolio approach.

Caleb Sima: Do you think that will start creating app sprawl inside of these enterprises? Where you used to have, let's say, 50, 60 apps, now you've got thousands and thousands, with agents all communicating with these apps.

Phil Venables: Potentially. Again, to your analogy about servers: many organizations that have aggressively adopted cloud don't really think about or count servers anymore. They just think about a flexible workload deployment to achieve a business goal. Yeah. And it could be that the future of software, especially in and around agents, is:

Nobody really cares whether you have a hundred applications or 10,000 applications. Yeah. The question is, are you managing that portfolio of activity in a risk managed way to deliver against a set of objectives? And it may be the old ways of counting things don't matter anymore. What matters is the end to end risk mitigation?

Caleb Sima: That's fascinating to think about. Dynamically generating custom applications based off of custom use cases, and destroying or creating [00:43:00] them on the fly, would be insanely amazing to think about, but, like, scary all at the same time. Yeah. And also we have nothing to manage that with yet.

Phil Venables: Well, the good news for security teams in all organizations is the security team is even more needed to help their businesses migrate safely to this new world. Yes. And I think it's a tremendous opportunity for the CISOs and their security teams, again, to exhibit that business risk leadership, not just the classic kind of IT security leadership.

Yeah. So a lot of CISOs and teams have wanted to be more risk focused. Yeah. And this is a tremendous opportunity to do that and add a lot of value to their businesses.

Ashish Rajan: That's a high note to end on. Any final thoughts for CISOs or cybersecurity leaders watching this?

Phil Venables: In the context of AI, just what I said, which is engage with your businesses, think broadly about risk, not just the security risk.

It's an opportunity to add a lot of value. And then also think about how you, the security team are gonna be a, an early adopter so [00:44:00] you can improve your own productivity and capability, but at the same time, learn the technology so you can be more useful to your businesses. I'm gonna keep writing more about this stuff at philvenables.com, my my blog.

There'll be more stuff that I put out on this topic in the future.

Ashish Rajan: I look forward to more conversations as well. But thank you so much for coming on the show. That's great. Thank you. Thank you. Thank you so much. Yep. Cool. Thanks everyone. See you next time. Thank you so much for listening to and watching this episode of the AI Cybersecurity Podcast.

If you want to hear more episodes like these or watch them, you can definitely find them on our YouTube channel for the AI Cybersecurity Podcast, or on our website, www.aicybersecuritypodcast.com. And if you are interested in cloud, we also have a sister podcast called Cloud Security Podcast, where on a weekly basis we talk to cloud security practitioners and leaders who are trying to solve different kinds of cloud security challenges at scale across the three most popular cloud providers.

You can find more information about Cloud Security Podcast at www.cloudsecuritypodcast.tv. Thank you again for supporting us. I'll see you next time.
