Anthropic's August 2025 AI Threat Intelligence report is out, and it paints a fascinating picture of how attackers are really using large language models like Claude Code. In this episode, Ashish Rajan and Caleb Sima dive deep into the 10 case studies, revealing a landscape where AI isn't necessarily creating brand new attack vectors, but is dramatically lowering the bar and professionalizing existing ones.The discussion covers shocking examples, from "biohacking" attacks using AI for sophisticated extortion strategies , to North Korean IT workers completely dependent on AI, simulating technical competence to successfully gain and maintain employment at Fortune 500 companies . We also explore how AI enables the rapid development of ransomware-as-a-service and malware with advanced evasion, even by actors lacking deep technical skills .This episode is essential for anyone wanting to understand the practical realities of AI threats today, the gaps in defense, and why the volume might still be low but the potential impact is significant.
Questions asked:
00:00 Introduction: Anthropic's AI Threat Report
02:20 Case Study 1: Biohacking & AI-Powered Extortion Strategy
08:15 Case Study 2: North Korean IT Workers Simulating Competence with AI
12:45 The Identity Verification Problem & Potential Solutions
16:20 Case Study 3: AI-Developed Ransomware-as-a-Service (RaaS)
17:35 How AI Lowers the Bar for Malware Creation
20:25 The Gray Area: AI Safety vs. Legitimate Security Research
25:10 Why Defense & Enterprise Adoption of AI Security is Lagging
30:20 Case Studies 4-10 Overview (Fraud, Scams, Malware Distribution, Credential Harvesting)
35:50 Multi-Lingual Attacks: Language No Longer a Barrier
36:45 Case Study: Russian Actor's Rapid Malware Deployment via AI
43:10 Key Takeaways: Early Days, But Professionalizing Existing Threats
45:20 Takeaway 2: The Need for Enterprises to Leverage AI Defensively
50:45 The Gap: Security for AI vs. AI for Security
Ashish Rajan: [00:00:00] We're going through a report, Anthropic Threat Intel Report August, 2025. The first one, the most interesting one, was the biohacking one that was used for data extraction, which is kind of different to data exfiltration
Caleb Sima: by using AI to build the strategy and the negotiation tactics between the attack. And the victim.
AI helps you write the message, tell you exactly how much you should ask for. You're not even talking to the real
Ashish Rajan: person. You're just talking to clot cord in the background.
Caleb Sima: This represents a new paradigm where technical competence is simulated rather than possessed, yet they're successfully maintaining employment at. Fortune 500 companies,
Ashish Rajan: if you're wondering what kind of a real AI attacks exists, and Anthropic was kind enough to release their AI threat in intelligence report in August 25. Caleb and I went through that report in this particular episode with 10 case studies that was shared, and the difference is remarkable.
Some of the case studies showed people could not even speak English as a language. Some of them were Russian, [00:01:00] Spanish, and otherwise were building malware, ransomware. Other kinda capabilities to have an impact, either positive or negative on Fortune 500 companies or different industries across the globe.
This episode is for you. If you are curious to know more about the case studies, I'll leave a link. For the case study in the description of the comment as well, so you get to read the entire report In this episode. We also spoke about the surprising things we found in the report, and based on what we are hearing from other CSOs and companies across the globe on how much AI is being used and what kind of AI security is being done, what are some of the gaps that are existing today, and how far are we truly from some of these attacks being done at scale?
Again, shout to and tropic for creating a report, probably the first of its kind. I'm hoping OpenAI and other companies can start coming out of these as well for the threats that they see their AI being misused for. Probably Amazon and AWS as well, if you folks are listening in. But if you know someone else who's interested in threat intel reports on how AI could be used for malicious purposes, definitely share this episode with them.
And if you have been listening or watching an AI security podcast episode for a second or [00:02:00] third time, I would really appreciate. If you can quickly check that you have subscribed to us on Apple or Spotify, if that's where you're listening to this, or YouTube or LinkedIn, and that's where you're watching these episodes, your support means a lot.
It's a free thing you can do to check that right now, and if you haven't, please hit the subscribe follow button in whatever platform you're listening or watching this on. It means a lot that you support the work we do and also means that we get to be discovered by many more people like you as well.
Thank you so much for your support. I hope you enjoy this episode and I'll talk to you soon. The first one, I think for me the most interesting ones, the one is definitely was the biohacking one, where it was more about someone who was able to penetrate into, uh, many industries specifically. I mean, they went across not the big enterprise giants.
They went for things like your churches, your, uh, healthcare, government, emergency service. And they were basically using Claude Code. Oh, actually, yeah. So for context, every attack that was mentioned, or every case study that was mentioned in the report uses Claude Code in some way, shape, or form. This is a report, [00:03:00] so Yeah.
Yeah. I mean, obviously we don't know of the, are there any other versions that are out there? 'cause Open AI doesn't have a copy like this yet, so maybe this encourages them to come over the report. So it, it's what, what was fascinating for me in that vibe hacking one was I think they. They're trying to categorize it.
I think they wanna make it a regular thing. But it basically, the one example that was called out, and this is part of the video that was shared as well, is that they, uh, they basically were jailbreaking Claude Code to pretend to be network pen testers going through VPNs that were available, publicly accessible VPNs that were available from these in specific industries that traditionally are not known to have huge security I guess from a capability perspective.
And they were brute force everywhere through it. Do a lateral movement across the, the org structure to figure out where's the sensitive data. Like, for example, in the church, in the case of the church, they found the list of don donors, how much money and their addresses and personal information. I guess I was [00:04:00] available.
And that was used for data extraction, which is kind of different to, uh, data exfiltration where someone just takes out the data and puts it out for money. They basically were. Tell, letting the person, letting the organization know that, hey, if you do not gimme money, whatever the amount is, we would make it public.
It's almost like threatening them like a goon would. Uh, that's what they were able to do with Claude code. And I think the funniest thing was it kept referring to, uh, and I'm sure we were talking about this, about the other one that. The North Korean one, which, which we'll talk about after.
Caleb Sima: Oh yeah.
But hold on. Before you move, I just wanna talk about the extortion and ransom thing. One, which is, the thing that seems pretty interesting about this is like, by and large a lot of these attackers aren't, sometimes, aren't necessarily the brightest, right? And so what's pretty crazy about the report is by using AI to build the strategy and the negotiation tactics.[00:05:00]
Between the attacker and the victim and being able to say, Hey, the best way to make use of this attack is through direct extortion. Mm-hmm. Or through, you know, like ransomware you know, like, and then it goes, and it, AI helps you write the message, tell you exactly how much you should ask for. That's right.
What your demand is. What is the levels of things if you don't pay, and what's the escalation schedule if you should not pay, and how much damage you can do until they pay. And like giving all of the strategy. And so what's what's interesting is, is as like. As someone who has sort of been through this scenario in real life, generally what we do or what you should do in this scenario is you go and you, if you're the victim, if you're at a company, you go and you hire these sort of [00:06:00] ransomware, you know, attack negotiators, right?
Who are really good at what they do. And what they do is they, they come in, they set up the, the secure messaging system, and then you have a conversation directly. With the attacker and the tactics that these people use are like, Hey, what we're going to do is we're gonna ex, you know, the goal is to extract as much information from the attacker that you can without paying anything.
That is the goal for the ransomware person. And to that. Well, as in like the victim
Ashish Rajan: you mean?
Yeah, yeah. The, yeah. The victim's goal is to extract as much from their attacker. Yep. As possible. Meaning how much data do you actually have? What is your identity, right? Like how long have you been in the system?
Like anything you can get your entire goal is to be a detective and extract as much information as possible without ever giving. And there's all these tactics that you use in order to do this.
Ashish Rajan: And the proof of life thing as well, whether the [00:07:00] data is actually legit.
Yeah. Which is, which is by the way, um, a whole point of like.
Hey, I wanna see how much proof of life. Yeah. Do you send me all of it, because then I know how much exactly where you were, where you're at. So then, then my security team can immediately go, oh, I know what system that came off of. Boom. Like, go do forensics, right? Yeah. Yeah. And so like, you know what's really interesting is like in, at least in our negotiation.
Like they're not the brightest at doing this because you have these really smart people that you hire who are all social engineering. The attacker. Yeah. In order to extract this information, but now they're using ai. To do this for them, uh, which makes them super smart. And like that's kind of scary.
Ashish Rajan: Yeah, because I think 'cause uh, for, I highly encourage people to look at the actual report, not just the article, but in that on towards the end of the case study, they actually had like the, the script that was used where in 48 hours. A certain message goes out, a 72 hours, [00:08:00] certain message goes out and right at the end, just before the case study ends, it says, deadline hours specified.
Do not test us. We came prepared. It's like, why?
Caleb Sima: Yeah,
Ashish Rajan: it's, that's like, as a psychological tactic. 'cause the thing is you would not even know you're talk. You're not even talking to the real person, you're just talking to CLA cord in the background. There is actually absolutely no emotion there.
Yeah.
Yeah. It's really, it just, it's just up levels their intelligence, uh, which actually goes down to then North Korean IT workers. Yeah. Finding employment. Yeah. Yeah. I gotta, I gotta do this quote that I was talking to you about before.
Ashish Rajan: Oh, yeah, yeah. Find it actually, while you do that, I'll just give a quick context of, uh, this was basically North Korean remote workers were designing intra I guess they were generating profit for the regime.
Using, doing basically contract work across the internet. That's the TLDR gist of it. But there was a quote that stood out for you
Caleb Sima: and getting hired and so, so in this report, which was funny, this is the thing I was calling out. [00:09:00] Here's the quote. Most concerning is the actor's apparent dependency on ai.
They appear unable to perform basic technical tasks or professional communication without AI assistance using this capability to infiltrate high paying engineering roles. That are intended to fund North Koreans weapons program. This is another one. Okay. So I gotta, I gotta say this. So, uh, hold on.
Here it is. The most striking finding is the actor's complete dependency on AI to function in technical roles. These operators do not appear to be able to write code. Debug problems or even communicate professionally without Claude's assistance. Yet they're successfully maintaining employment at Fortune 500 companies, passing technical interviews and delivering work that satisfies their employers.
Mm-hmm. This represents a new paradigm. Technical competence is [00:10:00] simulated rather than possessed.
Ashish Rajan: Actually, someone we were joking about this the, uh, the other day and the question was, if you were a threat intel person and you do find the, say, north Korean people in your workforce who are doing a great job, probably a better job than the existing workforce then you are bit of a dilemma, whether do you get rid of them, but they're doing such a good job.
Probably are a cheaper resource compared to what you would have otherwise as a company. Do you choose to continue to fund North Korea while getting your work done or, you know what?
Caleb Sima: This is the thing. It's like, so you look at this as either two, two areas. Either the bar is so low that these people can continue to operate this well.
Or actually the bar is high enough, but with ai. These people can get the job done and to actually to a Fortune 500 company. Who cares? Yeah. Right. Like it's, you're getting the job done. Yes. Right. Your work is sufficient.
Ashish Rajan: I mean, even the, uh, the [00:11:00] transcript of messages sent to Claude code was funny as well.
There's this thing where, what does this mean? So what, what does mean this? It's not even proper English. It's like, what does mean this? We had our first picnic of the season with a smiley face. Yeah, and it's like, and then there's an emoji who says, what does the above thing mean with an emoji of two ang angular brackets.
There's another one that I thought was funny. It is like, how do you use Outlook application? How to please revise my code, how to set up this project, how can I develop, how to check go installed? I'm like, I guess those sound like real things and if you just follow them. But maybe to kind of bring it back to this one, I guess the, the most concerning one to what you were saying as a Fortune 500 dilemma you're sitting with is whether, if you do detect a North Korean worker in your organization, do you.
Get rid of them, or do you change your interview process because it's against your ethics or policy or whatever that, I think that's the problem being created by here, uh, by this [00:12:00] particular use case study. Unless they're not trying to extort money. They're just basically helping companies do better jobs while finding their own, like everyone's in profit.
Yeah, but I mean, to, to the other extent, right? Is. You know, what they're saying here is they are generating hundreds of millions of dollars annually by getting these jobs. Mm-hmm. Uh, they are delivering actual effective work and like they're doing their job. Now they could be also exfiltrating depending upon where they get jobs at, but they're also stating here that each operator can now, now maintain multiple concurrent positions.
Which, which have been impossible without AI assistance. And so my big question is like, actually my prediction in this is like you, especially with Fortune 500 companies, generally there's some sort of background check Yeah. That occurs, right? And so either there's two things happening, like the background check is not occurring or the background check that they do do [00:13:00] are, is so simplistic.
That you can take any stolen SSN or something and any very, very surface level identity theft can now pass to get employed by these identity checks because these guys should not be able to get hired unless they're doing some pretty impressive or thorough identity theft, right? That occurs. So that means if this is continuing to happen, which will continue to happen.
That companies, anyone who is doing identity checking. Yeah. That you have to now go to their higher tier, more thorough for everybody. It doesn't matter who you are because you cannot continue, quote unquote to, to do this per se. I would imagine. So I would think that these services should explode.
Ashish Rajan: Actually, that's a good point because every job that. Anyone technical gets hired for this before you even get the job, there's technically a background [00:14:00] check for your driving license, passport, some form of id, government information, social security number. There's so many things that you kinda have to. I guess pass on and unless there's actually genuinely a gap on the other side, but people just collect information and they don't check it because it's technically most, in most scenarios, it's usually a third party.
It's not someone inside the organization. It's usually a third party.
Yeah. Yeah. And again, I think that usually there are probably tiers there. They might say, for lower level workers, we don't need identity. Or if we do on these things, they're the very, very basic, which is take an SSN, make sure that there's no, criminal record.
Right? Mm-hmm. Versus take the image, take the person their identity match to make sure that they are correct, look at their background, make sure it matches the background, what they're saying. Like, none of that happens. And so I could foresee here where, you know, companies like Checker and [00:15:00] others have to start offering a deeper.
Identity background process, and they would make a ton of money where Fortune five hundreds now have to put things in place to do these identity checks at a deeper level, right? Because you can't have this continue to happen.
Ashish Rajan: Yeah. And I guess maybe to going on that same mitigation path, the first use case we spoke about where there was the person who's basically using, uh, yacking as we were calling it, I think is there a mitigation there?
I think we kind of, we kind of went through what the thing was, but is there is. Because once you're impacted by it, you're already impacted. To your point, the only mitigation you have is use that opportunity to extract as much useful information you can to identify how a proof of life that we were saying, and b.
What systems potentially have been impacted, identify them and try and forensic, but the data's already being extorted, then you're
Yeah, there's, there's there. Yeah. At that [00:16:00] point, what we're just talking about is fully automated, ransomware negotiation tactics, right?
Ashish Rajan: Yeah.
Uh, which of course, the, both the attacker.
And the victim will have to figure out. I mean, the only, the only thing I can think of is actually as the victim, I would try to prompt, inject the attacker.
Ashish Rajan: Yeah. Once you realize, yeah. It was the, the interesting thing, the reason I brought that up is also because in the uk I think in the last few months, there's actually a bill passing where companies would not, it would be illegal to pay ransom for ransomware, kind of, uh, or, and you have declared, so there's something on the lines, I can't remember the exact specifics of it, but essentially it would become illegal.
To pay ransom if there's a ransomware situation or whatever. But then the question is people may choose to not declare it's a ransomware as well. How do you, oh yeah. You can't really go into a company and just verify ransomware. Which kind of also is a good segway into the next one, which is also a UK based ransomware as a service, which is, they're calling it RaaS.
A UK based threat actor was able to leverage [00:17:00] Claude to develop market and distribute ransomware with an advanced evasion capability, uh, in the dark, uh, darknet dark web forms, uh, for as low as $400 for ransomware. DLL is almost, I think. If you were to look at these three scenarios the, in the injection point is still very traditional, like your network is still, which anything which is exposed to the internet.
That was the first two that I can think. First one I can think of, the second one is the insider threat that we were talking about. The third one also seems in a similar context, it can go down the path of a an external, like what whatever external resources are, or scam or it's not that. It's, it's a, it is a different way of doing an existing security attack is where I'm coming from.
Yeah. I think actually what it does is what AI allows you to do is lower the bar of expertise, right? Yeah. Yeah. So for example, when you're talking about ransomware, obviously. The way these things work is there are generally a set of very smart people. Who build [00:18:00] ransomware frameworks and platforms.
Yeah. And off of these platforms, all the script kitties use them in order to encrypt and, you know, do all their stuff and build their ransomware and then they go and then deploy them. Right. But there's actually very few suppliers of these really smart ransomware tech. Um, and that has allowed, I think, on the defensive side, the ability to identify.
Standard methods, signatures and abilities to detect these kinds of frameworks in at least some degree. However, AI now allows the common person to develop their own version of this, right? Like for example you know, I'm reading this direct from the report. Most concerning is the actor's apparent dependency on ai.
They appear unable to implement complex technical components or troubleshoot issues without ai assistant assistant yet are selling capable malware, right? And so what this does, this means that. Versus now what it [00:19:00] used to be. Let's just make up a number. There are 50 platform providers in the malware, malware world.
You can now have hundreds and thousands of malware providers because it, it, each person is uniquely just using AI to generate their own versions of it, right? Yeah. Which also I think on the defensive side means signatures become much more difficult to create, uh, way, way more randomizations of it. The capability of these things is rises and there's just so much customization.
It becomes harder to be able to say what is the right method, what methodology is being used versus not.
Ashish Rajan: Actually, it is kind of, uh, we were talking about this on the episode we did with Jason Haddix and Daniel Miessler where. The whole concept of script, Katie, running a Metasploit, uh, was, was a joke that was running around for a while where, hey, the difference between someone who's a script kit versus not the, I guess the, the person who's a bit more experienced understands what happens at the end of the, or the result from Metasploit, whereas the script kit just running through it and go, Hey, finds a [00:20:00] few things and I just wanna copy paste a script that I can find.
This is an interesting one because I could be listening to this interview. Use that as information, just the information that it is possible to do this as a starting point for me to explain, or for me to ask Claude code or OpenAI, whatever, and just say, Hey, explain to me how, if I were to create ransomware as a service, what would that require?
What kinda skill set would I require for that? And how far can I go with it? What can you help me with this? And hey, you don't have to be a networking expert. You don't even have to think about this. You can be, Hey, I'm kind of thinking let's go for the financial services. Let's find the ones. Which ones have the lowest security in financial services?
Oh, the startups. Let's just go for that. You can just take that chain of thought and drag it all the way, and your AI would, that's a great idea. Let me find that for you. Like it would just keep going,
Caleb Sima: but then this then gets into. We should maybe discuss this at the end, but you know, obviously Anthropic produced this report and their [00:21:00] mitigations are, they are both identifying these types of techniques and tactics and are also removing accounts or banning this kind of behavior.
Yeah. Um, and there are two ways at which these attackers are abusing Claude, in order to do this. Obviously the first is they are jailbreaking, right? Yeah. Like there are plenty of ways to jailbreak, uh, even existing today, Claude. Uh, and two, they are in some sense jailbreaking by the sense of saying that they are, manipulating the system to say that they are active white hat security.
Practitioners trying to do tests.
Ashish Rajan: Yep.
Caleb Sima: And this is where, you know, I, I think you and I have had many sort of themes around this in the past, but like, where is the line that gets drawn where, you know, doing things for safety and control reasons, you know, is now treating the people who use these models like kids versus [00:22:00] where it isn't.
Where is the line that you draw? Is it at, obviously there's a line that gets drawn on child pornography for generated imagery, right? Yeah. Everyone can quote unquote agree on this. Yeah. But then when you start getting into this, where I can use AI to do attacks against other systems. Yeah. To the average person, they would say, of course you should.
You, you should monitor that. Which by the way says another thing about privacy, right? Mm-hmm. Which this report clearly states we have no privacy in interacting with philanthropic, uh, and probably any other model for that matter. But, then it also says, well, hey, you know, there are hundreds and hundreds of thousands of people who legitimately need to use AI to do exactly what this.
So, oh, so then you have to get certificates, like then you have to get permission from Anthropic in order to do this. Then does that reduce the amount of innovation that can then occur from people who [00:23:00] are, you know, just a kid in their bedroom wanting to come up with the new security tool, which by the way does happen?
Yeah, a lot. Yeah, by the way so now there's this very, very gray line around, around this, and so like, again, we could probably talk about this.
Ashish Rajan: Yeah. But I guess to add to another layer to that. It's the, the general, the acceptance of AI usage across the defense has been slower as well, so that doesn't really help in this particular case because as much as to your point, we have read this report, there's a huge bias towards, it's a very Claude focused report, a in my mind.
They're starting to position themselves as a service provider, which is why they're doing threat intel report the same as Amazon does, same as Microsoft does. I'm hoping OpenAI and other people start doing it because a, at least in my mind, yes, you're right, privacy may have lost its meaning a bit to an extent, but at least there's transparency and the misuse.
So people are more aware of scenarios that they can ex look out for. 'cause Microsoft does this as well, but in terms of what their researchers have found. [00:24:00] I'm assuming they don't go through the logs of the customers. They look at what's happening and what's been reported to them. And actually I haven't checked this, and maybe it's a good thing to fact check whether Claude, sorry or Anthropic actually provides a threat intel service to their to the customers, same as Microsoft and Amazon.
'cause if they do they do that, then someone's actively researching. And maybe to your point, it's not a breach of privacy, just someone actively finding out someone's blog somewhere on the internet for this. But to your point about the gray area, I think there's something to be said about the fact that yes, there is a need for this, but on the flip side, at Black Hat and across the board, I've had so many conversations about people not using enough AI in their work, in their day to day operations.
It's almost like what's going on? Why? I mean, on one side you have these reports which are saying that. Hey attackers. Start using it. Start doing something about this. This is, this is it. This is, here you go. Case studies, 10 case studies for you. [00:25:00] But on the, on the other side, when I talk to people in Fortune 500 and others, there, there are certain slithers of people who are heavy into ai, but a lot of them don't have AI usage.
Like to the point that 5% or 10% is AI usage in a lot of companies. Yeah, but I don't know where this, like, where, where, I mean, there's a question to be asked over. I, I think where I'm coming from this, there's a question to be asked on the other side as well, that what can we do to increase the adoption of AI so that we probably have a better answer and not be in this gray area.
'cause at the moment I don't think we have enough data points.
Caleb Sima: Well, like, you know, well just take a couple of these points. So let's take the first one. The first one is vibe coding attacks. Yep. Okay. So what does this mean, practically speaking? What this means, practically speaking is that attackers are now getting the ability, value wise for them is to have broader scope.
Mm-hmm. With deeper level of attacks. Which is, hey, it used to be I could port [00:26:00] scan everybody and then I could, you know, maybe run metasploit. If I do that on things that are vulnerable and that's my extent now, I can not just port scan people. I can au automatically reasonably do intelligent phishing.
I can also reasonably intelligently respond to those port scans using a variety of tools to do deeper exploitation. Which means that low hanging fruit in enterprises are now lower, right? Yeah. The benchmark.
Ashish Rajan: Yeah. Yeah,
Caleb Sima: yeah, yeah. And the things that are higher are, and by the way, we've talked about this in all of our things.
This is just proof that this is happening. So what is it that, what should the enterprise do? Like how are they going to use AI to help that problem? Because theoretically, you know, I'm just gonna take a stab. Love to have anyone argue with me. But there's only two things you can do, and in enterprise, you can either one.
Patch faster, right? Yeah,
Ashish Rajan: yeah.
Caleb Sima: Which is, you know, by [00:27:00] the way, like hard in any organization. Like for what? Like this is a, this is a red tape people process problem. Yeah. So what's AI gonna do to solve that? We haven't seen what that's gonna be. Yeah. Or two. You have better detection. Methods, which by the way, I think people are working on, so we have ai soc, we have these startup companies doing detection, engineering, ai, you know, so there is some aspect, but we we're, we're too early, right?
Like. There's no one and no one is smart enough, capable enough time, resource enough, let I to be able to say, across our enterprises, you're gonna now have smart engineers building custom AI solutions in enterprises. Like that's just not to, to stop these attacks. Like that's just not gonna happen. And so then you can't do anything with ai.
Ashish Rajan: And I guess maybe to add to another layer to your. Thought there is, we are, we have not started producing AI native applications yet. We're still infusing AI into existing applications in a lot of places. [00:28:00] Yeah. I mean, so there's a scenario that we haven't even covered for, because no one hasn't got gotten to that point.
We're doing agent workflows. We haven't gotten to the point of beyond the detection and patching part, and, uh, maybe a good time to remind people of what Daniel user told us. Even if we get a GI, someone would not press that red button ever in the organization because we'll just be at that point where going back to the report as well, maybe sharing this report is just a starting point for people to realize that we should probably start investing more time in it.
But I up with you. I, I don't see how anyone else can argue with your, uh, two patching versus detection faster. Uh, at least the way AI market is going, we have continuous red teaming, the AI soc market, the open source detection market for AI code being produced. So, yeah, that, that all signals point to that at this particular point, but unfortunately, I don't think any of them, like maybe the threat, uh, the red teaming one kind of tackles the vibe hacking.
Caleb Sima: Yeah, I mean, you could, you could say, you know, the AI red [00:29:00] teaming companies are, are going to show you the level of vulnerabilities that at least the attackers will be able to see. That's, yeah, that's probable, right? But then it goes, but if you ask any enterprise the problem becomes, well, like, I already know I have a thousand vulnerabilities.
My problem with fixing them. So which one? That's all they, who knows how you go with that? Yeah. But even look at the North Korean hiring one. Yeah, okay. So they're using ai. To simulate effective workers, which they're doing a great job at. Yeah. Uh, and, and then all these companies are hiring them.
Uh, you know, the first question maybe, you know, like we ask is do they care? 'cause maybe question. Yeah. You know, as a CISO of, of a company, I would care about this. So then what do you do? Right? What do you do about this? You know, the answer isn't necess is the answer to use AI against their ai. Maybe, I guess.
Yeah. Uh, the answer that we're doing today is we're saying we're just video. If you one, don't allow remote [00:30:00] work only allow in person two, uh, do video calls with them and make them wave their hand in front of their face and look to see if they're reading off of a fucking prompt when you're doing it.
Yeah. But by the way, like when you're like a Fortune 500 company, the amount of employees that you're hiring, and at the level of tier you're hiring. You can't enforce this, right? Whether is the person who you're interviewing using AI to answer your questions it would be cool, maybe since we're coming up with cool ideas that you could have an AI thing, look at the person and, and they can see whether they're reading or not.
When they're doing a video. Oh, just with the eye movement? Yes. Yes, that's right. With the eye movement. Of course there is AI already that you can install on your camera to make it so your eyes look at you without, without going into the reading. Yeah. Yeah. But that should be easy to detect, like, okay, that's clearly like an AI thing
Ashish Rajan: that's going on.
Actually maybe, yeah, to your point, may, that's the closest we can get. 'cause um, I don't know if you've heard of this company called Clearly. Was it Cluey? Have you heard of this company? No. Clearly. So it's a young kid funny [00:31:00] enough, basically C-L-U-E-L-Y. He's, he is going viral on social media.
Basically. He's, what he's done is kind of what you were saying. He's basically helping salespeople, interview people, use ai. Oh, cheat, yeah. Yeah. Cheat, basically. Yeah. That's the one. Yeah. Yeah. Have you heard of them?
Yeah. Yeah. I have, I did, I did hear about this before. Yes. He took, he took giving students the ability to cheat on interviews.
That's right. To now positioning it as sales. Yeah, yeah, yeah, yeah.
Ashish Rajan: By the way, he didn't even have a product until like three months ago.
Caleb Sima: Right, right. I didn't hear that. The only
Ashish Rajan: thing he had, I think he used Claude code or OpenAI to vibe code his way for the first version that was used by students.
Yes. And he, he basically used that for one whole year. Yes. Before he brought to the sales point of, Hey, I can charge people money for this. I'm like, brilliant kid. All the power to him. But it kind of goes back to what you were saying.
Caleb Sima: But by the way, like I, I remember first reading that and I was like, good for that dude.
Yeah, yeah. And everyone was, everyone was hating on him. I was like, yes, this is
Ashish Rajan: [00:32:00] no's same. There's a sense of envy. But I'm like, oh dude, I'm, I'm like all for it. Like it's a great disruptor and probably his early his life to go ahead and do this as well. But bringing it back to what you were saying about reading and picking up people for what they're doing, I, I feel like this is obviously the identity side of what we're talking about, I think the report also had some of the fraudulent side of things as well.
I think that one of the cases around the whole MCP, like I think fraud as a market, one would've thought is very evolved. And it was bound for disruption in terms of what the fraud ecosystem would be, how it would be impacted. Especially like things like synthetic identity was one that stood out for me.
Where maybe there is a link between synthetic identity and the North Korean workers on how, I mean, you can get away with that.
Caleb Sima: There is isn't, remember when we brought we brought, um. A couple people on, for, we, we, we talked about our whole identity episode, remember? Right. Where kind of like Yeah.
With Adrian. With Adrian, [00:33:00] yeah. From um, yes, from the What's the
Ashish Rajan: Identity Company? Which is the Open No. Is the,
Caleb Sima: uh, Tools for Humanity. Yeah. Tools for Humanity. That's it. Yeah. Yeah. Tools for Humanity and the orb, right? Yeah. Orb for World Coin. Yep. And like. This is like truly like, you know, you as you know, my big belief in this, like this has to come.
Yep. Like if World Coin were implemented and forced by companies, it would solve this problem. America, yes, in the sense that there is one private key, quote unquote, to a human in the ability to say synthetic identities don't necessarily quote unquote exist in that, that frame it, it is tied to some human right.
So therefore I know now whether that human is cheating or not. That is a different problem to solve. Yeah. But like, you can at least tie it to say this is not a synthetic identity.
Ashish Rajan: Oh, yeah, yeah. You give like a, at least a, a genuine source of truth for it. I think, uh, I, some, something that I find interesting about humans is that we are [00:34:00] really good at finding loopholes and the, the, the, the simplest of loopholes in this world.
And you almost think, how is that even a thing to, to your point about while the world was talking about how to use AI for, you know, your job's gonna be replaced. I'm gonna, basically, the entire world is being threatened on their job being replaced by ai. The hackers on the other end, or the, sorry, the bad actors on the other end are flipping the script and going, oh my God, my job is so much more easier now I can use AI to do all this stuff that I could not do before.
And people wanna get rid of the, so it's almost like how you look at the perspective makes such a big difference in this particular thing. For me, to your point about the identity piece where we know what the problem is, but the acceptance of it would be harder. Even though the government has, at least, I believe the government is all for AI usage, but no one has built anything for AI ethics or AI act.
Beyond the EU Act that exists. There's nothing in America at this point in time. Is that right? In terms of compliance or, I mean, there's a whole ISO 40, 40 2001 or whatever they call it, [00:35:00] but that's just a framework for if you provide AI services that's not really security Yeah. Against these things. So I'm curious as to how this evolved, but I, I, I just wanted to go through, there was one for.
Synthetic identity powered as a, as a fraudulent one. There was a romance scam bot. I obviously, how different would be say your no-code malware development to the malware distribution campaign?
Caleb Sima: Well, there's just a lot of different points to, to all of these in the sense that I think malware distribution campaign is.
All about sort of the ability to create phishing, social awareness, all of this kind of stuff, right?
Ashish Rajan: The next one on that list is Chinese threat actors, specifically targeting Vietnamese IP ranges. Credential harvesting using hydro hash. So like, they don't, they don't know this like
Caleb Sima: standard, like, okay, of course they're gonna be using this.
This is just automation.
Ashish Rajan: But I think the interesting part for me is [00:36:00] that everything was done using Chinese communication, the in Chinese language.
Caleb Sima: There is a no code malware development campaign, which is Russian. There
Ashish Rajan: you go.
Caleb Sima: But
Ashish Rajan: now we're a mul world as well.
Caleb Sima: Developer. Yeah. Creating malware with advanced evasion capabilities.
Ashish Rajan: So does that mean it's an interesting one. 'cause, you know, um, traditionally everyone has taken that as an advantage that most, developed quote unquote developed companies or fortune 500 and all of that. They're all English speaking. English first. But then because of ai, whatever your language, doesn't really matter what it is.
You can, it just, AI just translated it for you. Even if you are in one of those top, I don't know, a hundred plus languages around the world that'll have enough data points, you can basically translate your language into a malware or anything that you, I mean, you can obviously do Yeah. Language doesn't matter.
Caleb Sima: Yeah. Yeah. Let me talk about this Russian one. So this guy developed advanced malware and this guy, what they're saying is he clearly knew what he was doing. [00:37:00] He was an advanced engineer himself, developed all this, you know, amazing anti analysis techniques. However, what's really interesting was that malware samples appeared on virus total within two hours of Claude generating the code.
Oh, with submissions from Russia, UK, and Ukraine indicating potential active deployment. So what's interesting about this is multiple things. One, how fast the guy deployed this thing. Claude generated the code and somehow, I guess it worked well enough that he deployed it within two hours of this thing being deployed and actually.
Even more crazy about this sort of data point is that people with all of his advanced anti detection, anti analysis techniques also within two, so in technically speaking, you could say, let's say it took an hour or an hour and a half for him to deploy it, [00:38:00] and then it took another 30 minutes for the victim to recognize it was doing something and then also upload it to virus.
Total. Yeah. Wow. That's pretty amazing. That's a, it's amazing one that he created this thing, deployed it within two hours, or actually two hours was the time that it got uploaded virus total. So he deployed it immediately. Yeah, once claw generated it. And then also the fact that virus to that, people recognized it, identified it, uploaded it to virus total within that two hour timeframe.
Ashish Rajan: But I think what I'm also just reading this, it is fascinating. Is. Basically it, it was smart enough to have a telegram bot infrastructure for command control. Command and control as well. So it's basically building infrastructure, building data filtration capabilities with screenshot functionality. Yeah.
It's not just like, yo, it, I'm just copy pasting data. I'm taking screenshots and stuff as well. And it can disguise malware [00:39:00] itself as Zoom, a cryptocurrency trading tool. Yeah, so primarily Windows based. 'cause this person seems to be quite good at Windows internal knowledge as per the report. But it's a, it's an interesting one.
Combine that with act, how much how much malware is already on the internet. If you trying to disguise it as popular ones. I mean, I'm not surprised this guy got that far. I guess because I think, was it this one or the previous one where they were using social media campaigns, like sending GitHub and LinkedIn messages?
Was that This one? I think, I can't remember. This also goes
to this. Romance scam bot thing, which is just phishing and using that as a messaging technique, which is, this is a no-brainer, obviously.
Ashish Rajan: Yeah. Primarily in Chinese as well. Funny enough, I wonder, and it was, it was a telegram bot, the romance scam bot.
Yeah. Multimodal AI to support, I mean. We do live in a world of Tinder. I guess you don't, you never really know the person. You keep talking to them. I imagine. So we are not far should probably, uh, I think we [00:40:00] should talk to someone from one, one of those organizations as well. 'cause they would see a huge spike of people, like people who are not confident in speaking to the other, I guess the other gender, just using a bot.
But they're smart. They use a bot for con conversation. I was watching a movie, I think a Ben Affleck movie where he is artistic. Yeah. And he figured out a file formula on what preferences girls had in small little towns in the us Oh yeah. Yeah. I saw that movie too. It was hilarious. Counted. I counted two and I'm like, and he said he just completely hacked the algorithm for it.
Yeah, yeah, yeah, yeah. But then the moment that really talked to him, then like, oh my God, this guy is so weird. Like no one else got anything. I'm like, wow. So I wonder if that's not far from a romance bot as well, but hopefully, uh, people use good things. No, you don't
Caleb Sima: have to. 'cause you could just use HeyGen.
Create your own video avatar of what, whatever person you want, and then have it just real time talk to the person over video calls. I never see them in person. Yeah. Never see them in person. That would be a great [00:41:00] relationship. I mean, we're not even like a couple years out from that. Can you imagine what this is gonna look like in two years from now?
Ashish Rajan: Well, uh, what was that movie where the person was talking to the guy who does joker? And he was talking to a theory version and basically gets addicted to it. I don't know, it's like a very colorful movie, but there's a movie. I can't remember remember the name? Oh,
Caleb Sima: her.
Ashish Rajan: Her, yeah. But there's this particular scene, and I haven't seen the movie, but there was this scene floating around everywhere when the first time I think ChatGPT went down for a few minutes and the scene is primarily this guy's in his office talking to whatever her is.
She just basically come, disappears. This guy goes into panic mode as if family armor died. He's running the streets, he's tripping over staircases, trying to find, go back home. 'cause the home is where the main server is or whatever the thing was. And as soon as he is about to enter the train station, it just comes back online as if nothing has happened.
He is [00:42:00] like, Hey, are are you? Are you, are you okay? And she's like, yeah, yeah, I'm fine. Oh, sorry for panicking you. And I'm like, wow. Are we too far from that? I mean, I'm still very dream feature, but hopefully we don't get the romance bot. I'm pretty
Caleb Sima: sure we're, we're already in that world, man, where you you have psychological dependencies on AI and its ability.
Like all of that I think is very, very true today.
Ashish Rajan: Right. It probably doesn't help when Open AI releases services that are in the health space and like now we are now think they have released also job and certification as well. They have a consulting service and I feel like they're testing the market.
For which service would have the most adoption and then basically just release a product in that particular space. But bringing, 'cause they have so much data set, why not just use it to five, figure it out? 'cause everything seems to be releasing. Next year they could do the Google Cloud thing to call it beta and see how many people respond, how many people sign up.
So I don't know. I think [00:43:00] it'll be fun for them. But bringing it back to the report Carding store, they only, we only have one lift, which is the carding store powered by ai.
Caleb Sima: Which to me is just using AI to help you scale and write better software. Uh, it just happens to be for illegal purposes.
Ashish Rajan: And it's in Spanish speaking, it was a Spanish speaking actor.
Yeah, global. We have Spanish and Chinese, Russian, uh, and only a few or one British guy with the, with the uk. So we are definitely going global in terms of the geographies we've covered. So I guess maybe to kind of sum it up what's the, what's your big takeaway based on all of this? I'm curious, any, any top two things that come to mind?
Caleb Sima: I think the, the first thing that comes to mind honestly, is we knew this was coming. Yeah. We talked about it. I mean, we've talked about it for a couple years actually. So the thing that's the most surprising is, frankly, that it, it, you know, just like any adoption for criminals, it takes time, right? So we, it's now been a couple years [00:44:00] and the things that we knew was coming are now here.
The other thing that I feel. Is these are very individualized types of reports. They feel very singular in their nature. This is not happening at any real scale yet. I feel. Like I think that, you know, mine is sort of the North Korean where they mentioned, tons and tons of incidents of this happening.
You, a large part of the report feels very singular, right? One actor or a couple threat actors or these types of things versus this normally in two years will probably be done more like Verizon's breach report, and it'll be done based on stats like. Hey, we had, you know, like there are over 35,000 people doing this kind of attack using our, our service or you mean
Ashish Rajan: the volume is not there at the moment?
The volume
Caleb Sima: is not
Ashish Rajan: there. Yeah.
Caleb Sima: Yeah. So we're still very, very early. Yeah. So you're, you're seeing the early adopters yeah. In doing this. And the thing that, you know, we don't think too much about is, it's not about the new [00:45:00] threats that come from ai. It's just about the. Professionalism and scalability of existing threats that come from ai.
You can now build, you know, sort of the credit card thing, which is you can now build real enterprise software. Yep. At which I think a lot of these attackers weren't necessarily, let's say, well versed in, but the commoner, the script kitty, can now build really good, reliable scaling software. Yeah. Done with true great automation.
At scale. And I think like that is something I'm not sure, I think many of us are focused on the new kinds of attacks that come versus just, oh, the level of sophistication of the existing the reliability, the scalability of the existing is good.
Ashish Rajan: I think for, for me, one thing that stood out over here, I mean, I agree on the first point on the indi, like the specific things that have come out of the report.
One thing, which is probably a, a relief and a surprise [00:46:00] here that I wonder if, given the opportunity, some of these people would've gone for, even though they don't know much about the systems, but if they would've been smart enough to go down the path of finding business logic flaws, and not just a, like a lot of them are brute forcing A VPN sending a phishing email.
I'm sending GitHub or LinkedIn requests and all of that, which is, it's, and to your point, it could be low stakes. A lot of people can do a lot of security awareness around it, but to me, I guess if they start cracking into that, I, and this does not make me feel that they're far from it, it just required the person to be motivated enough to go, I wonder what else can I do?
What is the, yeah. Because they probably did not even understand, like because they never worked an enterprise or a large company, they didn't even realize that, hey, they could be interconnected systems on this that I could go into. And I think that's where. I feel the real quote unquote, the real threat is, and to your point, that's when maybe people start feeling, Hey, this is something much more.
The, the second thing that the, and I'm [00:47:00] gonna go back to the North Korean one. It is a great example and maybe because I'm seeing a lot of use cases there, a lot of CISOs that are spoken to, they're finding that a lot of AI usage is quite low within the organization. If people, and no disrespect people of North Korea, please don't hurt me.
But I think the idea behind. You don't even need to understand the language, but you can still do a great job just by using Claude. To me, it sounds like, hey, is there any process that is around you that you can use to integrate with Claude and perhaps use that as a starting point for how can I start using ai, what my AI use cases are, and build something like this for yourself?
I think we've been doing this as custom applications already in a lot of organizations. I think a few examples of this have existed for years where people may have a great SIEM provider, but they still have to have internal metrics for what's a high, low medium, what's a false positive? Because it's not applicable to me, but this big SIEM provider thinks that's [00:48:00] really high, very high for me.
We've done that before, but why not use like almost abstract that layer even more? I think that's what, to me, this is the hope for that is possible today.
Caleb Sima: Yeah, it was, there is, uh, there's the aspect of basically saying, Hey, if North Korean people who cannot speak English and know nothing about technology can actually be contributing reliable high tech software engineers.
Yeah. Using ai, yeah. We are clearly not maximizing our capabilities with it. A hundred percent.
Ashish Rajan: A hundred percent. I'm like, what is going on is like, I guess we just don't have the motivation. We just have the safety net.
Caleb Sima: It. And you know, there's another aspect that I, I'm trying to work on and maybe others, which is if they can elevate themselves to that level, what can you and I as individuals, we should be able to elevate ourselves up a couple notches too, Jess and AI and how are we doing that?
Ashish Rajan: Right? Yeah, I think I'm definitely keen to kind of, [00:49:00] uh, do some use cases for this as well. 'cause I'm, I'm curious, I, I've spoken to a few people here. Who have used AI within their teams and building internal threat intel systems to have that. And that's why I had the example of the abstraction one. 'cause I'm like, oh my God, you guys are doing this.
But there are very few examples of it, and most people are trying to change their angry email into a non angry email. I'm going, that is not, it's like, you know how when YouTube or internet was invented, people are just going more for cat memes and dog memes. Yeah. Like this is literally the same kind of usage of, uh, AI as I, I feel.
Caleb Sima: I can tell you. That. I think a lot of that does exist. But you know, for example, like I, I have a portfolio company Yep. That is doing some mind blowing things in detection, engineering. Mm-hmm. Like things that are like, this was impossible. A year ago. Yeah. Two years ago. There's no way we would be able to do this kind of stuff.
And so I do think this technology is there, it's just, it's being created right now. Yeah. Yeah, [00:50:00] of course. Quite early. If you're doing it, you're gonna start up and, you know, you're probably in stealth a bit right now.
Ashish Rajan: Yeah. But do you also feel that, and I think maybe this is I, I don't wanna go too off topic, but I think maybe this is the topic we should talk about in terms of the use of AI.
For security teams within organizations, within enterprise in terms of how they, like, we've obviously spoken a lot about security for AI as a team for a while. We haven't not gone into the AI for security part. And I want, because that is also to, what we are seeing is there's potentially a gap there that just needs to be filled in some way or shape or form.
And we, we did the whole episode on white coding. And maybe some more use cases around that to what you said. Maybe it's from the portfolio companies you are working with, we saw what are they doing, which we can bring over to other people and how they can use that as a, because clearly many people do understand they're not gonna build an AI product.
Whereas we, sorry. Yeah, yeah.
Caleb Sima: I mean we, this is top of mind. Actually this week. Yeah. I'm [00:51:00] holding an invite only thing for very forward thinking CISOs who are using ai. And the exact topic is, this is a dinner event that I'm doing. The exact topic is practical real world AI you're implementing right now.
Inside of an enterprise. Yeah.
Ashish Rajan: Yeah. I would love to, yeah, I would love to hear your feedback from that particular one. 'cause I'm definitely curious as to where and how, 'cause I think there are not many examples being spoken about. I'm trying to fill that gap as well. So I think that that would be great insight for people to even know what some companies are doing.
'cause at this point in time, no one's talking openly about this.
Yeah.
Ashish Rajan: I'll, uh,
Caleb Sima: on the next episode, I'll, uh, we'll dump on that dinner.
Ashish Rajan: Perfect. Alright so maybe that's the conclusion of this episode as well then. But hey, we understand the gaps. We've shared the top two, few, top few takeaways from the report, but people should definitely check out the report and hopefully OpenAI and others start building.
'cause I imagine Amazon, Microsoft can come as a report as well. They have threat research team, they're building or giving capability for AI [00:52:00] processes, so hopefully they do as well. But thanks everyone for tuning in. Thank you for watching or listening to that episode of AI Security Podcast. This was brought to you by Tech riot.io.
If you want to hear or watch more episodes of AI security, check that out on ai security podcast.com. And in case you're interested in learning more about cloud security, you should check out our sister podcast called Cloud Security Podcast, which is available on Cloud Security podcast.tv. Thank you for tuning in, and I'll see you in the next episode, episode.
Peace.

.jpg)

.jpg)


.jpg)
.jpg)

.png)








.png)

.png)
