BlackHat USA 2024 AI Cybersecurity Highlights

View Show Notes and Transcript

What were the key AI cybersecurity trends at Black Hat USA? In this episode of the AI Cybersecurity Podcast, hosts Ashish Rajan and Caleb Sima dive into the key insights from Black Hat 2024. From the AI Summit to the CISO Summit, they explore the most critical themes shaping the cybersecurity landscape, including deepfakes, AI in cybersecurity tools, and automation. The episode also features discussions on the rising concerns among CISOs regarding AI platforms and what these mean for security leaders.

Questions asked:
00:00 Introduction
02:49 Black Hat, DEF CON and RSA Conference
07:18 Black Hat CISO Summit and CISO Concerns
11:14 Use Cases for AI in Cybersecurity
21:16 Are people tired of AI?
21:40 AI is mostly a side feature
25:06 LLM Firewalls and Access Management
28:16 The data security challenge in AI
29:28 The trend with Deepfakes
35:28 The trend of pentest automation
38:48 The role of an AI Security Engineer

Caleb Sima: [00:00:00] There's no one in your organization, if it's a decent size, that's not using Copilot or some form of AI generation. It doesn't have to be Copilot specifically, but definitely AI-generated assistance helping you code.

Ashish Rajan: I guess we shouldn't really have a job in the first place. If they were just doing their thing,

Caleb Sima: I prefer the security should be invisible versus security should not exist.

A couple photos sent to the right people will get you arrested, will get you thrown in jail, will get you thrown into a court case. And you can do this with just a click, click, generate.

Ashish Rajan: Black Hat is probably the underground cybersecurity conference. It is one of the most popular cybersecurity conferences out there, where a lot of CISOs and a lot of technical practitioners go.

And Black Hat 2024 was not a disappointment at all. They had the AI Summit, they had the founder summit, they had the CISO Summit, and they also had a lot of conversations around the AI Village, which is in DEF CON as well. So in this episode, Caleb and I basically look back on Black Hat 2024, especially [00:01:00] the AI parts of it.

For example, what were some of the themes at the AI Summit? What were some of the things that CISOs were concerned about as they were walking the floors and trying to find out from other CISOs, hey, what should you be doing? Also, a brief teaser on a panel that we did with the CISOs of DeepMind and Anthropic.

And we have a conversation hopefully lined up with OpenAI's CISO soon as well. But at the moment, these are some of the learnings we had from Black Hat. And it was definitely, let's just say, interesting where we landed, especially now with the US election being this year and deepfakes being top of mind for a lot of people.

Also, whether there are actual use cases that people can walk away with and implement in their cybersecurity platforms. We also spoke about the three broad use cases that I found people should be concerned about or looking into for their security programs, especially if you're looking from a tooling perspective.

All that and a lot more in this episode of the AI Cybersecurity Podcast. If you know someone who is looking into building an AI cybersecurity program, or is a [00:02:00] leader, a CISO, who's just curious about this and wants to know what happened at Black Hat from an AI perspective, this is the episode for them.

Please do share it with them. If you're watching this on YouTube or LinkedIn, give us a follow and subscribe, because that lets the algorithm know that, hey, there are people like the AI Cybersecurity Podcast who are talking about cybersecurity in a way that it's not Skynet. And in case you're listening to this on Apple or Spotify, definitely leave us a review or rating, because it helps more people find out about us and lets people know about the good work we're doing here.

I appreciate all the support you give me, Caleb, Shilpi, and the rest of the AI Cybersecurity Podcast team. It definitely means a lot that you've been talking about us. And I'm so happy to see a lot of you at Black Hat.

Hello, welcome to another episode of the AI Cybersecurity Podcast. So Caleb and I are, I was going to say, fresh off the boat.

Sounds pretty wrong. Fresh off the Black Hat boat is how I would describe it. So today we're talking about Black Hat, which is Hacker Summer Camp, for people who are not aware. To set the scene, Caleb, because you've been going there longer than I have, how would you describe Black Hat to people who have never been there?

Caleb Sima: I explain it this way. Actually, I'm going to frame it around security conferences overall. [00:03:00] There are two, let's say, major, I don't know if they technically are the largest security conferences, but RSA and Black Hat, right? I feel those are the two biggest security conferences that are held.

And so RSA for sure is in the bucket of business deals getting done: very high level, maybe somewhat practitioner-focused business problems. That's what everything revolves around, versus Black Hat, which is supposed to be hacker, tinkerer, underground focused, the core of where cybersecurity comes from, right?

Which is, I'm really focused on reverse engineering, exploit focused, vulnerability focused. That's where Black Hat has generally been. Although, when you look at it, there's also DEF CON, which is also now more hacker and security focused. It's funny, because Black Hat used to be the [00:04:00] best of the best of vulnerability security research, right? It used to be the premier. If you spoke at Black Hat, it's because your topic was like, oh, you found an exploit in all of DNS or all of Cisco routers or whatever, and it was the premier event. But now Black Hat has become more like RSA, so it's more business focused.

It's more about vendors. It's more about getting deals done. And then DEF CON, which used to be way more scrappy, way more black leather and piercings hanging out in the hallways, which it still is a little bit, is now becoming the new Black Hat, and BSides is now the new DEF CON. It's funny to see the evolution of these things come together.

When was your first? I feel like my first Black Hat had to be in, gosh, maybe 1999 or 2000.

Ashish Rajan: I'm not going to age myself there, talking about what I was doing in [00:05:00] '99. But to add to what you were just saying, I met people who've been going for, I want to say, 20 years. Would that be around the same?

Caleb Sima: Yeah, it's 2024, so yeah, even longer for sure. I was there in 2000, 2001 for sure.

Ashish Rajan: This is probably a good segue to talk about some of the other things that developed as well. So we had the AI Summit. We also had the Entrepreneur's Summit or Founder's Summit, I can't remember the name of it.

Caleb Sima: Oh, so we had two.

No. So at Black Hat, we had the Investor and Innovator Summit, which is basically investors and founders. And then we had the CISO Summit. But there were a lot of other summits. There's the AI Summit, there's the cyber insurance summit, there's the briefings, there's the trainings. So there's a lot going on.

Ashish Rajan: But the reason I was talking about that is because this was the first year the Investor and Innovator Summit ran. I think this is probably the second time the AI Summit was running as well. I primarily was floating between [00:06:00] the AI Summit and the main Black Hat floor in general.

And it was really interesting. I would be curious: where were you floating around, and what were some of the AI conversations that came out of it?

Caleb Sima: Black Hat for me is a very different beast because I am on the board for both the CISO Summit and the Innovators and Investors Summit. I was, and especially the innovators and investors, this was our first year in launching that.

So I helped, with many others, to create that summit, and I was also the main moderator of that summit. Yeah. And also the CISO Summit. And the thing is, Black Hat this year decided to host both summits at the exact same time, and on complete opposite ends of the conference. The CISO Summit was at the Four Seasons,

and the founder investor summit was at Black Hat, all the way in the back corner. It's literally a 25-minute walk back and forth between these two. There was a [00:07:00] time where I had to run from the investor side, literally jog through the hotel to the Four Seasons, to get to a talk I needed to attend because I'd helped put it together: the AI security talk at the CISO Summit.

And so I was like running to the other side. It was very painful.

Ashish Rajan: The CISO Summit is primarily under the Chatham House Rule, so we can't specifically talk about what the answers were. But what are the themes that came out of the questions that we can share, so the audience can at least get a viewpoint on what the top-of-mind topics were? Obviously we don't have to share any answers, but I'm curious about the top-of-mind topics, without breaking the Chatham House Rule.

Caleb Sima: Yeah. I think we had all the panelists except for one, Matt from OpenAI, right? Yes. But basically, the panel was the CISOs from Anthropic, DeepMind, and OpenAI, and we had a couple of other people. The panel was put together with a focus around: what are the [00:08:00] practical things that you really need to worry about, right?

Every CISO is dealing with these things. They are dealing with OpenAI and Anthropic and DeepMind, right? There's no question. Every enterprise right now is dealing with that. And what are the concerns? What are the thoughts? What are the strategies? So I think that was really the focus: having those three there, the fact that they are CISOs themselves, and how they think about security and the challenges.

Ashish Rajan: Yeah, the panel is coming out, I think, after this episode goes out, so that would be interesting. Fortunately, as the timing worked out, we had the Anthropic CISO, the DeepMind CISO, and Kristy was there as well. We had a great chat about the use cases and everything else.

So I'll definitely put a link to the episode somewhere in the show notes so that people can watch it if they get to this one first. Something that was interesting for me in all the conversations, and sparked by what you just said, is that every CISO is dealing with OpenAI as well as Anthropic in some way, shape, or form. A lot of people that I spoke to, some of them CISOs, [00:09:00] some of them directors and VPs, were a bit hesitant to talk about what their AI platform was: whether they're using a foundational model, whether they are multimodal or not, all of that.

But they definitely felt concerned about the security. And where I'm going with this is that a lot of them felt they knew what AI was running, but they would not confirm or deny that it was OpenAI or Anthropic. And I'm like, what else could it be, unless you're creating your own, which technically requires hundreds of millions of dollars? None of these people work for companies that have hundreds of millions of dollars.

So I guess my question is, for people who are saying that, and when other CISOs hear this, my thinking goes: oh, are you using an open source version, or, option B, you have no idea?

Caleb Sima: I think it usually falls into option B. Here's the thing. The majority of CISOs spend 98 percent of their time dealing with a lot of stuff that ain't this, right?

And most CISOs do not have the capability to go and dig deep and [00:10:00] understand any of these things at any level of scale. Because of the lack of time, what they tend to do is go based off their circle of friends: what are they saying, what are they dealing with, and look to them for pointers and advice to help figure out what to go do.

So for example, they may know in the enterprise, by and large, we are using Google DeepMind, or we're using OpenAI. They generally at least have that. But not necessarily the details: are employees using it from a consumer perspective? Are we using it from an enterprise development application perspective? And at what level of detail? Generally, I think most CISOs won't have that level of detail. They know it's scattered everywhere. They know that it's happening, that usage is going on. They're trying to wrap their heads around what the usage is and where. And by the way, in most organizations, it's probably mixed, right?

There's going to be a [00:11:00] lot of OpenAI, a lot of Anthropic, a lot of Llama. There's probably a bunch of things everywhere in any decent-sized organization, but they don't know that detail. They're trying to figure out where it is and what it is that's being used right now.

Ashish Rajan: Actually, to your point about where and how: in any of the conversations you had, did it come out what the business use cases were for how AI is being used, and whether it's being used for security at all? I've got three examples that I thought were solid, but I'm curious as to what you came up with.

Caleb Sima: Yeah. I feel like the maturity of security teams using AI is getting a little bit better. The number one thing that I've been hearing about is looking at GitHub commits, code commits, identifying issues, and giving smart remediation recommendations, with some of the more advanced ones actually giving the recommendation in the code itself: here is the fix. So I think that [00:12:00] has been a very consistent one. The other one, which I actually predicted in my talk at BSides earlier this year and am now seeing a lot of pickup on, is a sort of revival of ChatOps: using LLMs to reach out to employees and have discussions about an alert, a detection, a vulnerability.

Hey, we identified this thing, we think this is high severity, what are your thoughts on it? These kinds of integrations are happening for sure. So those were the two that I think were the most consistent. There's a lot of discussion of SecOps and products like Dropzone.ai, and a lot of people talking about how this stuff gets automated, which obviously we talked to Edward about on our call. So there is a lot of focus on SecOps automation and AI, but I haven't talked to anyone who's implemented it and said, oh my God, this is the stuff. There's definitely a lot of chatter around it, though.
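As a minimal sketch of the ChatOps pattern described here, the idea is to turn a detection into the opening message of a conversation with the affected employee. The `Alert` structure and `build_outreach_message` helper below are hypothetical, purely for illustration; in a real deployment this string would seed an LLM-driven conversation through a chat platform bot rather than be sent verbatim.

```python
# Sketch of an LLM-driven ChatOps loop: turn a detection into a
# conversational opening message. All names here (Alert,
# build_outreach_message) are illustrative, not any real product's API.

from dataclasses import dataclass


@dataclass
class Alert:
    host: str
    finding: str
    severity: str


def build_outreach_message(alert: Alert) -> str:
    """Format an alert as the opening message an LLM bot would send."""
    return (
        f"Hey, we flagged '{alert.finding}' on {alert.host} "
        f"and rated it {alert.severity}. "
        "Does this look expected to you, or should we dig in?"
    )


msg = build_outreach_message(
    Alert("build-server-3", "outbound traffic to unknown IP", "high")
)
print(msg)
```

The LLM's job in this loop is conversational: interpreting the employee's reply ("yes, that was my load test") and deciding whether to close the alert or escalate.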

Ashish Rajan: On the other podcast we have, the Cloud Security Podcast, we had an [00:13:00] analyst from Forrester, and she was talking about how SecOps automation, as people think about AI, may not truly be what they imagine; it's not going to be the SOAR that people are hoping it would become.

So anyway, I won't spoil the conversation for people. I'll definitely let them check out the episode, and I'll probably link it somewhere as well. Oh yeah, I was going to say, if you want to talk

Caleb Sima: about that, then that's a whole different road, but yeah, I can.

Ashish Rajan: Yeah, that's right. We'd go in a very different direction, because then there was a conversation with SentinelOne on their Purple AI pieces and how they're evolving, I think they're calling it,

the MDR space. So anyway, coming back to the three themes I came across talking to people in the space: one was around, and this links to what you said about GitHub commits, a growing concern, and at the same time awareness, of the fact that GitHub came out with a report that over 40 percent of their new commits were AI generated. And people are going, okay, and this is actually being committed into the repository. This is not just generated; it's committed into the [00:14:00] repository. This is, by the way, one of the largest source code repositories on the internet saying they have over 40 percent.

And this was something very fascinating. Most of the people that I spoke to agreed that they do believe there is some form of Copilot being used by developers now.

Caleb Sima: Actually, my question on that: I didn't read the report, but did they say how they determined a commit was AI generated?

Ashish Rajan: No, I didn't read that either. The reason I know about it is because I think it was their quarterly report. It wasn't a GitHub AI report; it was a report by Microsoft on their GitHub business. But it's a good point. I can definitely look up how they determined it, because I imagine.

Caleb Sima: Copilot makes the most sense, for sure. But I guess the question is, there's a difference between AI writing the code and auto-committing it on behalf of AI, versus AI-assisted, right? I am using Copilot, I'm writing this thing, and I'm submitting it as a commit, but somehow they're saying, oh, the code that was written is [00:15:00] clearly AI written, right?

Ashish Rajan: Those are very different. Oh, 41 percent is AI generated autonomously, that's what I was looking for. And they are saying it's GitHub Copilot. It is saying it's generated, not committed, so I stand corrected. I just brought it up, and the numbers keep changing depending on how recent you are.

Now they're saying 50 percent. So okay, clearly.

Caleb Sima: Here's the thing. The end result of this is, we all knew that AI is going to increase the amount of code being generated. The amount of code being generated is going to triple, quadruple because of AI. That is a very safe bet to make, right?

Ashish Rajan: And I would also add that the concern people had there was: hey, as long as it is Copilot, which they have licensed and where they understand there's at least a relationship, that's one thing. It's more the Ollamas or open models that claim to help developers, and perhaps junior developers a lot more, who may not want to ask a senior developer,

"Hey, how do I add this particular [00:16:00] statement in Java?" They're going to ask the free version because they wouldn't want to pay for a service. And how comfortable would they be using Copilot, whether it's even encouraged or more hush-hush? That was the first concern, at least the first use case, that I came across.

There was another one, which went into the API gateway approach. The concern people had was more around the API gateway use case: hey, now that I know I am producing these APIs, how am I providing them, either to the rest of the organization or to a third party? The chatbots you would be referring to, they're all API calls as well.

And is someone actually checking and validating the input and the output coming back as well? I don't know if you have some thoughts on that, but it definitely seemed like, oh yeah, I can see why people are focusing on that. The security people are aware of it,

but there was only a very small percentage that actually spoke about it in that level of detail. So I felt, oh, they have gone a layer deeper than the people who are talking about Copilot.

Caleb Sima: And I think for the majority of use cases, Copilot is [00:17:00] there, there is no question. To me, if you're in an organization or you're a CISO, your engineers are going to be using it.

If they're not today, actually, let me rephrase that. There's no one in your organization, if it's a decent size, that's not using Copilot or some form of AI generation. It doesn't have to be Copilot specifically, but definitely AI-generated assistance helping you code. That is happening. It's way too valuable.

It's extraordinarily helpful across the board. It is a no-brainer. Everyone is going to be using it. Now, the question I think people are trying to figure out the answer to is: what does that mean? Going to the point of 4x more code coming from engineering now, does that mean I have a four-times-bigger security problem, right?

And I don't think that's true. But I do think there are some aspects of validity to it, and people are trying to figure out how to wrap their heads around it.

Ashish Rajan: I tend to disagree, and I'll put a caveat on it as well. Most of the security industry is built on the [00:18:00] fact that people don't do their job.

If they actually did their job the right way, we probably wouldn't need security. They'd encrypt when they need to, they'd reset their passwords, they'd have MFA, all of that.

Caleb Sima: What's the definition of the right way? What's the definition of right?

Ashish Rajan: The right way, which is the ideal way, I guess, at least

from a security person's perspective. Yeah, the right way from a security person's perspective: we should not have a job if people would actually just do their thing. I think I'm coming more from the perspective that people talk about, hey, security should work towards automating its own job away.

I guess we shouldn't really have a job in the first place if they were just doing their thing.

Caleb Sima: I prefer "security should be invisible" versus "security should not exist." Think of Apple. That's my ideal way of looking at things. Yeah, it's the way your iPhone works, what you do on your iPhone.

Yeah. On an iPhone, I would say, by and large, security is not something you think about when you do things on your phone. It's just not. Yet security is very much in an iPhone, at [00:19:00] levels deeper than 99 percent of most products, right? But you don't notice it. And I think that's the most ideal goal.

You don't notice it; you don't think about security when you use your iPhone that much, but it's freaking there. It's invisible, but it's not gone.

Ashish Rajan: Yeah. I'm going to turn into an Apple fanboy here, and clearly we're going to lose all the Android users at this point in time. But the fact that they made Face ID a good thing? Can you imagine how hard it was for us to even make people sign up for MFA?

And they went biometric on people. And I'm like, holy shit, they accomplished something that many people in security have failed at for years. The reason I was pushing back on increased workload was because, ideally, from a security person's perspective, you would think that if everything were followed, we should not technically have a job.

The challenges at the moment in most organizations, on average, are primarily driven by the fact that if you look at AppSec people, they have a lot of problems, because the [00:20:00] tools at the moment give you a lot of false positives. Someone has to go through them and identify: hey, out of these 20,000 alerts, which ones should I actually be investigating, sending over to the developer, and working with them on?

Then the other part is the supply chain as well. It's not just that we are the only ones producing code; we're using something else to produce the code for us as well. Now, with those two things in the mix, and more new code being generated, I feel, at the moment, and it's not the fault of the developers, by the way, it's definitely a fault on our part as security, that we are not able to keep up with this.

And this was seen when the cloud came in, and it's being seen now that AI is here. The whole reason we have this podcast is because there was actually a need for a lot of people to stop talking about Skynet and start thinking about how a CISO can actually address this problem. So I tend to lean on the view that it is definitely going to be a huge amount of workload initially, which is where your AI-SPM, DLP, and SecOps [00:21:00] automation are going to spring up. They definitely have an opportunity, and they clearly have noticed it.

On the technology side, I think there are definitely some answers coming, but on the people and process side, I think teams will be, let's just say, stretched thin. I don't know if you agree with that.

Caleb Sima: Yeah. Oh, by the way, the one thing I would say I noticed is, I think people are tired of AI.

Did you notice that at the conference?

Ashish Rajan: Yeah. Yeah. Yeah. I,

Caleb Sima: They're tired of it. Yeah. They don't want to. It's only taken a year and a half.

Ashish Rajan: Anything apart from AI. Let's talk about anything apart from AI. Is that what you got as well? Yeah.

Caleb Sima: There's a lot of "I don't want to talk about AI anymore."

While we say this on the AI Cybersecurity Podcast.

Ashish Rajan: It's rightly the theme, and I can understand why, because if people go back to some of our previous episodes, they'll realize that some of the conversations we had about security would land on the fact that we didn't really have a lot of good use cases for how AI was being used.

Even the vendors were trying to figure it out: what is the solution, what are we trying to solve here? [00:22:00] And even right now, there is no vendor talk, and I don't mean vendor in the context of a cybersecurity vendor; I mean there is no use case where an actual customer is talking about an AI use case.

Every product that you look at right now, whether it's Adobe, think of any product, even Notion, everything has an AI assistant now. What is it doing? How is it being used? I have no freaking idea, but at least now I know that everyone is talking about it. If you look at the AI use cases on the internet, it's primarily the marketing teams talking about them, but I didn't hear a Capital One or, I don't know, I'm just going to name a few companies, a Robinhood or a Netflix or any of them.

What are you guys doing with AI? Everyone's spending millions. What are they doing?

Caleb Sima: The thing is, we haven't hit a defining moment where AI is not just a side feature, right? Notion is using AI as a side feature. All of these people are plugging in these things as side features, [00:23:00] versus: where is it that AI is completely changing the game, where AI is the primary core?

I'll give you a great example: Perplexity. To me, that's a great example of something that's not a side feature; the entire core focus of the thing is to replace Google. And by the way, for me, it does. I feel like for 98 percent of my stuff, I never use Google. I always use Perplexity. I never use Google anymore.

Ashish Rajan: Wait, so it gives you definite answers, but then is there a verification process? Oh, yeah.

Caleb Sima: Yeah, it's so much better. You should use it, right? Everything that I used Google for, and actually everything that I even used to use ChatGPT and Claude for in terms of curiosity or things I want to know, I just use Perplexity, because Perplexity is integrated with search.

So now, all of a sudden, if I want to say, hey, I really want to see a great action movie like Spider-Man, give me something from the past year, Perplexity will give me an actual [00:24:00] answer, and it will show me: here it is, here are the sources.

And I'm like, okay, it's great that these blog guys are writing about this movie, but tell me what Reddit says about it. Oh, here's everything Reddit said about this movie, and actually this movie came out above that movie based on Reddit comments, right? And so, oh, now I don't have to do Google searches anymore.

I digress, but this is an example of what I think we haven't seen yet, but hopefully will see in the next year: AI at the core. Things that are truly focused on what AI brings as a core solution or problem solver, not some side feature added to an existing product or problem set.

And so, Robinhood may add something that's AI based, but it's going to be a plugin to its primary use case, versus if someone came out with a true AI stock-trading application at its core. What would that even look like? That's a whole different thing. I know that sounds scary, by [00:25:00] the way.

I would not recommend someone do that, but I think that's the reason why.

Ashish Rajan: How do you define LLM firewalls and do they integrate with access management?

Caleb Sima: Yes and no. It's an interesting problem, right? At its most simple level, the problem says: I have a data store, and I have this brain. The data gets passed to the brain, which rephrases it and regurgitates it in some way.

Or you talk to the brain, it rephrases what you asked into "this is the kind of data that I need," and then passes that to the data store. The problem is, in the world of today, data rights are pretty simplistic, but very well defined. So if I have a document that's meant for engineering, the group engineering has access to the document, and finance has access to its document.

And that is the way it works: it goes into a database that has these permissions, or into some file store that has these permissions. And when a user tries to access that document, if you're in engineering, you can access the engineering document, right? It's really simple.

[00:26:00] I think in LLM land, though, it becomes very different, in that people are now feeding these things into vector databases, and vector databases are not as granular. So when you say, okay, a finance document gets shoved in here and an engineering document gets shoved in here, what if an engineer asks something about, let's say, the code for the finance application?

So the engineer goes, oh hey, I need to understand the current structure of finance app 101 in terms of how it's being built. And instead, because it's a vector database and there are no permissions, it matches "finance structure 101" and returns the enterprise's corporate finance answer about the structure of their corporation.

And it's like, oh, that's not what should be going to the engineer. Now there's data leakage, a data rights problem. So how do you prevent that from happening, both at the data source itself, at the vector database, and also [00:27:00] maliciously, from individuals trying to get access to the finance documents even though they're engineers?

That's the prompt injection problem. So there is this problem space that says: how do I restrict it at the data source, and how do I restrict it at the person who's asking? I think that's the fundamental issue. And how do you solve that? LLM firewalls happen to be one way, and today they are targeted at the user asking the LLM.

Basically saying: hey, I want to get access to the finance docs, even though I'm an engineer. An LLM firewall should know, oh, I know what your rights are through SSO because I'm integrated into it. I also know that you are asking a question that looks like a prompt injection, and therefore I may block you

to keep that from being passed along to the orchestration layer. But firewalls should also sit between the LLM itself and the data store. So when the information gets [00:28:00] retrieved from the data store, can you say: oh, this vector, this block, has something to do with finance,

And the person who's asking is an engineer, so are we sure we want to pass a finance doc to an engineer? We should probably block that from happening.
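The filter Caleb describes, dropping retrieved chunks the caller isn't entitled to before they ever reach the prompt context, can be sketched minimally. This is a hypothetical illustration, not any vendor's API: the `Chunk` type, the `department` tag, and the group-to-department mapping are all assumptions, standing in for metadata applied at ingestion time and entitlements that would really come from SSO group claims.

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str
    department: str  # tag applied at ingestion time ("finance", "engineering", ...)
    score: float = 0.0

def allowed_departments(user_groups: set[str]) -> set[str]:
    # In practice this mapping would come from SSO / IdP group claims.
    mapping = {
        "eng": {"engineering"},
        "fin": {"finance"},
        "exec": {"finance", "engineering"},
    }
    allowed: set[str] = set()
    for group in user_groups:
        allowed |= mapping.get(group, set())
    return allowed

def filtered_retrieve(results: list[Chunk], user_groups: set[str]) -> list[Chunk]:
    # Post-retrieval filter: drop any chunk the caller's groups don't entitle
    # them to, so a finance document never lands in an engineer's context.
    allowed = allowed_departments(user_groups)
    return [c for c in results if c.department in allowed]

if __name__ == "__main__":
    hits = [
        Chunk("corporate finance org structure", "finance", 0.91),
        Chunk("finance app 101 architecture", "engineering", 0.88),
    ]
    for chunk in filtered_retrieve(hits, {"eng"}):
        print(chunk.text)  # only the engineering-tagged chunk survives
```

Real vector databases that support metadata filtering can push this same predicate into the query itself, which is stronger than filtering after retrieval, but the access decision is the same either way.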

Ashish Rajan: Maybe that's a good segue into the actual challenge most people were stuck at, which was the whole data security part. People had classifications that were never applied, and now people are lost. There's a question of data discovery, identifying where all the potentially sensitive data could be sprawling, and then there's a question of whether we're even classifying it correctly.

There's an interesting point that someone mentioned. I won't take their name, but they were part of the state of Nevada. And it was an interesting conversation, more around the fact that, hey, government has data from 50, 60, 70 years ago.

Imagine if someone actually had a dedicated job to just go [00:29:00] through 60, 70 years' worth of data to identify what should be classified as public versus confidential versus private. That's a lot of money, time and resource being spent, potentially on a data analyst.

Caleb Sima: Yeah. This is a super, super solvable problem. Actually,

AI is really good at this.

Ashish Rajan: But are we trying to not do the AI part with this? This is the pre-AI part, because the pre-AI part, yeah,

Caleb Sima: that's right.
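As a hedged sketch of what "AI is really good at this" might look like in practice: batch-classifying legacy documents into public, confidential, or private with an LLM, with a fallback to human review when the model's answer doesn't match a known label. The prompt wording, label set, and `classify` helper here are all illustrative assumptions; `model` is any callable that sends a prompt to a real LLM API, stubbed out below.

```python
LABELS = ("public", "confidential", "private")

def build_prompt(doc_text: str) -> str:
    # Truncate the document so very old scanned records don't blow the context window.
    return (
        "Classify the following government record as exactly one of: "
        f"{', '.join(LABELS)}.\nRespond with the label only.\n---\n{doc_text[:4000]}"
    )

def classify(doc_text: str, model) -> str:
    # `model` is any callable prompt -> str, e.g. a thin wrapper around an LLM API.
    answer = model(build_prompt(doc_text)).strip().lower()
    # Anything outside the expected labels goes to a human instead of being trusted.
    return answer if answer in LABELS else "needs_human_review"

if __name__ == "__main__":
    stub = lambda prompt: "confidential"  # stand-in for a real LLM call
    print(classify("1987 internal audit memo regarding payroll...", stub))
```

The point of the loop isn't perfect accuracy; it's that triaging 60 years of records down to the ambiguous cases turns a multi-year data analyst job into a review queue.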

Ashish Rajan: Yeah, the pre-AI part. But I guess the reason I'm talking about government officials as well:

There was a whole conversation about the US elections as well, with Jen Easterly. She was talking about how critical infrastructure is ready for most of the threats, but this is an election year in the US. I don't know your thoughts on that, because obviously you have the whole deepfake conversation we were just having before this as well.

Caleb Sima: Yeah. Deepfakes were a big trend. Clearly it's getting better and better, which we knew. But before we started this thing, I was sharing with Ashish that there are now open source projects where you can take one still photo of [00:30:00] somebody and it will auto-recreate a really great deepfake for puppet-mastering on webcams.

And it's just so good. And there are even articles, cause you saw on Android what the Pixel has, the Magic Editor thing. And that thing is phenomenal, man. It's so good. Flux came out. Do you know about Flux? I knew the Pixel, I didn't know what Flux was.

I think it's the founders of Stable Diffusion who went and started their own thing, and they launched it maybe a week or two ago, called Flux, which is another image generation model. And again, it's the next leap in image generation. It is so good.

The realism and the capability of these things now. People used to have to type in detailed prompts for the things they need; people are finding out that with Flux, actually, you don't want to do that. You actually just want to give it generic things.[00:31:00]

And it just does an even better job. I just want to see Caleb in a black t-shirt playing baseball. You don't have to spell anything out, and the thing will go. And then you can say, okay, I just want him swinging, and it will keep the character exactly the same, and it looks like a digital photo, right?

Like, it's so good. These things are getting so much better at generation. It's just unbelievable.

Ashish Rajan: I guess maybe it's also worth saying, perhaps the reason it feels like it's going so fast is also because of social media. Not because social media is the main audience, but they are the ones driving the popularity. So if you think about the technology leaps that are happening:

it's image generation and video creation. Those are pretty much the two big areas where the leaps are happening. We're not talking about the 40 percent of code being created with GitHub Copilot. I feel like the reason these two continue to get attention is social media, because that area of our world depends on images and videos.

We've always trusted them; we've all been trained on them for 10-plus years. And now, to go back to the election thing, if you see an [00:32:00] image: I don't need a Russia email thing to happen this year. I could literally just have some presidential candidate doing something, which is, to your point, swinging a baseball bat in the jersey of a team they don't believe in, or a Democrat in a Republican jersey.

Caleb Sima: In the Flux subreddit, one of the big topics is people using it specifically to recreate CCTV screenshots. And it's really good. Like, Donald Trump buying milk at a 7-Eleven on a CCTV camera. They show a random person walking an alligator down the aisle of a Walmart, right?

And it just looks real. You can't tell. So there is now a model being made for Flux that is specifically focused just on CCTV generation.

Ashish Rajan: Obviously we primarily focus on leaders and CISOs, and some of those CISOs and leaders also have to deal with staff in physical locations, like bank staff, or let's just say T-[00:33:00]Mobile for that example. They have staff actually standing there, being captured on the imagery coming off the cameras.

Is there a concern there, from a physical security perspective? There's a whole privacy question too. With CCTV, the first thing that came to my mind was, hey, what does that mean for courts, where CCTV footage goes in as evidence, if this thing becomes really more realistic? I believe...

At least I think I've seen the same videos as you have. They still look a bit animated, as in, you can tell.

Caleb Sima: No, you cannot tell. Really? You cannot tell. No.

Ashish Rajan: Also, if you were not in that subreddit feed and had not looked at that video, you would not be able to tell.

Caleb Sima: You cannot tell.

It looks exactly like what you would see in a real photo. Here's the other question: not only can you not tell visually, you just cannot tell a real photo from an AI-generated photo anymore. It depends on who created it and the way they built the model, but just to take the [00:34:00] CCTV ones: I would say 90 percent of the generated CCTV images I saw people posting, you'd click on them and look at them in high detail.

By the way, these are generated at high resolution, but it still has that CCTV quality, right? Cause it has that quality of the thing you look at. It looks exactly like a real photo taken from one of those cameras. There's just no difference. And here's the thing: even if you can digitally tell it's fake, which you can, right?

That's clear. You can do that. It doesn't matter.

And it doesn't matter if you digitally look at it and go, oh, this was generated by AI. The damage is done, the spread already happened. It's too late.

Ashish Rajan: Yeah, it's too late. And I think maybe that's where my concern is coming from as well, because compared to old-school defacing of a website, this is a much worse scenario of defacing a brand, which, by the way, primarily happens on social media.

Yeah. But I guess it goes back to the content moderation the platforms have to start doing. I imagine they're the ones who are probably working on this, because look at the Telegram case that happened recently, where the founder was arrested for not moderating the content being [00:35:00] created. Obviously I know I'm digressing quite a bit, but coming back to Black Hat.

That's for another episode. So far we've spoken about the hot topics, the AI Summit. I should probably do an honorary mention for the AI Village as well; that was at DEF CON, and there were definitely some interesting talks over there, so I'll definitely check that out. We spoke about the use cases you saw, the GitHub commit part and the smart remediation.

We also spoke about the three use cases that I saw with the secure gateway. We spoke about the LLM firewall, the data challenge. Offensive security. That's a big one.

Caleb Sima: AI offense, actually.

Ashish Rajan: It'd be interesting, when you say offensive security, are we saying pentest automation, or are we saying...

Caleb Sima: Pentest automation, yes. That is a big trend right now. There's pentest... but

Ashish Rajan: In the sense that you don't have to validate it? Or, actually, because to your point, there are already known patterns for SSRF, there are already known patterns for it.

Caleb Sima: Yeah. If you look at black box DAST for web, these things are pretty standard.

You stick a bunch of crap in parameters and you determine [00:36:00] whether it returns an error. And so, hey, AI can do this, right? We can navigate and do things like DAST in a much better way. I think you're seeing a lot of focus, a lot of papers being written, a lot of startups coming around that are all claiming they can now automate 98 percent of a pentester's capabilities on web applications.

And wow. Yeah. A lot of that is the current focus and trend this year, which, by the way, I am very hesitant on. I call a lot of bunk on some of this stuff, but that is the trend.

Ashish Rajan: That's the whole attack surface management market that's being created from all of that anyway. So to your point, there's definitely that use case as well.

I think those are most of the big takeaways. We obviously have the episode with the panel from the AI Summit, and we'll definitely try to get Matt from OpenAI on for an episode as well.

Caleb Sima: We had a presentation at the CISO summit on how to generate your own deep fakes.

Ashish Rajan: Did someone create a deep fake?

Caleb Sima: Yeah, John Whaley gave a great presentation on how to generate deepfakes at the [00:37:00] CISO Summit this year. Well-regarded presentation, super fun. Very scary. Yeah, very scary.

Ashish Rajan: It's funny, cause I think the AI Summit had really interesting talks. They were more around NVIDIA executives giving their insights.

So that's the hardware providers. And there was a fireside chat on building a cyber-resilient nation in the age of AI, which is the government data talk again. And then there was AI and science for security, and there was therapy for security leaders. And I'm going, oh, that's a good one, actually. I bet that was well attended.

One more talk worth mentioning, because a lot of them are panels rather than talks; they have a lot of opinions, and people are talking about trust and everything. One was the last talk, by Matt Knight from OpenAI. I think it was the closing keynote, where he was talking about his challenges and his strategic approach to this whole problem. I think that was definitely well attended as well. But again, there were definitely some technical breakout sessions, though [00:38:00] they were more focused on, hey, how to overcome alert fatigue, and leveraging RAG for a proactive cybersecurity posture.

And voice cloning was another one, and the threats of AI.

Caleb Sima: What's a little bit sad here is that we are talking about what happened at Black Hat, and we have not talked about, and I didn't even attend, any of the briefings. If you really want to talk about Black Hat: I think probably only 5 percent of the attendees even knew about the summits or attended them. We have not mentioned the briefings, which is what people go there for. Did you attend briefings?

Cause I didn't attend any briefings.

So what were the AI things in the briefings? I have no idea. So there are people listening who are going, but there's a whole bunch of these things in the briefings that you didn't talk about at all.

Ashish Rajan: Because one of the things that also came out of the conversation was the spike of investment from organizations: hey, here's 10 million, 20 million, do something in AI. When there is no fruitful result for a year, [00:39:00] next year that money pool will disappear, or at least shrink considerably. It may not completely go away, because now it's just too big to fail, but it will shrink considerably. For today:

give me a use case where it makes sense, instead of me just spending another 10 million on this and hiring every person out there on the internet. Because there are now AI engineer roles, AI security roles, and I'm like, what are these people doing in the actual job? I don't know if I shared that with you.

There was, I think, an AI security engineer role. And I'm going, what does an AI security engineer do? Clearly they either have an AI product internally that this person would be looking at. So I went through the description. Again, it said you should have years of experience and all of that.

I'm like, no one has years of experience in this shit. But you need to have six years of LLM experience in order to apply for this job.

Yeah, that would be something special.

Caleb Sima: There are definitely a lot of good people I have been talking to. It takes a certain skillset to really understand LLMs: fine-tuning, building, putting them together, applying them in workflows, [00:40:00] knowing the nuances.

And there is a very small handful, a pocketful, of these people who are security engineers, right? People who are in cybersecurity, who are actively invested in this and have a lot of knowledge around this stuff. There are not a lot of them, but there are definitely some really strong people out there. I've had a lot of conversations with them just in the past couple of weeks, which have been really fun, and they are what I would consider an AI security engineer to be.

Like people who are super knowledgeable.

Ashish Rajan: What would be some of the traits they have that stood out for you? Cause I imagine it's the way they think about a problem; I guess that would be the differentiator. Is that what you found?

Caleb Sima: Yeah. Like, they know what an LLM is good at.

How to fit it to the things they need, how to fine-tune the models and run them in the right way, the intricacies and [00:41:00] nuances of accuracy and hallucinations. They really get it. They know how to write prompts in the most effective way. They just understand this, but they're security people, right?

These are security engineers who are trying to apply LLMs to their day-to-day jobs, and the way they're doing it is really phenomenal and unique. It's really fascinating talking to these people. And what's interesting is that, except for one, most of the people I've talked to who are really good at this are what you would consider to be junior, right?

These are people who are five to eight years into their security career total, right? They are going from junior to maybe mid-level engineer, but they are definitely not senior IC-level types of staff. These are more junior people who have phenomenal methods and thought processes around this stuff.

[00:42:00] So like when you say AI security engineer, my head immediately went to these people. They are like, they know security, they get it, and they know really well how to use LLMs very effectively.

Ashish Rajan: Interesting, because I just found the role, and it seems like they're definitely hiring quite a few people in this space.

They're looking for an SVP, but it was an AI software engineer job. They did ask for software engineering experience, and they called out that general awareness and understanding of GenAI tools are very helpful for the role of the SVP, who is the Generative AI Risk and Control Senior Control Officer.

Proven background auditing artificial intelligence and large model implementations or systems. Background in application development, including AI or GenAI-specific applications, preferred. Yeah, that's just all cloud and risk and all of that, but you basically have to help define the enterprise-wide GenAI risk and control framework.

Caleb Sima: What's interesting is that the people I'm talking to, who I would consider to be really good at this, all have jobs [00:43:00] in various different areas that have nothing to do with AI. These are like, oh, I'm a red teamer, I'm a pentester, I'm a detection analyst. There's one who's a privacy engineer.

These are not people working on LLMs or AI functions. These are people who, just in their day-to-day jobs, have a passion for figuring out how to apply this to their job. And they are super smart, very smart about this stuff.

Ashish Rajan: yeah. Wow.

Cause it's funny, because you and I recorded an episode with two people from the red team side as well, Daniel Miessler and Rez0, who again, to your point, have nothing to do with it directly. I believe Rez0 works on the security team at a cybersecurity vendor company, but again, nothing to do with AI, and he's clearly spending a lot of time researching and working in this space as well.

Caleb Sima: What's interesting is that the person who writes the requirements for the job spec really [00:44:00] should be more focused on education and skillset versus experience. I don't know why they would put experience working on AI technology when that's maybe eight months' worth of time.

Ashish Rajan: You'll find this hilarious. They also have a role, Director of Generative AI Innovation Lab, and there are over a hundred applicants who applied for it. So there are a hundred-plus people who believe they actually have the capability to do all of this.

Caleb Sima: I would agree, because they're asking for everything.

So you might as well apply because it's not like anyone in that hundred will match it. So you have just as good of a chance as every one of those hundred that applied to it.

Ashish Rajan: So yeah, cause the requirement is knowledge in AI, SDLC and MLOps platforms. Yeah. We have the pentest automation; that's definitely a field that's coming up.

Then there is the part where people are now looking to hire AI professionals in their organization, whether it's MLOps or AIOps, and there's a [00:45:00] whole episode from the first season we did on what these are and how they're different. And the Black Hat briefings didn't have a lot of AI topics in them; I guess, going back to what we were saying earlier, maybe that's because it's the vendors who are talking about it, rather than the actual people creating these AI projects internally.

Caleb Sima: Black Hat briefings are supposed to be very vulnerability-research focused, right? Threat focused. And they also had an AI Summit; how much did they vet those presentations? But the problem is then, as you were saying, the AI Summit was very compliance-driven versus technically driven, so

Ashish Rajan: they had tech breakdowns, but the tech breakdowns were not the kind you're talking about at the CISO Summit, which is, like, deepfakes, let me make a deepfake of Ashish kind of thing.

Caleb Sima: That's good feedback for the AI Summit, though. I think Black Hat needs some feedback around, hey, this was way too high level. We need to get some lower-level stuff in there.

Ashish Rajan: Yes. And I don't know if it's because the attendees were supposed to be like that, cause most of the attendees I spoke to were [00:46:00] technical.

So they were just basically walking around having hallway chats, hence the reason I ended up in all hallway chats as well. But I think those are the final thoughts for Black Hat. Thank you so much for tuning in, everyone.

We'll see you in the next episode. Thank you so much for listening to that episode of AI Cybersecurity podcast.

If you are wondering why we aren't covering all topics: the field is evolving too quickly, so we may not even know some of the topics we have not covered. If you know of a topic we should cover on the AI Cybersecurity Podcast, or someone we should bring on as a guest, definitely email us at info@cloudsecuritypodcast.tv. Which reminds me, we have a sister podcast called Cloud Security Podcast, where we talk about everything cloud security with leaders, similar to the AI cybersecurity conversations, focused specifically on the public cloud environment. If you find that helpful, definitely check out www.cloudsecuritypodcast.tv. Otherwise, I look forward to seeing you on the next episode of the AI Cybersecurity Podcast. Have a great one. Peace.
