In this episode, to kick off 2025, we dive deep into AI and cybersecurity predictions for 2025, exploring the opportunities, challenges, and trends shaping the future of the industry. Our hosts, Ashish Rajan and Caleb Sima, sat down to discuss the evolution of SOC automation and its real-world impact on cybersecurity, the practical use cases for AI-enhanced security tools in organizations, why data security might be the real winner in 2025, the potential of agentic AI and its role in transforming security operations, and predictions for AI-powered startups and their production-ready innovations in 2025.
Questions asked:
00:00 Introduction
06:32 Current AI Innovation in Cybersecurity
21:57 AI Security Predictions for 2025
25:02 Data Security and AI in 2025
30:56 The rise of Agentic AI
35:40 Planning for AI Skills in the team
42:53 What to ditch from 2024?
48:00 AI Making Security Predictions for 2025
Ashish Rajan: [00:00:00] So you know how sometimes you come back from holidays and you have over 300 emails that you haven't read? People are running AI over it to give me a summary of which ones I should look at and which ones I should not.
Caleb Sima: Some real talk here. I think it does not do a great job. People who are using Gemini for this should be double checking their results.
Really Q2, Q3 of next year. My expectation is we should see an influx of announcements and releases of these new startups that are AI at their core, building substantial improvements in automation using AI.
Ashish Rajan: If you're wondering what AI would look like in 2025, this is the episode. So Caleb and I went through what are the AI security spaces you would see evolve in Q2, Q3 of 2025. Yes, it's going to be as quick as Q2 and Q3, at least for some of those. Overall, we spoke about the predictions for 2025, what you can expect from the cybersecurity industry, and what you can expect as CISOs or leaders in the industry, what you can be doing or should be doing as you look into 2025 for [00:01:00] AI and what the role is.
Third, what are some of the technologies in AI, or terms and technologies in AI, you should get rid of? And predictions for 2025. A lot more of that. If you have been enjoying episodes of AI Cybersecurity Podcast and listening on Apple or Spotify, we really appreciate it if you can give us a 5-star rating on your favorite podcast platform. But if you're watching this on YouTube or LinkedIn, definitely give us a subscribe or follow.
It definitely means a lot when you hit that subscribe button. It definitely helps the algorithm and lets us know that you are enjoying content like this and want us to create more of it.
And we also appreciate all the support you have been giving us for the last two years. I look forward to doing a lot more in 2025. So, Happy Holidays from myself and the team here at AI Cybersecurity Podcast. I just wanna say I look forward to a lot more interesting conversations as AI evolves on the AI Cybersecurity Podcast, and if you have any feedback or comments, we definitely look forward to addressing a lot of those topics that have been coming across from you guys in 2025. I hope you have a great holiday season. Enjoy this episode of predictions for AI Security in 2025, and I will talk to you in 2025.
So in this episode, we're talking about 2025, [00:02:00] looking into 2025.
So I was asking Caleb if there is anything that can be revolutionary other than AI, as far as the magic crystal ball can see. I think you had some thoughts on that.
Caleb Sima: Yeah. Obviously there's lots of things that are valuable that are not AI related. However, I guess there is a strong belief that any tech company today that is being created will be using AI for something or another, right?
Like it may not be AI-core, right? The entire business may not be wrapped around AI. However, I just find it very difficult to believe that any company today won't be using AI for something. Now, listen, either you're building things for AI, or you could be a hardware company building chips, which I'm not sure how much Gen AI you're going to be using for that.
But I just feel as a software tech company, it's very difficult. To go back to your original question about White Rabbit and our companies: yes, a lot of value, just because Gen [00:03:00] AI allows us to do things that were just unimaginable prior, right? And so there are some really amazing things, like in our first portfolio company, that we are doing with GenAI that just were impossible to do before.
And it can change things. My stuff is very cybersecurity focused obviously, but for example, we're focused in a space that is very large, with a very difficult job to do, that has been primarily manual and process oriented. And now with Gen AI, we are doing things in automated fashions that would normally take days or weeks, and we're doing it in a minute.
And this is not fake stuff, man. This is not, "Oh, that's just a prototype." No, this stuff works, and it does a good job, which is super surprising to some extent.
Ashish Rajan: I'm with you. I think I'm [00:04:00] smiling also because, if you go back to where we started the conversation, and because this is the end of the year episode, we are looking forward and looking back at the same time as well.
I think most of 2024, the first season, was about us demystifying what AI is, and I guess some of the nomenclature around that. Then we started talking to other people about what they are doing with AI and cybersecurity. And obviously it's worthwhile calling out, to what you said, that cybersecurity is just a very small subset of what AI is doing in a very broad space. And as someone who's in tech and keeps listening to AI news on, hey, now we can do this,
now there's this many tokens, all of that, it almost always feels exciting. So it's also good to hear that you're seeing, because you're focused primarily on cybersecurity, or at least looking at cybersecurity companies, that there are actually things in there being done, things that used to take us weeks, months, or sometimes a year, being done in minutes. One thing that came to mind, and I want to give [00:05:00] credit: we've got a couple of friends visiting from Australia in the UK, and now OpenAI has that video mode on. I was taking them around London, and some of the buildings are like, man, from the 1700s. I have no idea what those buildings are.
And I think my friend was asking me a question, and I'm like, actually, I don't know, man, I think I'm going to have to Google this. He said, why don't you just point ChatGPT at it? And I did. By the way, the building is quite far; I was just pointing at it. For people who know London, it's in Bank, where there are a lot of big buildings with a lot of statues on top.
He was just curious what the statue is for, and I was like, I don't know what the statue is for. Pointed ChatGPT at it, and it knew exactly what that thing was, when it was put up, and why it is significant. I'm going, wait, so that could just be my tour guide. And this is just non-cybersecurity usage. It hasn't even creeped into our daily use.
But the reason I bring that example is the question that I had, my first instinct was, I need to Google that. That means that's just
Caleb Sima: because you're old school.
Ashish Rajan: I'm not up with the times yet, [00:06:00] Caleb.
Caleb Sima: That's like what, old adults from 2020 used to do. I don't know. I'm like,
Ashish Rajan: Sorry, I still use Word, PowerPoint, Excel.
I think the reason I use that example is because it was also a humbling moment to think that, as much as there is excitement about AI, how many people have started shifting their mindset around: can this particular task that takes me months and years, can I use AI for this? And how easy is it going to be?
Actually, does any example come to mind in cybersecurity that is top of mind for you for this?
Caleb Sima: Around great uses of AI.
Ashish Rajan: Yeah, like I think where you saw that it used to take years. Because for a lot of the cybersecurity challenges, we normally stop at the point where I've created a ticket.
Someone needs to go and look at it, like someone in development needs to look at it. Can that problem be AI'd? I'm going to throw a word in there. Is that going to be challenged?
Caleb Sima: I think there's lots of people who are doing great things in security with AI, but a lot of these things, at least in my current view, aren't yet what I would call production. It's [00:07:00] been in labs and startup companies and seed companies; lots of work being done.
So I think I would like to focus on things that are actually doing things that have product out that is being installed, that is working at customers, I feel like oddly, it's very small. I feel the only things that are security related around that are these SOC automation companies. Those feel like because those are clearly doing a lot of things that would take a lot of time.
And they're automating these things fairly quickly. I think what's coming fast on its heels is more AI offensive stuff, specifically around automating more of the mass intelligence scraping and attack surface understanding of organizations: being able to gather and go through tons of attack surface data from a company or a network or an application, and narrow it down to more formidable targets to focus on.
I feel like that's [00:08:00] really out there making some big differences.
Ashish Rajan: I've been advising a couple of companies in the AI space, and I think one of them was interesting too. We called that space "LLM firewall" some time ago, and it's being used by some companies. But the example that comes to mind is also that there's a narrow focus on the data industry, or the data analytics industry if you want to call it that: the data scientists, the data analysts, the people who create the MLOps pipeline.
I'm seeing some people focus narrowly on that. And perhaps the reason for that is because a lot of CISOs or people in leadership position are worried about how data is being handled. Now that there is more awareness of the fact that we need to be more careful about data and perhaps some production data may be used or we don't know what kind of models are being used.
So there's a lot of big unknowns. So the supply chain space of AI seems to be, at least the company that I've been working with, they definitely have customers who are signed up, banks and stuff are signed up with them. They're definitely [00:09:00] and what does it,
Caleb Sima: what does it do?
Ashish Rajan: I would say supply chain security is how I would describe it.
Essentially the point is, what data analysts use are Jupyter notebooks. So for a Jupyter notebook, what is the data input and output in terms of sanitization, whether it's PII or not; there's detection built into it. And then the other one is what kind of model is being used by the data analyst team.
So that's why I say supply chain, as a catch-all word.
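As a rough sketch of the notebook-level detection Ashish describes, the idea is scanning a Jupyter notebook's cell sources and outputs for PII-like strings. This is a minimal illustration, not any specific vendor's product; the patterns and the notebook-structure handling are simplified assumptions:

```python
import json
import re

# Illustrative PII patterns only; a real product would use far more robust detection.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn_like": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_notebook(notebook_json: str) -> list:
    """Scan a Jupyter notebook's cell sources and text outputs for PII-like strings."""
    nb = json.loads(notebook_json)
    findings = []
    for idx, cell in enumerate(nb.get("cells", [])):
        # Cell source may be a list of lines or a single string.
        source = cell.get("source", [])
        text = "".join(source) if isinstance(source, list) else source
        # Text-bearing outputs (e.g. stream output) are also worth scanning;
        # rich outputs (images, HTML) are ignored in this sketch.
        for out in cell.get("outputs", []):
            chunk = out.get("text", "")
            text += "".join(chunk) if isinstance(chunk, list) else chunk
        for kind, pattern in PII_PATTERNS.items():
            for match in pattern.findall(text):
                findings.append({"cell": idx, "type": kind, "value": match})
    return findings
```

A pipeline like this could flag notebooks before they leave the data team's environment, which is the visibility gap being discussed here.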
Caleb Sima: It sounds more like a development environment to me. More like a,
Ashish Rajan: The reason I call it separate is because, in my mind, the copilot example that people use for development is your SDK-connected one; it's a developer who's building the application.
I'm talking about the entire supply chain, rather than the fact that I'm building a firewall or I'm trying to find what model is being used. Because genuinely, if you were to think of the biggest bottleneck: we don't know what the other side is doing. And this kind of gives visibility to that. That's where I found it a lot more, yeah, makes sense to me.
Caleb Sima: Yeah. I've thought of it more as infrastructure, [00:10:00] architecture, environment, and dependencies of that environment.
But yes, I think it's just a matter of wording or terms. Maybe it's just me being old school: when I hear supply chain, to me, supply chain is indicative of identifying risks in things on which we have dependencies that are owned or operated by third parties.
And the supply chain part goes as deep as it can: the dependencies of that dependency, and that dependency's dependencies. So that maps more to source code libraries and their dependencies, and the reputation of those providers. That's what sticks in my head when people say that, versus current engineering architecture, environment, and its operations, and how do I manage risk around that.
Ashish Rajan: I'm with you. Maybe, to your point, supply chain is not the right word, but the idea behind it is that there are these pockets of opportunities being created [00:11:00] within organizations that were perhaps, I won't say neglected.
I think what AI has done is open up the Pandora's box: hey, that data security policy we had, were we actually implementing it? That became a question that people had to answer. And I think that's where, in my mind, the opportunity came up; every new tech comes with new opportunities.
And as people like to call it, there is a development one, to what you called out, the copilots of the world, what kind of SDK it is looking into; all the models seem to be doing code at the moment. Yeah. Then there is the other part, which is usually the non-technical people, or maybe technical in the data context but not otherwise, and probably not aware of security as well.
I think the data folks who have been doing ML and data analytics for a long time, I don't think I've ever spoken about them on any platform. We never talk about them; at least I personally have not heard it. So that's where it was interesting for me to think, actually, yeah, I wonder what happens in that part of the world.
If you don't talk about it, you probably don't hear about it as well. So that [00:12:00] was one. The other one for me, obviously, is the copilot one you called out already. One thing that I've been finding interesting: I was at Black Hat Europe and they had an AI summit there as well, and JP Morgan Chase was talking, and they said they have basically allowed everyone in their organization to use AI.
Every department. Obviously, I imagine there are technical people using it, more than the person who perhaps may not be as technical. So it's like an open invitation for any business unit to start using AI to see what they can do for their organization. And to me, that perhaps is an opportunity, to what you were saying about SOC automation.
It's perhaps an opportunity for someone, and again, I don't know if this is in production, but they were basically talking about the fact that it's about discovering all of these other AI agents being used. Whether in HR, is Ashish in HR just using the ones that are allowed by the company or not?
Caleb Sima: What I don't see, or what I'm having a hard time getting real data on, is companies and enterprises that are using AI. And "using AI", to me, is by [00:13:00] default defined as employees in their company using ChatGPT; that, to me, is what most people mean, versus AI significantly changing something in an automated fashion, or replacing some system at some scale, allowing them to do things that they haven't been able to do before.
I hear very few examples of that. And that's what I'm looking for: okay, all these big companies, everyone keeps saying we're doing amazing things with AI, we're laying off hundreds and thousands of people because AI takes their work, we're not hiring new people because AI takes it. What are they doing with AI? What have they built that is allowing them to do this? Because I have not seen, at least predominantly, great productionized, stable systems that people have built that allow this to happen, where I'm laying off thousands [00:14:00] of people because AI has now done this job. What is it that is happening?
Ashish Rajan: First of all, I agree. I think where people are coming from, it's funny: I was speaking about AI at a conference, and something that was almost an epiphany came to me while I was speaking with someone. We were just talking about why developers probably are not well equipped to do security and blah, blah, blah.
Anyway, the point being, something really interesting that I thought at that point in time was: are we not productionizing these systems that we keep talking about, or keep hearing about, because we are nervous that losing people or cutting off people is going to look bad for the company? You don't reckon that's the thing?
Caleb Sima: I don't think so. I think it's more that, my gut, you could be right. I don't know. Neither one of us really know, but my gut says it's because no one's built something that works well enough, stable enough, consistently enough in order to replace these systems. Everyone keeps talking about, Oh, we have AI agents.
They're doing all that. But what are they doing? I can never get details. No one's [00:15:00] written publicly, at least, "Hey, we are GE, and this is what we've done to make our HR system more efficient using AI." What exactly have people built that has made this possible? I don't get details, and it's been very difficult to find.
Everyone just says, yes, we do that. Yeah, we're doing this. We're not hiring. We're laying off. We're using AI in 30 percent of our core systems. And, like, to do what? What exactly is being replaced? Because, and I'm not being doubtful, I just want to know, it seems like there's a lack of details around this.
Ashish Rajan: I only have productivity examples. So the closest conversations that I had: I was at a CISO dinner at Black Hat Europe, and I had a conversation at AWS re:Invent, which was really interesting. Someone was talking about how they're using the Atlassian connectivity into Slack and Teams to do investigations.
Again, it goes back to SOC. The person was in Geneva, and every time a [00:16:00] threat is detected, they would have comprehensive information about the threat and a link to the resource. It talks about who created the resource, and one of Atlassian or Slack or whatever pulls all of that together, and they were able to use that to, quote unquote, improve productivity.
It doesn't solve the problem, but gets you to the answer quicker. That was one example to go back to what you said about SOC automation.
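The enrichment flow described here, where a detection is automatically joined with ownership and ticket context before anyone looks at it, can be sketched roughly like this. The dictionaries stand in for what would really be API calls to systems like Jira, Slack, or a CMDB; all names and fields are illustrative assumptions, not any particular vendor's schema:

```python
def enrich_alert(alert: dict, resource_owners: dict, recent_tickets: dict) -> str:
    """Join a raw detection with resource ownership and ticket context,
    producing a briefing an analyst (or an LLM summarizer) can act on quickly.

    `resource_owners` and `recent_tickets` stand in for lookups against
    systems like a CMDB, Jira, or Slack; a real pipeline would call their APIs.
    """
    resource = alert["resource"]
    owner = resource_owners.get(resource, "unknown")
    tickets = recent_tickets.get(resource, [])
    lines = [
        f"Threat: {alert['title']} (severity: {alert['severity']})",
        f"Resource: {resource} (created by: {owner})",
        "Related tickets: " + (", ".join(tickets) if tickets else "none"),
    ]
    return "\n".join(lines)

# Example: one detection joined with stand-in ownership and ticket data.
briefing = enrich_alert(
    {"title": "Suspicious IAM role change", "severity": "high", "resource": "prod-db-01"},
    {"prod-db-01": "ashish@example.com"},
    {"prod-db-01": ["OPS-1423"]},
)
```

The point of the pattern is exactly what's said above: it doesn't solve the investigation, it just gets the analyst to the answer faster by pre-assembling the context.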
Caleb Sima: But again, that's been like the only real-life, concrete, out-there, production-level thing that I've seen AI being used for. Okay, I see it, it's detailed.
It's an example you can see, you can touch, you can feel. Okay, I get it. But what else?
Ashish Rajan: The second example that I have is a non-cybersecurity example. The first one was certain organizations who have managed to put their data into whatever database they want, to search through using an LLM, and be able to get an answer from it.
So people who are working at [00:17:00] really old companies, with 20, 30-year-old data, which is being fed into these systems. And again, this was given to me as an example, so I want to put a caveat: they never mentioned it's in production. They said they're using it, but "using it" could mean they have an alpha or beta version.
Caleb Sima: That usually seems to be indicative of that. Yeah.
Ashish Rajan: So the example they gave: the model of the system is for a legal company, where every time a new person comes in and asks, hey, I want some help, chances are nine out of ten times that case has been dealt with by someone in the past.
Or at least that particular scenario has happened in the past, and some paralegal is going to go through all the documentation to find out, hey, when was this case raised, and all of that information. Usually that information is public. They've been able to use that, and I think LexisNexis is actually monetizing that service as well, from what I understand, as a way to speed up what paralegals are doing.
Again, a productivity example, not an example of cybersecurity, but it's their service, and it's been documented publicly. They even came on Cloud Security Podcast as [00:18:00] well to talk about it; the CISO came and spoke about it. The idea was that, hey, this is information we are able to make available to everyone.
Anyone who is asking or seeking information, at least basic-level information for legal purposes, can go to it before they go to a lawyer, or a paralegal can use it. As an example, for a 1986 someone-versus-the-state case, that was basically found.
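The paralegal workflow described here is essentially retrieval: find the most similar past case before drafting anything. A toy version of that retrieval step, with naive word overlap standing in for the embedding search a real system would use, might look like this (the case data and scoring are illustrative assumptions):

```python
def find_similar_cases(query: str, past_cases: dict, top_n: int = 1) -> list:
    """Rank past cases by naive word overlap with the query.

    A production system would use embeddings and a vector index; word
    overlap is just a stand-in to show the shape of the retrieval step.
    """
    query_words = set(query.lower().split())
    scored = []
    for case_id, summary in past_cases.items():
        overlap = len(query_words & set(summary.lower().split()))
        scored.append((overlap, case_id))
    scored.sort(reverse=True)  # highest-overlap cases first
    return [case_id for _, case_id in scored[:top_n]]

# Toy corpus of past-case summaries (entirely made up for illustration).
cases = {
    "1986-smith-v-state": "1986 appeal smith versus the state property dispute",
    "2003-doe-v-acme": "2003 employment contract dispute doe versus acme",
}
best = find_similar_cases("property dispute versus the state", cases)
```

The retrieved case then becomes the context handed to an LLM, which is the part that speeds up what the paralegal would otherwise do by hand.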
Caleb Sima: Actually, you bring up a good point, because that also brings to mind Harvey.ai, which is clearly making tons of money using AI.
This is a very fast-rising legal company using AI. I've never used it, but I've talked to many people, and they're tremendously successful.
Ashish Rajan: Interesting, right? I'm going to look that up as well. Did you see the picture of Harvey? It was a sort of an H. The second example that I wanted to get to was called out by a CISO. So when people go on holidays these days, your Copilot, or actually not Copilot, Gemini, all of them, I think Gemini and Copilot are the two that are being allowed into Outlook [00:19:00] and Gmail.
So you know how sometimes you come back from holidays and you have over 300 emails that you haven't read? People are running AI over it: give me a summary of which ones I should look at and which ones I should not. I don't know if they're using something like Fabric from Daniel Miessler in the background, but the example they gave was: instead of going through 300 emails, it goes through and identifies what could potentially be important that I should skim over and look into, and gives me a summary of how many were just conversations, how many were actual actions required or expected of me, and how many I was cc'd on versus directly addressed in.
Caleb Sima: Yeah, I will say this, some real talk here: I think it does not do a great job.
People who are using Gemini for this should be double-checking their results. I have found it to be really rough around being comprehensive and accurate on [00:20:00] these things. Like, it's bad.
Ashish Rajan: Funny you say that, because the friend example that I was talking about, where I pointed ChatGPT at a statue at Bank in London.
The same friend uses Gemini, which doesn't do video at the moment, at least not in the UK. So he took a picture of the building, shared it, and asked it what it is. It could not comprehend what it was.
Caleb Sima: Yeah, it's not there. But to your point, who does do a really good job of that is Shortwave, the email app.
They have a really good AI assistant, and they also have beta stuff around things like smart labels, where basically every email goes to an LLM and you write your own prompt for it. So you can write something that says: hey, only emails that look to be directly actionable by me, with me in the To line, that are not a promotion or a survey, please label as "action needed". And it will go and do that quite successfully. So to your point, the productivity gain you're talking about is definitely doable. [00:21:00] Gemini is not very good at it, though.
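The smart-label rule Caleb describes (directly actionable, me in the To line, not a promotion or a survey) is judged by an LLM against the user's own prompt. As a rough sketch of the shape of that feature, here is a deterministic stand-in for the LLM's judgment; the heuristics below are placeholders for what the model would actually decide:

```python
from typing import Optional

def smart_label(email: dict, my_address: str) -> Optional[str]:
    """Stand-in for an LLM judging one email against a user-written rule:
    'only emails directly actionable by me, with me in the To line,
    that are not a promotion or a survey -> label as action needed'.

    In the real feature, the judgment call is made by an LLM against the
    user's prompt; here simple checks stand in for that judgment.
    """
    if my_address not in email.get("to", []):
        return None  # only cc'd, or not addressed at all
    body = email.get("body", "").lower()
    if "unsubscribe" in body or "survey" in body:
        return None  # looks like a promotion or a survey
    return "action needed"
```

In the real product each incoming email would be classified this way as it arrives, so the backlog Ashish describes is already triaged by the time you get back from holidays.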
Ashish Rajan: That email was the other example that I called out. But if I were to bring this back, because this is the end-of-year episode, the one thing that I wanted to bring up is, in terms of tooling, looking into 2025, SOC automation is probably the one that stands out the most.
That's basically the TLDR so far. Am I getting that right?
Caleb Sima: At least so far. I think in 2024, when I think of actual real-life time savings and productivity that you can touch, feel, and see, SOC automation is the winner. And again, I'm leaving out AI security companies who do things like prompt firewalls, which are, true, also real-life, touchable, productized AI capabilities. I'm leaving them out just in the fact that they are only focused on AI security, though I think they would probably count. But just in terms of real, actual productized, touch-and-feel AI-core companies,
I think that is right.
Ashish Rajan: So to your point, doesn't [00:22:00] that mean the AI security companies are taking that a step further? Then most of the security people in 2025 would probably be focusing on securing productivity applications, like the ones we're concerned about with Copilot and everything else.
Because to your point about the current space we're in, we have SOC automation, LLM firewalls, prompt management, or prompt firewalls, whatever you call them. If you were to use that as a stepping stone and double-click into that particular space for AI security companies directly: what are some of the companies, or problem spaces, that you feel would be coming out in 2025 that may have not been addressed, or should be addressed?
Caleb Sima: It's going to be an interesting year next year, because my guess is, by around Q2 to Q3 of next year is when we should see substantial, innovative, working products that are AI-core, doing a lot with AI in the market. If we do not see that in Q2, [00:23:00] Q3 of next year, I think that is an indicator of failure in using Gen AI in real-life production applications, because we've now had a couple of years of real AI innovation and speed, with a lot of engineering focus and talent around it. And if you don't start seeing things in a year to a year and a half coming to market that give some promise around what this thing can do, that will be a very worrisome sign.
Ashish Rajan: So Q2, Q3, are we being too optimistic?
Caleb Sima: I don't think so. When did AI really make its name on the scene? This would have been what, 2022? Yeah.
Ashish Rajan: November 2022. Yeah. So yeah,
Caleb Sima: November 2022. I think now we've got about a year for people to figure things out. So November 2023, and then another year of investing, building, trying to build something, to build a dream, November [00:24:00] 2024.
And then half a year to get those things out to market, at least V1s in public. If you think about all the startups that are focusing on this and all the people that have invested, it takes about a year to a year and a half to produce a market-viable product. So you should start seeing these things pop up.
My estimate is Q2, Q3 of next year, en masse. All of the investment that's happening this year in building these things should be preparing to launch next year. It doesn't have to be with customers. Right now we're sitting here saying only SOC ops is really where you're seeing real AI impact that looks like it's making sense.
I feel like by next year, we should see a lot more AI-core companies that are doing substantially provable AI things in real life, right? That's a year and a half worth of work around AI at this point, and there should be something to show for it, with the hundreds of thousands of people and [00:25:00] talent that's been focused on it for the past two years.
Ashish Rajan: I have a bit of a different perspective on who will really shine in 2025. As much as there would be a few blips on the radar for AI, I definitely believe the data security people are going to be the bigger winners of 2025, in my mind. And this could be controversial. To your point, yep, Q2, Q3, but I still feel most people that I talk to still don't have their data right. As much as amazing AI projects may exist, and AI security projects may exist, I need to get that data into a pipeline somewhere for someone to look at it and make that prediction of, is this good or bad?
Caleb Sima: Yeah I'm with you actually.
I think your thinking is well aligned with mine: data security, and actually AI's impact on doing data security better, are very attuned to doing this really well. Cool. So yes, I'm with you. It is the time for them, right?
Ashish Rajan: Now I see all the data providers or platforms have started talking about security as well, [00:26:00] which is really interesting.
They're trying to almost, and I think it was Databricks, one of the companies that I was talking to, they're almost trying to be, I won't say Dropbox is the right word for it, but it's almost: if you were to think of one name for everything data, [00:26:00] and you are looking for a provider, instead of managing MLOps and whatever else, they want to be that name.
And I think Databricks is one of the examples. Yeah. And there's a couple of other ones. Snowflake is trying the same as well. They're all becoming like
Datadog
Yeah, they're all trying to be like cloud providers. And I'm going, wait, this is no longer just a data product. Now they're talking about, you can run the entire pipeline. Just send us the data, we'll do everything for you, just log in. Whoa, this is a very different kind of supply chain we're opening up to. And because these people have had the time and the money to process a lot of data and be the default data storage place, it's like what happened with GitHub, where suddenly it became the default for everyone, and now GitLab and GitHub are the only ones people talk about.[00:27:00]
I feel like there'll be an emergence of that for data as well, and data security as well. So my thinking here is, that's where my money is: on data security companies at the moment. Definitely a blip on the radar for AI, I'm not discounting that at all, but I don't personally believe it will be that massive.
That when we go into Q4 next year and talk about this, it will be like, oh, okay, more than SOC automation. I'll be curious to see which AI companies reach a mass scale, to the point that everyone's talking about them. I'm very skeptical, even though I'm usually an optimistic person. I'm more skeptical on what an AI security product would look like; I'm more optimistic that data security will get a lot more limelight.
Caleb Sima: You're actually speaking a little bit bigger than data security. My mind was stuck on data security startups, whereas when you're talking about Databricks and Snowflake and Datadog, these are guys that want to become, effectively, a data operating system for the [00:28:00] enterprise, right? They want to cover everything you do to consume, manage, and use data, in every aspect. And data security is nothing more than just a slice of that, one that is mandatory. Right?
Ashish Rajan: Yeah. They have to do it. They're already doing it.
Caleb Sima: Correct. Very similar. You've heard Ali from Databricks say many times that, hey, they want to go be bigger than Salesforce, right? They want to be that kind of size. Salesforce, you could somewhat say, maybe with a little bit of a wrinkle, is the software automation OS for the enterprise, right?
Like everyone uses Salesforce for their business operations and business flows, and I think they want to be the data version of that. So yeah, for sure, these guys are always going to be winners; I'm very happy about that. However, what I was thinking about for next year was just startups that are AI-core, and not even cybersecurity, just AI-core startups, things that we should see.
Yeah, companies like Harvey, people who are coming out making substantial mega-jumps [00:29:00] in what we do every day. I would expect to see that.
Ashish Rajan: Oh, okay. So you're saying that by 2025 Q2, Q3, we should see a lot more AI companies making substantial changes in specific industries.
Caleb Sima: Correct. And if we don't see that, to me that is a worrisome sign. It would mean productizing Gen AI is clearly way more problematic than we predicted.
Ashish Rajan: So you're saying on the security startup side, in 2025 Q2, Q3, we will still see quite a bit there as well from cybersecurity.
Caleb Sima: I just think in general, anyone.
In Q2, Q3 of next year, really Q2, my expectation is we should see an influx of announcements and releases of these new startups that are AI at their core, building substantial improvements in automation using AI across the board, [00:30:00] in life, in productivity, in sales.
You just name it, there should be a lot of these guys coming to market, including cybersecurity, right? Coming to market saying, this is now what we're doing, and here is our product that is now available to use. Because when I look at the startup timeline and the tech timeline, I would expect to see the first wave of people dedicated to building AI-core startups announcing next year.
And if we don't see that, then that would indicate to me that they can't quite get the value or the fit using Gen AI properly to do the things they thought they would be able to do. And that would be worrisome, because there's been enough time, the technology is there, and we should see real products making real differences in our lives at that point.
Ashish Rajan: Actually, that's a good point, and I'm in agreement with you on that [00:31:00] particular note. So our 2025 predictions have kind of collated out there, but CISOs in medium and large enterprises all have different kinds of challenges. Let me change the lens from external to internal. I still live in a world where I go to Google first instead of trying to figure out if ChatGPT knows the answer.
And then I'm questioning whether I should verify the answers ChatGPT has given me. You should always. We're also in a world where Gartner and other companies have started talking about agentic AI as a theme for 2025, and a lot of people have started picking up that marketing term as well.
I'm sure you've heard the term, so how would you describe it for CISOs? And do you see practicality in cybersecurity people using AI agents internally as part of their cybersecurity program, or considering that, hey, we should have someone with that skill set on our team in 2025? Because if most of the products are going to be [00:32:00] production-ready by Q2, Q3, we'll probably be in a similar situation to cloud, where DevOps took off and then security had to follow a couple of years later.
But we have an opportunity here where everyone's talking about agentic AI, everyone's talking about how you can use ChatGPT, Claude and other tools. What can CISOs consider for their cybersecurity program for 2025 if they wanted to include some kind of AI? What's practical? We've been talking about startups and what people can buy and sign up for.
Are there things we can do internally for productivity that you can think of?
Caleb Sima: I think first I'm going to be practical. The word agentic gets thrown around all the time because it's the new hot term, but I haven't really heard people talk about defining it. What I have seen is people using the term to refer to anything that is automated. So, for example, scripts that you write to automate your stuff are now being labeled as agents and agentic. And [00:33:00] automation in general is now being labeled as agentic, because everyone just wants to say they're using AI agents when it's nothing more than code.
And a script to automate, which is, again, just another example of the hype being thrown around. So first, we should define the difference between automation and scripting versus agents. What's the difference between the two? I will say that I do think AI agents are nothing more than a step up from automation, right?
It really is just the next level of automation. But what makes an agent versus a script? In my view, and this is my personal view only, the difference is that an agent uses Gen AI to define and identify its roadmap of next steps. It has a level of autonomy to create its next set of [00:34:00] actions without them being predefined.
That to me is agentic versus just automation. For example, say you want to automate something. I'm going to pick the SOC as an example here. I see a phishing email. If I write a playbook for this in a SOAR, that is just automation, right? That's a script where I've defined very explicitly: when you see this email, is it phishing or is it not?
And if it's phishing, you need to do this; if it's not, then you need to do that. Versus if it's agentic, the first question is, hey, is it phishing, and AI makes the decision for you. Then it says, based on the fact that it is phishing, I have at my disposal a set of tools and a set of things I can choose to do.
I will create my next steps to go and do those things. So I will say, oh, this is phishing, but I notice this looks like spearphishing that has only targeted a specific set of executives. [00:35:00] Therefore, I think this is a higher priority, which I'm going to escalate to a higher level analyst.
And to my owner, and then we'll make that decision and make that roadmap. Or if it sees it as phishing and says, oh, this looks like standard phishing, it's been sent to everyone in the company, we're just going to block it and remove it. It makes that decision autonomously. That is an agent. It is taking actions in order to go do things.
I'm going to use this tool to verify what I think it should be. These are all definitions that should be applied, because I think a lot of people are just labeling automation as agents today. That's at least what I have been seeing.
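To make Caleb's distinction concrete, here is a minimal sketch in Python. Everything in it is a hypothetical illustration, not any real SOAR or LLM API: the keyword-based phishing check, the tool names, and `fake_llm_decide` (a stand-in for a Gen AI call) are all assumptions. The point is only the structural difference he describes: the playbook's branches are written ahead of time, while the agent chooses its next action from a set of tools based on context.

```python
def scripted_playbook(email):
    """SOAR-style automation: every branch is explicitly defined in advance."""
    if "reset your password" in email["body"].lower():
        return ["quarantine", "notify_user"]
    return ["close_ticket"]

def fake_llm_decide(context, tools):
    """Stand-in for a Gen AI call that picks the next action autonomously.
    A real agent would send the context and tool list to a model here."""
    if context["is_phishing"] and context["targets_executives"]:
        return "escalate_to_senior_analyst"   # spearphishing: higher priority
    if context["is_phishing"]:
        return "block_and_remove"             # standard mass phishing
    return "close_ticket"

def agentic_triage(email):
    """Agentic flow: gather context, then let the model choose from its tools."""
    tools = ["block_and_remove", "escalate_to_senior_analyst", "close_ticket"]
    context = {
        "is_phishing": "reset your password" in email["body"].lower(),
        # "Only targeted at executives" approximated by the recipient domain.
        "targets_executives": all(r.endswith("@exec.example.com")
                                  for r in email["recipients"]),
    }
    return fake_llm_decide(context, tools)
```

The design point is that `scripted_playbook` can only ever do what its author wrote down, whereas the agentic version routes the same email differently depending on context it discovered itself.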
Ashish Rajan: But do you find there's a need for talent in security teams moving forward?
Many people are also saying that it's not that AI will take your job, it's people who know how to use AI who are going to take the jobs, or at least be better at their jobs. But how many people out there are even considering that as a team skill? Should they include someone, for lack of a [00:36:00] better word, an AI champion on the team who's out there experimenting?
What can we do? Should that be a real consideration for people? At least personally, I feel yes, but I don't know what you think.
Caleb Sima: Here's the struggle. Being an operator, everything is a problem and everything needs to be done. And so you have a set of priorities that you must focus on that are the most important priorities for your team and for yourself to accomplish.
AI may be an amazing, cool thing, but when does your team have the time? When do you set aside the time? When does it become: I need to go hire someone who knows how to write AI agents, when AI agents aren't even defined and are super, super new and very cutting edge? Or is it better that I go hire the new appsec guy, because we have a ton of pain in appsec, right?
And so here's the thing: AI agents are cool, AI itself is cool, but I imagine, at least today, that is [00:37:00] maybe 1/50th of the things I need to worry about in an organization as a CISO. And I also imagine it's probably fairly low on the stack of priorities, because yes, AI is changing the enterprise.
Yes, I need to be educated about it. Yes, I need to know about it. But do I need to specifically create headcount or refocus my talent's time in order to train them on AI agents as a mandate? I think that's pretty low. I am a cutting edge guy, right? I love learning, and I would hope the people on my security team also love learning on their own to be able to go build these things.
But when you're a CISO, give it a year or so to see what happens. Where does it play out? What do my engineers start building? If my engineers start writing AI agents and they start creating a lot of security risks for us, then we definitely need people on our team who understand it. Do I want to go and invest in that to make us more operationally efficient?
No, I think it's [00:38:00] too early to invest in my team's talent to make us operationally efficient when AI agents are super, super bleeding edge. To me, we probably have higher priorities and better places to focus our time. That's my stab in the dark for most CISOs.
Ashish Rajan: I'm maybe leaning the other way on this, and the reason is that I started in the cloud field, where I saw this happen with DevOps. Exactly what you said: people initially didn't believe it, there was a lot of resistance, then people started using it on the development side.
And eventually everyone had a cloud security position, and now look at what is happening in that world. Maybe it's my personal bias, but to me it seems like an opportunity being presented to cybersecurity again. Hey, you know the mistake we made when we did not pick up on that DevOps thing that was making waves everywhere?
We said, oh, it's not going to come to [00:39:00] me, and then suddenly you have 25 projects one day.
Caleb Sima: To someone who is a laggard, you are correct. But remember how long it really took for cloud security to get to that level: eight, nine years. And again, it's not like a CISO is going to ignore it and say that's not my problem.
I think that's not the case. It's more of, hey, I get it, I understand it, but is it my priority for my team to adapt and spend time on that now, versus saying, let's keep an eye on it and see how this progresses in the next quarter or into next year? And also, by the way, my priority is the company and the organization.
And I want to build a great security team to protect that organization. But if my organization does not make the decision to get into AI or move into agentic systems, I need to weigh my [00:40:00] priority on both the risk of that and the benefit of using it to make my team more efficient.
Honestly, wait until the security vendors start showing up, because there are enough of those guys who are going to sell to me on making my team more efficient using AI and agents. Again, depending on the size of the team, to me it just doesn't make sense to mandate my team or go hire people who know how to write AI agents. First of all, the pool of people who actually know how to write AI is super small, and they're probably working for startups right now.
And two, it's just not a good use of resources considering how early and unproven this is. Give it a year; give it till Q2, Q3 of next year, and let's see where this thing goes. I'm giving a very practical, even-keeled view of this, right?
Ashish Rajan: We'll let people comment on [00:41:00] where they stand on this when they watch the stream. It's funny, you've obviously spent a lot longer in cybersecurity than I have, and as you mentioned, most people, most cybersecurity companies or individual operators, tend to see a wave and wait for a cybersecurity vendor.
And this is a very high level generalization, so I don't know how big that slice of cybersecurity is. But a lot of CISOs, to your point, would wait for a cybersecurity vendor to come and say, hey, by the way, this seems to be a thing you should be looking at, and then they have an epiphany: oh yeah.
They obviously are aware.
Caleb Sima: Or my team comes and says, Hey, this is something that we want to go do. That's also great.
Ashish Rajan: Yeah, a hundred percent. To your point, it makes a hundred percent practical sense to just wait it out and see: is this a fire that's going to be put out soon, or does it turn into a bonfire? I'd love for people to comment on this, but I do want to move to the next topic as well.
Caleb Sima: Yeah. I don't think it's a fire that gets put out. I'll give you an [00:42:00] example. At Robinhood, when Gen AI first came out, I wanted to do a hackathon around AI and get the whole team around it. But you have to find the time to reserve, create it and manage it, alongside all of the engineering and operational projects that all have deadlines in a company.
Was the hackathon more important than those? No. There are 30 things that are way more important than having a hackathon on AI, even though I really wanted to do it, and actually a lot of people on the team wanted to do it. You just can't carve that out, because the priority isn't there. We did eventually end up doing it, by the way, but it took a lot of work. It's hard.
Ashish Rajan: It'd be interesting to see what people in the comments say as well. Was there anything in 2024 that you would want to [00:43:00] ditch and not see in 2025? I'm sure it's the word agentic, but we're already way past that now. There are a lot of things we want to improve and move forward on, which we have been talking about.
Is there something you'd want people to get rid of or ditch completely about AI in 2025?
Caleb Sima: I have a bit of a bone to pick around AI safety. It's both positive and negative: the amount of fear, but then the amount of forced controls being applied. It feels like, here's AI, and we need to be careful about using it in the right ways, right?
We clearly don't want to enable creating new biological weapons or whatever these things happen to be. However, I think there's been an overreach. It's: hey, I don't trust people, and I am now putting myself [00:44:00] up as the buffer and filter of what is good versus what is not, and I'm going to give a five-year-old's version of the product to everybody, with all sorts of barriers and things in place.
I feel like next year, as we move forward, there's going to be a lot of backlash around that. I'd love to see, and this is going to be very controversial, less safety and fewer boundaries being put around models.
Ashish Rajan: Yeah. Okay. That'll be interesting. I definitely did not think of the privacy part. What I think I would like to ditch personally, and mine is controversial too, is that I'm noticing a lot of people trying to favor their own country's AI companies. They don't want to use the one that is clearly a lot more advanced; they're choosing their more local, national AI companies to support the local startups, which is not a bad thing. They should do that.
But bringing it back to the business perspective: if you get more [00:45:00] efficiency from something that is already proven, people are choosing to ignore that and going for a personal preference of, hey, I'm only going to favor local startups. Again, nothing wrong with it.
I just feel like, at this point in time, where we're still trying to figure out what the right thing is: if you need heart surgery, would you rather use the best heart surgeon, or the surgeon who sometimes does hearts and sometimes does lungs?
Caleb Sima: Are you talking about the U.S. restricting its model usage to U.S. only? That sort of thing?
Ashish Rajan: That sort of thing, yes.
Caleb Sima: Being very nation oriented and divisive and protective.
Ashish Rajan: So there's a company like Oura, the ring health company, which does sleep analysis and everything, that is now syncing.
Caleb Sima: Yeah, the ring. Yep, I'm familiar with it. That's right.
Ashish Rajan: They recently announced that they have started syncing with a glucose monitor, so both sets of data can be synced for a short period. Now that is geo-locked, but I would love to have access to [00:46:00] that data if I had an Oura ring. I don't have one, but if I wanted to know, hey, how does my sleep affect my glucose level? Which, to your point, Q2, Q3, I'm expecting things like these to continue to come out, where now we have multiple sets of data.
I can ask questions about what my physical health should be based on my sleep patterns, bad sleep, anything else. My prediction is that I should be living like an athlete by Q4, but I would not be, because it's all geo-locked.
Caleb Sima: You're basically saying information should be free for the betterment of all, right? Yes.
Which, by the way, is the original hacker manifesto statement, right? I agree with you. But you know what? It's just going to get worse.
Ashish Rajan: Oh, right.
Caleb Sima: Yeah. To your point, you're right, that should be left behind, but actually I think the walls are going to get bigger and thicker, with way more data walls and silos. It's just going to get worse. That's my prediction.
Ashish Rajan: Because each nation would [00:47:00] want to be the power.
Caleb Sima: Yeah, because information is now turning out to be very critical. Having publicly crawlable websites with all your data and information that AI agents are going to pick up and put into models is now problematic, right?
Anti-scraping and bot protection are going to go up. All of this information is going to be siloed, same as GDPR: hey, information on UK citizens needs to be stored here, data has to be stored here, we're not going to share any of this information. All of this stuff I think is going to get worse, and that's my prediction.
It's going to get badder.
Ashish Rajan: Yeah, I was hoping they'd ditch it, going back to the original hacker manifesto. I definitely feel it would have been great for us to share. It's like Apple only selling the Vision Pro in the US, which is almost ridiculous.
It could have so many more use cases; with a developer SDK available, people anywhere in the world should be able to do many things with it. [00:48:00] Geo-locking goes against that, which is why I'm hoping people would ditch it. But to your point, I can see this in the UK as well, where there are individual summits happening for AI on how the UK can be the leading country for it, and Europe has the same thing.
I'm sure other parts of the world have the same thing.
Caleb Sima: It's going to get worse. Yeah, for sure.
Ashish Rajan: We're going to ask Claude and OpenAI for predictions for AI security for 2025. I ran something on my end. What do you prefer? Do you prefer Gemini, Claude?
Caleb Sima: Claude to me is my preference, but
Ashish Rajan: You go Claude.
I'm going to put this into ChatGPT: do some research and share your predictions for AI security in 2025, for an audience of cybersecurity leaders and practitioners working in various industries across the world. Going in no particular preference, just the order it presented itself.
Proliferation of AI-driven cyber attacks. Evolution of defensive AI measures. The two things we were talking about. Emergence of AI governance and ethical frameworks, which was the last thing we spoke about, where there will be a lot more walls [00:49:00] and data may become a lot more interesting.
Oh, quantum computing and cryptography challenges.
Caleb Sima: Yeah. That I think is an interesting one. Look at all the things that are going on with quantum computing and breaking of crypto, man. That's scary stuff.
Ashish Rajan: The new chip that Google released, or they spoke about their new quantum chip. Yeah.
Yeah.
Caleb Sima: This is like some scary things, man.
Ashish Rajan: Yeah. It's not just AI coming in; we have quantum as well, both arriving at the same time.
Caleb Sima: Claude is pretty good, man. It's spat out its stuff, and I'm like, you get it. It's pretty good.
Ashish Rajan: Okay. I'm going to quickly finish mine and we'll go into yours.
So quantum cryptography challenges, then increased focus on supply chain security, kind of what we spoke about earlier as well. Okay. Regulatory developments and compliance imperatives, which is what we were talking about; I think the groundwork has been laid there. And the last one is AI's role in addressing the talent shortage.
Not that skilled professionals will be replaced, but bridging the gap: automating routine tasks and augmenting human capabilities, enabling security teams to manage threats more [00:50:00] effectively despite limited resources.
Caleb Sima: Yeah. That's a pretty good generic list.
Ashish Rajan: These are like the points we spoke about for the entire episode so far; I feel like we touched on all of them. But I'm curious to hear about Claude. What is Claude saying?
Caleb Sima: It's pretty good. It's pretty good. Okay. Model supply chain, the supply chain stuff, so that is the same. Prompt injection evolution: adversarial prompting will become more sophisticated. I agree. Model poisoning will definitely start rising as a threat as people do training and fine tuning. I 100 percent agree.
AI-enhanced SOCs will need to adapt to defend against AI-powered attacks.
Ashish Rajan: Okay. Wow. Okay.
Caleb Sima: Pretty spot on. Regulatory compliance. This is a good one: authentication and access control, and zero trust architectures will need to evolve to account for AI system interactions, such as agents, et cetera.
Ashish Rajan: Oh, that's a good one. [00:51:00]
Caleb Sima: It's a very good one. Privacy preserving: organizations will need to focus on stronger privacy controls around AI systems. Also very good. Recommended preparatory actions for leaders; let me look at them. Ah, these are all pretty generic: gain expertise, develop skills. Emerging technologies to watch: federated learning security frameworks. Interesting. Homomorphic encryption for AI applications, which goes back to privacy preserving, and maybe a little bit of the quantum stuff will be interesting. Secure multiparty computation for model training, advanced watermarking techniques. Yep. And AI-specific security monitoring tools. Wow.
Okay. Not bad.
Ashish Rajan: Yeah, I would say Claude is more practical and ChatGPT more analyst style. It has escalation in sophisticated adversarial attacks; proliferation of AI-augmented defenses, which is [00:52:00] your context-aware detection and automated response; model improvement loops in a DevOps-style ML life cycle.
Under escalation of sophisticated adversarial attacks, it said data poisoning, model evasion and transfer attacks, going back to what Claude said. The third one is regulatory compliance pressures: emerging regulations and standards, third party model audits, and model certifications before using a third party AI product in a security workflow.
And that's a pretty good one as well. The fourth one: rise of defensive AI collaboration and information sharing. AI threat intelligence exchange: security vendors and large enterprises will pool together and anonymize threat data. The second bullet point there is open source security ML frameworks.
Community-driven repositories of security-focused ML algorithms, defensive playbooks and adversarial test frameworks will mature. By 2025, practitioners will benefit from curated, regularly updated libraries and open source AI frameworks. The fifth one: human in the loop will remain critical.
Interpretable AI [00:53:00] for executive decision making, and the second one is ethical and strategic considerations. So humans will stay in the loop for ethical and strategic considerations as well as for executive decisions via interpretable AI, which kind of makes sense. The final one: increased use of AI in supply chain security, with integrity verification at scale. Organizations will deploy AI-based provenance checks and behavior anomaly detection to validate components sourced from extensive global software and hardware supply chains.
Caleb Sima: Yeah, so it did pretty good. Human in the loop is a good add.
Ashish Rajan: The rest was very similar to what you had, but none of this came up when I asked it to do research; it went very analyst mode. Maybe this also highlights that when you do research, most of the blogs don't really have an opinion.
They usually give you a broad set of information; they won't say action one, two, three, four. That kind of information is usually gated behind a report you have to put your email in for. Maybe that's why research gave general [00:54:00] information where asking for a prediction didn't.
But I guess we still have to make a conclusion. Which one do we lean on more, or is it a balance? Claude, or ChatGPT with o1? Are you using Sonnet? No. What's it called?
Caleb Sima: This is what I need: a single chat interface that will send my prompt to OpenAI and Claude, then take the answers, merge them together and deliver it.
Ashish Rajan: Oh, but verify it as well, that it makes sense with context.
Caleb Sima: I don't need it to verify. Even if you just send it to both, give me the summarization and the ability to view either answer. That would actually be a great service. Someone has got to have built that already.
Ashish Rajan: I mean, Q2, Q3, man. Q2, Q3, someone's got to do that.
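A rough sketch of the tool Caleb is describing, assuming nothing about any existing product: fan one prompt out to several model backends in parallel, then return the individual answers plus a merge. The provider functions here are stubs I made up for illustration; a real version would call the OpenAI and Anthropic SDKs in their place, and might ask a third model to do the summarization rather than simply concatenating.

```python
from concurrent.futures import ThreadPoolExecutor

def ask_openai(prompt):
    # Stub: replace with a real OpenAI chat completion call.
    return f"[openai] answer to: {prompt}"

def ask_claude(prompt):
    # Stub: replace with a real Anthropic messages call.
    return f"[claude] answer to: {prompt}"

def fan_out(prompt, providers):
    """Send one prompt to every provider in parallel; keep each raw answer."""
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, prompt)
                   for name, fn in providers.items()}
        return {name: f.result() for name, f in futures.items()}

def merge(answers):
    """Naive 'summarization': concatenate answers, labeled per provider."""
    return "\n---\n".join(f"{name}: {text}"
                          for name, text in sorted(answers.items()))

providers = {"openai": ask_openai, "claude": ask_claude}
answers = fan_out("AI security predictions for 2025", providers)
print(merge(answers))
```

Keeping the raw per-provider answers around (rather than only the merged text) matches Caleb's ask: the summary plus the ability to view either answer.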
So we're not leaning toward either one, is that where we're going with this? Both seem good.
Caleb Sima: I'm still feeling Claude a bit, but I would say [00:55:00] the o1 model from OpenAI was not bad either.
Ashish Rajan: Yeah. But I guess I loved that for my adversarial examples I had model evasion and transfer attacks, adversarial ML and data poisoning.
Data poisoning was there in Claude as well.
Caleb Sima: And there were also a bunch of bullet points that Claude put out around that which I didn't read out. Oh yeah.
Ashish Rajan: Same on my end as well. So if I had to pick one of the two, I would probably put them on par.
If I didn't have to judge this and I [00:55:00] only had the ChatGPT output, I would have still been happy. But they both made sense when we went through them, and you and I independently came to the same conclusion. Anyway.
That was basically my prediction over here. Let's go build, people. Q2, Q3 is not far. When we run this podcast episode at the beginning of Q2, we're going to have someone from the team come back to this episode and talk about what we predicted. I hope everyone had a great new year and [00:56:00] holiday season as well.
I hope everyone enjoys it. Thank you so much for tuning in. We'll see you next year, in 2025. Thanks, Caleb, as always; appreciate 2024, man. All right, everyone, thanks for your time. We'll see you in 2025. Thank you so much for listening to that episode of AI Cybersecurity Podcast. If you are wondering why we aren't covering all topics, it's because the field is evolving too much, too quickly.
So we may not even know some of the topics we have not covered. If you know of a topic we should cover on AI Cybersecurity Podcast, or someone we should bring on as a guest, definitely email us at info@cloudsecuritypodcast.tv. Which reminds me, we have a sister podcast called Cloud Security Podcast, where we talk about everything cloud security with leaders. Similar to the AI cybersecurity conversation, we focus on cloud security, specifically in the public cloud environment. If you find it helpful, definitely check out www.cloudsecuritypodcast.tv. Otherwise, I will look forward to seeing you on the next episode of AI Cybersecurity Podcast. Have a great one. Peace.