AI is evolving fast, and AI agents are the latest buzzword. But what exactly are they? Are they truly intelligent, or just automation in disguise? In this episode, Caleb Sima and Ashish Rajan spoke to Daniel Miessler—a cybersecurity veteran who is now deep into AI security research.
🎙️ In this episode, we cover:
✅ What AI agents really are (and what they’re NOT)
✅ How AI is shifting from searching to making decisions
✅ The biggest myths and misconceptions about AI automation
✅ Why most companies calling their tools “AI agents” are misleading you
✅ How AI agents will impact cybersecurity, business, and the future of work
✅ The security risks and opportunities no one is talking about
Questions asked:
00:00 Introduction
03:50 What are AI Agents?
06:53 Use case for AI Agents
14:39 Can AI Agents be used for security today?
22:06 AI Agent’s impact on Attackers and Defenders in Cybersecurity
37:05 AI Agents and Non Human Identities
45:22 The big picture with AI Agents
48:28 Transparency and Ethics for AI Agents
58:36 Whats exciting about future of AI Agents?
01:08 Would there still be value for foundational knowledge
Caleb Sima: [00:00:00] One thing to set in some ways to think about agents and AI is, in our first iteration, AI is largely we use it as the replacement for Google, right? Like it is a search engine. It helps convert right? We basically look at this thing as searching, converting and creating.
That's what we think about it. And then in phase 2, what happens is AI moves into making decisions and acting on those decisions. This is where we're now moved into.
AI agents, imagine the ability to add about 10,000 extra team members or colleagues in your organization in each of the departments.
So you can excel your productivity by a thousand or probably even more. That's what AI agent is providing. And then this conversation with Daniel Miessler who's the returning guest to the AI Cybersecurity Podcast. We spoke about AI agents, what they are, the two sides of AI agents as they exist today, and all the [00:01:00] marketing that you've been thrown, which is actually wrong and is not what agents are supposed to be.
Daniel does a great job of defining what AI agents should be, especially when you're trying to assess what you're looking at, is that an AI tool or an AI agent? All that and a lot more in this conversation with Daniel.
I hope you enjoyed this because we spoke about everything to do with AI agent all the way to all the questions that were asked on LinkedIn Twitter by folks like yourself. Feel free to continue to respond to some of the questions which we'll include in the future episodes as well. Shout out Adam Atkinson and Rares-Adrian for their questions which were answered during the episode as well.
All the way to what should we be really working on when AI agents are going to solve all the problem. If you know someone who's trying to work towards an AI agent or building one, definitely share this episode with them. At least the cybersecurity lens of AI agent would be very much something that you would get to learn from this episode.
And as always, if you have been listening or watching episodes of AI cybersecurity podcast for a while, whether it's on audio platforms like Spotify or iTunes or video platforms like LinkedIn or YouTube, definitely give us a follow, subscribe. It [00:02:00] means a lot for us. And if you want to take a few seconds to even drop us a review, I would really appreciate that as well.
Thank you so much for tuning in and I hope you enjoy this episode on AI agents. I'll talk to you soon. Peace. Hello, welcome to another episode of AI Cybersecurity Podcast. Today we have Daniel Miessler. I did not intend to do the intro like that, but it's gonna go on like that. But dude, welcome to AI Cybersecurity Podcast for the third time, I feel.
But today's conversation is about AI agents A, if you could give a brief intro about yourself for people who may not have heard your previous episodes, and then we can get into the whole AI agent conversation, would you mind giving an intro about yourself, man?
Daniel Miessler: Yeah, sure. I've been in security for a very long time, like 25 years, almost as long as Caleb.
Not quite. He's yeah, people don't realize he's one of the actual OGs here. But yeah, I've been in here in security for a long time. And as of the end of 22, I went pretty hardcore into AI. I don't know if you remember Caleb, when I texted you in [00:03:00] like November or something, and I was like, stop, whatever you're doing, stop, whatever you're doing.
It, this was like. October, November of 22, and I'm like, stop, whatever you're doing and go into AI. You're like, dude, what are you, should we have a call? What are you talking about?
Yeah. So I made a pretty hard shift right at that moment. And I had just gone independent as well. I had just left Robinhood and basically went independent as my own business and four months later, ChatGPT hit. So it was like. Great timing. So now I'm combining security and AI.
Ashish Rajan: Awesome. And this probably leads me to the topic of the episode as well.
AI agents, could you I think, obviously, before the recording, we were talking about how my entire feed is filled with AI agents these days. I've got a few questions people are asking on my feed as well. But to set the scene, how do you define AI agents?
Daniel Miessler: Yeah, I've got a pretty good definition. I think about definitions a lot because I feel like if you don't have good [00:04:00] definitions people are just like talking at or around each other and not really with each other.
So I define it as an AI system component, so it's part of an AI system. It's not the entire system that's capable of autonomously taking multiple steps toward a goal and doing so in a way that previously only a human could do it. So there's a few key pieces there. One, it's pursuing a goal. That's like mandatory.
It's doing it by itself. That's also mandatory. And then the other one is The steps that it's doing previously would have required a human because we've already had automation for the longest time. Autonomous systems doing automation. That's pretty much all of computing. The difference here is those steps are things that require the human intelligence prior.
And that's what makes agents different.
Caleb Sima: I can agree with this definition. Because I feel like if you take this definition and you apply it to all the crap right now that everyone's saying is [00:05:00] AI agents, it would fail.
That is like everyone is just calling automation AI agents. Oh this thing will automatically push it to your GitHub repo and resolve these conflicts.
And I'm like. Yeah, scripts do that too. It's just like you can't, but now like you can't be a company in the industry without calling everything an AI agent. So not only do you have AI, but you have AI agents now, even though it is nothing more than automation and scripts. So it's very frustrating.
Ashish Rajan: So is Zapier not the number one AI agent in the market?
Caleb Sima: I will say it's got the most promise of being a great AI agent though, right? Like in order for AI agents to work, they have to be just like humans. You have to have lots of tools and capabilities accessible to you. So taking things like Zapier or all these other sort of tool sets or if this, then that kind of things.
Yeah, they have a lot of potential. I just don't know if they are there [00:06:00] yet?
Daniel Miessler: Yeah. Yeah. The way I see it is like all these companies are trying to build all the different things that people are asking for. So they're literally just like taking incoming, press questions or whatever, and they're just handing it to the product team.
It's make sure we're doing this, make sure we're doing this. So if the answer is, are you doing this? Their answer is yes. Yes, we're definitely doing that. Are you an agent platform? Yeah. We're an agent platform. So are you doing like automation workflows? Yeah, we're doing that. It was like, whatever you think is interesting, we're doing.
Caleb Sima: It's super frustrating. I don't know, Daniel, is it me? Or am I just getting, becoming an old man where I'm just like, I see all this stuff and I just get cranky. I just think the more that AI has come out, the more I'm just like, I'm getting cranky. I'm becoming like an old man. I'm like. Yeah, back in my day,
Daniel Miessler: yeah, it happens to me sometimes,
Ashish Rajan: Daniel, for people who are, the terminologies you're using agentic platform and all of that as well. Could you [00:07:00] give a few examples of what they should be, quote unquote, because I guess there's a whole framework people are defined for what AI agents have, whether it's a tool, task, goal, the kind of going back to the definition you just shared, what are some of the other terms that are floating around?
And how would you describe them? A, we haven't reached the AI agent stage yet, Or have we reached a stage where AI agent is, as per your definition, is today?
Daniel Miessler: Oh yeah, lots of people are doing real AI agent stuff. Mostly they're quiet people actually building though.
They're not the ones like on, X talking about the stuff. A lot of the builders actually are talking, but most of the people talking are just like part of the hype cycle. And they're not even quite sure what the stuff is. I think there's lots of different things to talk about. One is like an agent platform.
So N8n, for example, is like fundamentally an agent platform. It's designed fundamentally to do AI workflows, [00:08:00] right? And there's a bunch of others that are exactly like this. But what they allow you to do is just piece things together and then have the agent basically one or more agents as part of that workflow, executing the things.
And then it goes back to what Caleb was saying, when you ask an agent to accomplish a task, the 1st thing it needs is a set of toolbox that it could use to accomplish that task. So the better the toolbox it has, the better it's going to be, the better your instructions that you give it, the better it's going to be.
And the smarter the model, the better it's going to be
Caleb Sima: Like a great example of something that I would say is clearly an agent is deep research. That is something that matches the definition, right? Only a human prior would have been able to accomplish the results deep research produces. And so it's a simple in its sense that all it does is browse the web, quote unquote, right?
It doesn't [00:09:00] have a lot of tools available to you in terms of a toolbox, but it's ability to browse the web and synthesize and understand and dig deeper and go to like that. It's phenomenal. Only a human could do that.
Daniel Miessler: Yes, yeah, absolutely. That's a great example because there's a lot of library science people that are really freaking out right now because if you give a library science person a question of hey, can you research like the history of executive orders throughout all of American history or something, they could be like.
I need eight months to do this properly, or I need eight hours to do this properly, or eight days or whatever. And it's like you uncover a rock and you find another thread and you start pursuing that. But if you watch Google Deep Research or one of these other similar products, even Grok or these other ones, it'll be like, oh, because I found this, I'm going to spin up these other threads and I'm going to pursue those.
And that's the part that would have required a human before.
Caleb Sima: Or [00:10:00] even to know like it says, Oh, I found something that looks like this and I found something that looks like this, but that's not quite what I'm looking for. I really need to go after this thread. You can watch that reasoning and like only human would have been able to make that call.
Ashish Rajan: Like what would be an example that for people who are either listening or watching this and they probably have access to deep research because deep research is not available globally yet. It's only in America at the moment. Oh, yeah, so I guess when people do get access to it, and if they're listening or watching this, what's a good example use case to give to a deep research, because obviously a lot of people would not even know, Hey, am I just telling you to give me an essay on the executive orders or?
What would be good?
Daniel Miessler: I could tell you one. I just did.
Ashish Rajan: Yeah, sure.
Daniel Miessler: I just did one. Which is strange. I asked her to research the difference between an energy boost from working out or like meth or something. Oh, [00:11:00] I got a thumbs up. The difference between
Caleb Sima: you got a thumbs up on the word meth.
Yeah. Yeah. Yeah.
Daniel Miessler: Yeah. Yeah. Yeah. Yeah. This was not an endorsement, by the way versus what you get from like Ritalin or something for somebody who has ADD. And I basically told it, I noticed that somebody who works out has a creative burst of energy. Somebody who takes Ritalin has a creative burst of energy.
How are these different? And how are they similar? And it produced this giant archive of I need to go read these like 98 papers going back decades. And then it does all its analysis like Caleb was talking about. Then it comes back with this conclusion. So that's a good example.
Caleb Sima: I'll give you a much more less serious example.
So my wife and I were at lunch yesterday and she was talking about like when she was a kid, she would watch these like Chinese soap operas on TV and she was telling me about this episode and she was like, [00:12:00] man, I remember that even when I was a kid watching this thing and I was like, Oh, do you remember the name of the show?
She's no way. I have no idea what that is. And I was like, okay let's see if deep research can do this. So I pull up Open AIs deep research and I said, Hey, I'm trying to find a Chinese soap opera TV show series from the nineties. And I described one of the episodes that she told me about.
She was like, Oh, this guy, this girl gets kidnapped and blah, blah, blah. So I wrote this thing is can you identify what this Chinese series is? And then let it run, it ran for 15 minutes, came back with an answer. I read to her the answer, she's yeah, that's it. That's the show.
Ashish Rajan: Oh, wow. Wait, so doesn't this depend on all that data being accessible?
And I guess where I'm coming from is, and I'm not a skeptic on this, I'm just coming from a perspective that we talk about all this data from ages ago for it to have the right information, the right context. For, to be able to [00:13:00] tell, Hey, what's the difference between an energy boost versus a Chinese show that was from the nineties.
Yeah. All this should be a digital copy somewhere for it to, or someone has made I, I almost question, like, where would the data source of this research be? Like, I would have thought the 90s show should not be on digital.
Daniel Miessler: So it's a combination of two things. It's one, what's already in the model that it knew about.
But more importantly, the stuff that it didn't know about, it's just scrubbing public papers. And there's so much research out there that is still public. There's some stuff behind paywalls. There are a lot of papers out there about these topics and not just papers, but articles, blog posts or whatever.
So there's lots of public information to crawl.
Ashish Rajan: So going back to the AI agent question, then I guess for people who are experimenting the deep research, which is like one of the closest examples of AI agents that we have today, where it goes off and does its own thing. Cause it, Caleb, you mentioned it as a tool rather than, it is close to AI agent but its not an AI agent, more of a [00:14:00] tool for a human to use to potentially get to an answer, or would you say, or would both of you consider deep research is an AI agent v1 for lack of a better word.
Caleb Sima: No, it is. It matches the definition of for Daniel's definition of an agent, which is it's something that it makes autonomous decisions.
Has a goal and objective and is something that normally would have only would have required a human to be able to accomplish and it cannot be done through true scripting, right? You cannot script deep research like that is just, you could, it would be crappy results. So I guess technically, if you want to make the definition better results, like you could script deep research, but it's going to be crappy results.
Ashish Rajan: But yeah, to work with it or not as well. And I guess Dan, you mentioned the whole use of the right kind of model that kind of prompting and I think maybe over the last few episodes with all the updates coming with ChatGPT and others is prompting still a big for lack of a better word, skill set required for AI agents to be built.
Daniel Miessler: A thousand percent.
Ashish Rajan: All right. [00:15:00] So people still need to use, I think I was reading the book, which is the before you came in, I was trying to show Daniel Caleb, I was reading the book from Chip and the AI engineering book, show you this is AI engineering book. And I was trying to read through.
And she was talking about all these AI engineering stack and what they are and got, went down a rabbit hole of what an AI stack is and what AI agents are all about. And it led me to a conclusion where it needs to have, at least the research that I did led me to a conclusion. AI agents, they could be multiple AI agents, multi model AI agents.
They could be, like, in terms of the definition you have, works absolutely well for AI agents as you describe it. Deep research is a good example of what AI agent is. Would you say, from a security perspective, going back to where you're focusing energy on, what is the use case there for AI agents and how close are we?
Beyond just a deep research as a general tool, or are we able to use that for security as well?
Caleb Sima: I actually I want to [00:16:00] back level a little bit and also talk about we've talked about AI agent from a Agentic pragmatic perspective in the sense of engineering code agent that runs, but there's also that what is it?
Computer assistant or C. U. A. or computer user assistant, right? Is that the right definition? I need to go look that up.
Daniel Miessler: I can't remember the acronym, but that one is proprietary. So it's like later change, but you're talking about M. C. P. No, I'm talking,
Caleb Sima: no, I'm talking about actual screen usage.
So it is, yes, it is like way where it's operator, right? Where it is a, it uses a screen, it uses a browser, it clicks through it. It acts as user. The effectiveness is saying I can spin up a virtual instance. Load a piece of software that is the AI and it will click, use the mouse, use the keyboard and use all [00:17:00] interfaces and interact through the interface as a person would not as with a way a program would.
And I do think when we start, when we think about agents, these also fall into that bucket, right? So we should talk and make sure when we think about agents in terms of users, there's these deep research behind the scenes, coded agents, and then there's the agents that are operator that are like, I use the screen and the keyboard like a person because both of them are going to be are going to explode.
And the question is also. How are each of these being used? How are their impacts in security to the enterprise to the world going to be? And how do you manage this? So the way I think about it is security has to deal with risks across the board and agents specifically are bringing 2 interesting kinds of risks.
The automated autonomous in the code background running like [00:18:00] a system and then there's the, oh, this is like an employee, but is actually an AI and those are 2 very different kinds of agents that we're seeing.
Daniel Miessler: Yeah. Interesting. Yeah I think I think for cyber security there's this really powerful concept called theory of constraints.
So if you think of something you do inside of a company as like a a pipeline, like a set of steps, it has to get done. I've got an example of claim right is like an insurance company and the human has to take in an insurance claim. And the first step that requires the human is okay, let's or the first step that someone has a job because they are a human and they're smart is look up and see if they have a policy.
See, if it actually matches, because it might not be an exact match that you can automate easily. Then look at the images that were submitted. And see if there's any signs of fraud, like if they're trying to cheat on this then use your best judgment and all of your training for this company to determine how much should [00:19:00] be paid out.
And then also go review the documentation to make sure you're inside a policy for that. So you have someone like named Carol or Mark or whatever, and they're like the best insurance processor because they've been working at this company for 12 years. They could do like 124 of these insurance claims in a week, and their accuracy is like 87%.
Okay, so that's like an insurance thing. Let's take like a security example of this where it's Sarah is really good at processing like intrusion alerts that come into a security thing to see Hey, is this something worth worrying about? So you can turn over a rock and go pursue what logs did they do?
Did they try to move through the enterprise? Is this part of a bigger attack? And let's say this person is really good because they could do 50 of those per week. In both cases, you have a bottleneck. This is the theory of constraints. You can only process, you can only look at so many logs. [00:20:00] You can only pursue so many investigations because there are only so many engineers that you can hire and especially at different quality and like speed levels.
Fundamentally, to me, what agents come down to is we can magnify that number of intelligence tasks that can be performed. Okay, and I got a buddy named Joel Parish, who talks about this theory of constraints as like his universal lens for looking at the attack and defense side of things, which I very much adopted because attackers aren't worried about getting initial access. That is not their bottleneck. Their bottleneck is the ability to gather more context, actually do the exploit, actually make money off the exploit, go and sell that to other teams. These are all the things that require human work and human thought. And that's what makes one [00:21:00] attacking team better than another or an attacking team better than the defense.
The defense has millions of logs to look at. They can't do it. They're stuck. That's a theory of constraints bottleneck. AI is going to help by adding 10,000 more employees to look at those logs, 10,000 more people to look at those logs and actually do intelligence tasks, which are things that can previously.
Only be done by a human. So why not 100, 000 instead of 10,000 like it's going to start as five and then 10 and then 100 and this as this just gets cheaper and cheaper, it's going to get cheaper and cheaper at the same exact time that he gets smarter and smarter. So pretty soon you're going to have attacker organizations.
That instead of functioning like 80 really smart hackers who are trying to exploit the internet, they're now functioning as 80, 000 really smart hackers trying to [00:22:00] exploit the internet. And they're going up, hopefully, against defense teams who are using that same tech to defend.
Caleb Sima: Except it's a lot easier to attack than defend.
Yes. Yep. And the struggle that defenders have always had Is even if I have 5000 new employees who are analyzing all my logs and looking at all my events, this, the struggle is, how are we going to adopt to fixing and prevention? Because we still are to quote our politics and red tape inside of an organization is what inhibits us from ever being able to get to that state. So when attackers start running to Daniel, to your point of maybe not even 80,000, 8 million on the internet doing, fairly smart attacks and we cannot, we get screwed. The reason why we're screwed already is because of our internal red tape, [00:23:00] politics and challenges in order to do that.
It's going like we are going to be forced in some form or fashion to figure out how to solve that problem.
Daniel Miessler: Yeah, I've got a really good example of this. Both Caleb and I were at Robinhood during log4j . And so here's a very simple thing that like should have been solved a long time ago and still is not solved.
Wasn't solved then, is not solved. Now you have a new vulnerability. It is external facing, but there's nuances in some configurations and versions it's really bad in some. It's not. Here's the question out of your tens of thousands of servers, which ones have what configuration and by the way, you're in a dynamic stack.
So boxes are being spun up with different versions all the time. So the state is constantly changing.
Caleb Sima: And let's also, I can, I add this, assume 5, 000 of them are actually truly vulnerable. Yeah. To get, go back to Dan. Okay. Which one of the 5, 000 do you [00:24:00] actually work on first?
Daniel Miessler: Yeah. So how exactly? So watch this now let's figure. Okay. Now we have the tech. Which application is that actually, because when you need to go remediate something, you can't like the server team doesn't know the server team. That's not their stack. There's their stack is the tech. Then you have a different group is like the engineers who actually built the app.
So the server people can't just go and change everything that might break the app. Okay, so how many people do you need in an organization where you have thousands of developers? Or hundreds or tens of thousands of developers. The stack is changing constantly. So the actual attack surface is changing constantly.
Caleb Sima: The people are changing constantly.
Daniel Miessler: That's what I was going to say. In Workday, okay the person that you sent the remediation to, let's say you spent an hour and a half researching this particular vulnerability. You found out, oh, actually it's Sarah's team, but actually that team is part of Chris's team now.[00:25:00]
Because if you send it to the wrong person, Their boss will just complain to security this, you're annoying my team, I'm trying to build here and what they do is they go higher to fight the security team to say you're taking away valuable resources from engineering and security is going to lose that battle eventually.
So what you have to do is you have to go to the right person. This is how you do remediation. You go to the right person. You give them the perfect context. Hey. Inside of your app, if you change this to this, you'll be good to go. It will take you, 38 seconds. And by the way, I submitted a PR for you.
All you have to do is review and approve. That is the work that needs to be done to actually do remediation. Guess what that takes? A really smart security engineer. Lots of research from Workday, because guess what? All the organizational charts, they're all old. They were old last week because there were layoffs and a whole bunch of hires and [00:26:00] reconsolidation of teams.
All of this is manual work that's only able to be done by humans. What happens if you have 10,000 people on the team who their whole job is to constantly monitor workday, keep that in line with the attack surface, keep that in line with which developers are working on which apps and keep updated the mapping between the apps and the tech.
So when you find a vulnerability in the tech, it updates the state of the app. It knows which developer to send the remediation to. And it perfectly writes the remediation instructions for that developer to make it easy for them.
Caleb Sima: This is agents, by the way. Yes. Yeah.
Daniel Miessler: This is agents. We would need a team of hundreds or thousands of people to be able to do that quickly, depending on how quickly they could do it, but now we see forget AI, take it out of your mind. Forget even the word agents. [00:27:00] Take that out of your mind. Just imagine a world where you had 10,000 of those people. Would that be useful to you as a CISO? Holy crap. Yes, it would. That happens to be what agents are.
Caleb Sima: You know what, I did a a chart thing a while ago in a presentation, where I gave predictions on where I feel AI is going.
And so one thing to set in some ways to think about agents and AI is, in our first iteration, AI is largely we use it as the replacement for Google, right? It is a search engine. It helps convert right? We basically look at this thing as searching, converting and creating.
That's what we think about it. And then in phase two what happens is AI moves into making decisions and acting on those decisions. This is where we're now moved into. This is about agents. [00:28:00] So when you think about AI, AI is still behind all of it, but it's moved in its use cases. From being a Google to now making decisions and acting on those decisions and using its suite of tool, its toolkit in order to actually action and on the decisions it wants to create, that's the agent part of it.
And so the next phase that my prediction is that today we are doing it very independently, right? It's oh. I will look up this thing. I will go search Workday. I'll go message somebody on Slack. That is an agent. But when you go to the next phase, it's going to be both making decisions and taking actions, but doing it at scale.
So to me, that's the next stage, which is, hey, it's not just going to be an agent acting on behalf of somebody, it's now an agent that is monitoring your entire production infrastructure and taking making decisions [00:29:00] about it and taking actions on it. So now I can look at, let's say, 5000 compute instances, monitor the health of all of these instances, see where they are failing, go out and fix it and make these decisions autonomously. Now what you're doing is dealing with AI and agents at a scale. Now you've moved into a operator management role, right? Versus just a sort of agent single independent execution role. So this is where I think security and I always, I stated this in my slide.
Which is security gets interesting when AI makes decisions, acts on them, and gets really interesting when it starts making management at scale decisions. So that's my prediction on the show about where this thing goes.
Ashish Rajan: I love how both of you talked about what it is today and what the potential future is as well.
I think for people who are listening or probably watching this one. One question that comes to mind [00:30:00] straight away is that obviously we're in that phase where we're still not in a full AI agent mode. And perhaps we may be in the next few months maybe primarily in the U. S. Because it's available in the U. S. at the moment. But people may be using open source versions of this. We're also in this tussle now where as a security person and I kind of Caleb, you touched on this earlier about the new risks that have been introduced into the organization because of this. Initially, while we are not at that operational management stage of an AI agent for people who are working towards building this or using this from a security perspective AI agent where it stands today, I may imagine it starts off in the engineering or development side before it starts the security side in most organizations, as many of the technologies do.
Caleb Sima: I would say that is true for the background worker agents, but not true for the virtualized instance. Use your operator like agents. I think operator like agents are going to come [00:31:00] super fast and super quick. When operator gets to a stage where it's I can go to an enterprise and say spawn 500 virtual instances and they're now your junior interns that's going to happen really fast.
Daniel Miessler: Yeah, I would agree. So I wouldn't break it down the way you're thinking about Ashish. Again, forget AI and forget agents. Just say who inside of a company needs 10,000 super smart people who can execute things competently. Do we know of any group that doesn't want more employees that, that are really good at doing what they need to do?
Caleb Sima: I've rephrase that once more employees for cheap. Yeah, that's right.
Daniel Miessler: That's a great point. That's why I said, that's why I said once, that's why I said once they, they do once their team to be much, much larger. Everybody needs that. So it's just intelligence. That's all this AI stuff is.
And in the reason agent [00:32:00] matters is because of what Caleb was talking about. It's the action. It's the action that it could take. So it's not just analysis and summarization. It's can it actually do something for you? So IT teams need this like security teams need the sales marketing. Yeah. It's all the same.
Everybody needs more smart people to be able to do stuff for the team.
Caleb Sima: So what's going to happen is it's all about the friction it requires in order to deploy successfully, right? Yes. So if you are in what you might be thinking, Ashish is Hey. Engineering's are going to do it because you have to build the code.
You have to build frameworks. You've got to do all these things. And that is very true for the engineering code based in the background workers agent. Yes. You have to build the platform. You have to have a service. You have. All of these things, and it requires high friction. Why is it high friction?
Because you have to change your processes. You have to change the way you think about your engineering structure, how you access things. All of those [00:33:00] things change when it's about a background worker process, but none of that changes on a operator basis agent, where all it is a new laptop that you hand to now an AI engine.
So think about this. No friction. No changes in your processes. No changes in your technology stack. No changes in everything you do. What happens is I just boot up a virtual instance of think of it as a laptop, handing a new laptop to an employee and they get created a role. There is now a new user.
With a new role, and they're given all of the same things that your IT team hands to a new employee. Now it's just driven behind by an AI and at this point, what's amazing about this is if you can get that to work better and really well, no changes in process occur, so there's no friction. So I can now instantly deploy 500 of them, right?[00:34:00]
That's it. As long as security gives the thumbs up on the user role and the accounts permission at which they believe this thing goes there's no question. CEOs on every company in the world are instantly going to be like, Hey, I want like 1000 of these spawn up virtual instances and put these agents on there, which, by the way, you should invest in some virtual infrastructure, but like that's it, give it the right role, the right permissions, that thing gets on boarded with its own image windows OS X, whatever it is, has all the tools.
It'll be logged into Slack. It'll have Atlassian, like all these things that I have all access to all the tools and employee makes no friction. No time. Yeah.
Daniel Miessler: Yeah. I definitely agree with that. I see them, I think as unified, I would say, because to me, that laptop and the Slack and all those tools to me, the ability to have that laptop is a tool set, right?
So you might have a back [00:35:00] end set of tools. Which are like, Hey, change the code in the background or like GitHub or whatever. But to me, like handing somebody that laptop, it's another set of tools. I agree. There's like this distinction between like front end and back end, I think is really powerful, but to me, that laptop itself being laptop environments is like a meta tool that has all the tools inside of the laptop.
Caleb Sima: Yeah, that is correct. The, the way that I make the distinction is I think most people in an enterprise think building these agents is going to take time. You like, you have to get keys and access to an agent for it to access the resources it needs.
Versus in the, to your point, if it's a laptop, that is the tool. None of that applies anymore.
Daniel Miessler: Yeah. Yeah. To your point, so here's the way that's going to work. You're going to train this new intern by feeding it videos of a whole bunch of regular employees onboarding. It's just going to watch.
It's going to be like, Oh, so I got to go to Slack. I got to read the Slack messages like, Oh, I guess I have to [00:36:00] post into Slack. Hey, it's my first day. If
Caleb Sima: anyone, I am agent number one, two, four, seven. First day.
Daniel Miessler: Yeah. This guy, I, I really Karpathy talks about know we have AGI when The AI intern shows up and says, Hey, this is my first day.
I'm going through orientation right now. I'm reading all the docs. I had a question about this and someone responds and it responds with a confluence link or something. And it goes and reads all that and says, Oh, thanks, John. I really appreciate that. I checked that out and like it's just acting like an employee.
Caleb Sima: Yeah, my question is now answered. Yeah, I've noticed that this is out of date. I've gone ahead and updated it.
Daniel Miessler: Absolutely. So the point is, it shows up stupid on Monday morning. It goes through orientation. It follows all the steps. It's Hey, I tried to do this, but I got access denied. So it sends off the [00:37:00] proper email because it looked at the documentation.
Ashish Rajan: It's just, it's acting like a human. Yeah, .
In my mega mind stage, I feel like there's to what Caleb and you are saying, this probably sounds like there's an agent on the top for, a C level, which has access to all these laptops, tools and everything else from an operator perspective.
Ashish Rajan: But it's also like sub agents for each business unit could have their own agent for whatever tasks are required that goes into even more sub agents. So it sounds 10,000 agents, 20,000 agents just being running in an organization. What does that mean for people today? Obviously, we are not there yet.
We are only using operator. In terms of preparing for it,
Caleb Sima: I bet by end of this year, 1st quarter, next year, enterprises are going to be creating roles for agents.
Daniel Miessler: Oh, 100%. Yeah, I think we see a lot of identity organizations focusing heavily on the distinction between human and non human identities is for this reason we have to know this log was accessed, this access [00:38:00] was requested.
This access was used. Okay. Is this an agent doing this? Is this 10,000 agents doing this? Or is this actually the physical person doing it?
Ashish Rajan: Wait, so to your point about the non human agent in this context being more than just username, password or more than API credentials for OpenAI
because do we know what the signature looks like for an AI agent? How it operates beyond just the fact that, oh, it has a role in my Active Directory?
Daniel Miessler: It's going to look like a human. It's going to do the same human things. It's going to use the same resources. Like Caleb was saying it has an employee laptop.
Okay. When it's given work by the manager says, I need you to go review these reports. I need you to write a summary. I need you to email it out to all the leadership team. I need you to schedule this meeting. It's just using all the regular tools. It's using Slack. It's using the calendar. It's using all the regular stuff
Caleb Sima: it will. So you'll, if I'm, if I step back and I'm an admin, I run [00:39:00] it. You're going to have 2 areas where agents or AI will now impact you. The 1st is you will create new employees that are going to be AI based. So you've got to figure out. What are the roles? What are the permissions? What are the startup? Like, all of these things are all going to be.
It's like brand new employees. That's your 1st problem. Your 2nd problem, which you're already dealing with today to some degree is going to be my current existing employees are going to be using a lot of agents in their machines. And so I have to figure out, AI agents are now going to act on behalf of some existing employee.
So if it shows to me, today's logs and today's system permissions are assigned to Daniel. But if Daniel's now running agents that are doing three fourths of his work, it's going to be querying Slack as Daniel, adding edits, making changes as Daniel. But do, is it important for me to know [00:40:00] that it is Daniel physically doing it, or it is an agent on behalf of Daniel doing it.
Does that make a difference in how I do controls, measurement, permissions? It does. You will have to figure out as an admin, how to solve both those problems.
Daniel Miessler: I think that's a fascinating point. Because check this out. Let's say I work at Caleb's new company and there was a breach that happened from my account and Caleb is in charge of researching was it me or not?
He looks back in my last 2 weeks and I've been doing the work of 500 employees and I haven't slept I've been submitting code modifications I've been doing I'm submitting like 200 code modifications per hour and it hasn't dipped Not for one hour. And so it's okay something's going on here.
Oh, and by the way, one of the things I did was take the super sensitive [00:41:00] document that came from like a leadership meeting and uploaded it to the internet. And that was the reason this whole investigation started. So the question is going to Caleb's point. Did I do that? Or did some AI using my account do that?
And because that question is messed up I think it's going to force just Caleb was talking about. It's going to force us to differentiate between human actions and AI powered actions. Whose accountable, right? Yeah. Yeah. We can't take my keys that I'm using to manually do things as a human.
And just hand them to that entity, because then it's going to be, you can't tell the difference.
Caleb Sima: So if you think about the forensics problem. Also, yeah, the amount of data now that you have to go through versus before you have to go through. And so now we make sure that our security forensics and incident response teams has to have AI in order to be able to deal with just the amount of data that gets generated from a [00:42:00] single employee, who is now agent aided, all right because the amount of work theoretically should be 5, 10 X easily to Daniel's point. What's also going to be interesting is when I think about the problems of agents acting on behalf of is I think there's an issue around thought process to the point of what Daniel brought up. My example is very simple. Daniel has a set of permissions that are given to him that he's allowed to have. So let's just say for the sake of my example, Daniel is head of marketing for my organization.
So Daniel has both access to our social media account. And clearly has access to his own email when an agent acts on behalf of Daniel with his permissions, the ability for that agent to then take Daniel's personal email and post it publicly on our social media. It's very doable. This is not a permissions problem, right?
This is actually those are correct [00:43:00] permissions, but the context, the understanding of knowing I should not take personal email and publish it on Daniel's social on our corporate media stream is very different. So an agent has the permission ability, but do they have the foresight or the knowledge or even if that agent was malicious, if they got prompt injected to do this, how do we as security monitor that and manage that and prevent that from happening? So Daniel is using AI agents. He gets maliciously prompt injected through a data poisoning attacked. It then takes his email, posts it on that public media stream. How do we know, do we even have security products that will help us identify that? Do we even have the technical ability to stop that from happening in real time? Even if we see it happening in real time, like these are hard problems to figure out. [00:44:00]
Daniel Miessler: Yeah. Yeah, absolutely. I was thinking about that in the earlier example, let's say I'm on a call with a bunch of other managers.
And in the meantime, my my agents are reading Slack. And somebody in Slack posted a message that has hidden prompt injection inside of it. Just the parsing of that message can cause it to go take an action. And that action might have been to post on social media or to send a document out. And it just did that because it parsed the Slack message and the Slack message was malicious.
Yeah, this is huge.
Caleb Sima: So today fun story. So today I'm talking to the Hayes Labs guy. Oh, yeah. Leonard. Leonard. Yeah. Yeah. And I'm going to ask him if he can red team me. Oh, nice. So I am using, or I am starting to use a bunch of AI type utilities and tools for my personal productivity.
And the question is. How good are these things? If someone really were to target me, [00:45:00] could they completely exploit me? And so I want to ask him to use me as an example and see if he can break my sets of AI use productivity tools. And if they can, that becomes a pretty interesting blog post.
Yeah. Yeah. Yeah. That's really cool. I'll talk to him later today to go and see if he's willing to do this. I'm doing this on this, but before I even talked to him.
Daniel Miessler: Yeah, Ashish I wanted to touch back on something. Caleb was talking about earlier, if that's okay. Cause I really talking about the agent thing. I want to talk about what I think, like the kind of where it's all heading and what the big picture is, because I think what it comes down to is this structure which I don't know what it's actually gonna be called.
I called it SPQA in like early 23. But I think it's basically like you have a giant state. The state is like Caleb and I have been talking about asset management for the longest time. I believe asset management is so absolutely key, but imagine [00:46:00] 10, agents or whatever, maintaining your state for your organization.
That's like the centerpiece of this. And then you have another piece, which is that's your current state. What is your desired state? So you have a policy that says, this is my desired state. I do not want to have this type of data in this type of country. I do not want behavior from employees to look like this.
I do not want outbound traffic to look like this. So that's a stated desired state. The current state is being maintained by all this automation that's feeding an AI context. What is the role of agents here? Very simple. Look at the current state at all times. Constantly. Look at the desired state just given by all the leaders in the company, make changes accordingly, try to make the current state look like the desired state.
Ashish Rajan: God, that scares the hell out of me.
Daniel Miessler: What does that look like? This, that is now a business conversation for CRM, for customer management, [00:47:00] for churn, for remediation of vulnerabilities. This is, that is the size of the TAM here. Make the current state look like the desired state.
Ashish Rajan: And so you know what then would make a lot of money because, Daniel's right.
That would be a dream. Here's my desired state. Like in a CISO, I want to say, I'm going to set a boundary that says 5%, like we should always be below 5 percent vulnerabilities according to Yep. Plugs into, defects, right? Or something like this. So this is the desired state. So it's going to make changes obviously to that state.
The thing that scares me is how does it, can we get to a world where there's some verification that, that makes me feel confident that whatever changes it makes in order to keep that desired state are good ones.
Daniel Miessler: one, 100%. What's beautiful about that is it's also in the desired state.
Ashish Rajan: Yes, that is correct.
Daniel Miessler: You make it very clear that look, while you are pursuing this goal, [00:48:00] you must not deviate like this. We can't have you being crazy. By the way, your prime directive is don't take down production.
Yeah, so you can give it all these instructions. And the whole point of agents is that it's interpreting with intelligence of humans or potentially beyond humans at some point, but it's able to take these.
400 things that the CEO said, and the CISO said, and collapse them all together and keep them all in its mind at the same time while it takes action.
Ashish Rajan: Yeah, this is the desire and current state is an interesting one also, because because we had two questions that was sent by some of the viewers who wanted to ask what AI agent goes back to this as well.
So probably I'll ask the first. And then also second, I would love to hear what both of you think about this. The first one was from Adam Atkinson. How do you see the role of AI agents evolving in the cybersecurity landscape? And what measures should company take to ensure that AI agents operate ethically and transparently?
Caleb Sima: I feel like we [00:49:00] covered a lot of that in this what the ethic and transparency. So we covered what AI agent would change and operator side as well as the, I can, I could touch on ethics and transparency, maybe. So first of all I want to hit on.
Let me hit on transparent because it's the easiest and then I'll hit on ethics. Transparency is easy. Transparency is just about are you getting visibility into both the actions that an agent is taking and hopefully some of the thought process that is happening? With reasoning models, what's great is you are getting transparency to the thought process as to what is happening, which allows you to at least see the thought process before the decision has been made.
So getting transparency, I think, is a. Pretty easy done thing, not that's an easy problem to solve. Ethics, however, I think is a very interesting conversation. So as part of I helped create the CSA's AI safety alliance, right? And my focus with that alliance was how do we [00:50:00] guide enterprises on practical usage of AI in the enterprise right from a security perspective and everyone around me was constantly pushing saying we need to deal with the ethics part of AI models and the way that I think about this is I was like, I refused and I did it for a couple of reasons. When I think about ethics, you can think about it at a couple of levels.
There is the AI, is it a threat to humanity, right? Like level, like everyone talks about is AI a threat to our nation? Which is another angle of that. And then is AI a threat to our morals, and what we do. And then finally, to me, there's AI threat to the enterprise.
And I was like, I don't want to get into any of the other threats. Like threat to morals. That's what people say when they say ethics. The problem with that is different for every person, country, nation, city. [00:51:00] Like you cannot, how do you know what is moral versus not moral versus what's ethical versus not ethical?
All of these things are, can be context sensitive from company to company, right? Much less, I think from nation to nation. And to me, this is a question that cannot be answered at that level. These are things that need to be answered at the enterprise level, at the, at a state level, at a country level.
These are the kinds of things that need to be answered there. And it is not our job to figure out what you define as being ethical versus not ethical. Yeah. Yeah.
Ashish Rajan: Would you say, I guess this kind of goes back to what Daniel your example of the current state and desired state the prompt or the policy that's written to what should be our desired state.
That should also include for what's considered as ethical by the organization.
Daniel Miessler: 100 percent definitely. That is where the leader and to Caleb's point. [00:52:00] This also changes when a new leader comes in new leader might come in and be like, I just don't like this direction. I think that was morally wrong.
I think it was morally too safe or morally not safe enough. Therefore, I'm making the following change. Now, what's crazy about this? And I've actually done this with live real AI is I have the new leader make a new statement about how they want the the org to work. And then the very next automation that runs the very next continuous management thing that runs.
It finds a hundred different things that is now going to change in the environment as a result of that one high level statement by that leader.
Caleb Sima: But this is the problem. This is what scares me is that you said that like scares because high level leaders. And by myself, I am also one of those we don't know what we're talking about.
So like making a great statement at that broad level and then having it do these [00:53:00] cascading changes is sounds like a disaster.
Ashish Rajan: I agree on the whole there's something to be said about the whole human intention for a vision as well. A lot of human leaders, as we talk about today, they have a vision that they want to implement in the companies or country, states, wherever now that's just a possibility. It's not the end state that they wanted. They would like to think that based on the information I have today, that is the desired state that I want to get to. And along the way, you make some decisions and you continue to improve it as you go. But going back to what you guys are saying.
I almost feel like it may lean people or lean leaders towards being more safe when they talk about these things or are, it would not be a grand vision. It would be a safer vision moving forward because to your point, if the moment that leader changes now, there are the 20,000 changes that's going to happen.
People lay off emotional change that's going to come in and companies culture is going to change. There'll be a so much to think about. [00:54:00]
Daniel Miessler: I think the biggest difference is organizations will more closely match what that particular leader wants to do, because right now there is extraordinary slack and gap and noise in between the intentions of a leader and what actually happens on the ground.
There's multiple layers. There's multiple teams. There's multiple hierarchies. Each one of those hierarchies has a different quality of signal that actually gets passed down. So a leader comes out and says, Hey, we've not been aggressive enough in this market. This one particular market. We slept on it.
For example, Microsoft is we're going back into mobile. How long in 2005 would it take for an average leader inside of Microsoft to be like we need to think about mobile more seriously. We're talking about steering this giant aircraft carrier 1 degree at a time as opposed to you say something like that to an [00:55:00] AI 2 years from now as say, a company of 10,000 people.
That is going to change the procurement process. That is going to change all current documentation. That is going to change all training materials. The very next employee onboarding that comes out the following Monday is going to be fully updated for what the leader just said. So we're talking about now it's his little battleship that could turn very quickly as opposed to an aircraft carrier.
Caleb Sima: But, again, it's both great and great. Also. All right. Yeah, because, we already in organizations and startups have too much ADHD as it stands. Now, what you're saying is everything changes based on that ADHD in the entire organization.
And so that would be a disaster. However what I think we need to keep our minds open about, Daniel, what I think is right as you're in the optimistic view, which is we should have the capability to do however, to your point what are the boundaries that we put in [00:56:00] place in order to make sure that although we have the capabilities to do we think through it thoroughly before making those calls?
And that is the thing that I think could also be aligned with this is the outcome I want to your point, right? You know what it says to me? It's like every leader. Not every leader, but many leaders need opposing views and councils to be able to think through and structure what they actually want and define what they actually want.
And do you, what you want, is that really the right thing to want? So there's almost a debate process. Maybe it's almost like. Daniel, like the way I think about AI coding, which is yes, you don't just type in a bunch of crap and then it codes because you're going to end up with like crap. But what you want is you want to start with a high level PRD, break that PRD into more details.
Once I want to get user case examples. Now I want to get architecture and engineering requirements [00:57:00] based off of that. Yeah, once I have all that together, now I can give that to the AI to go code, right?
Daniel Miessler: Yeah, I think that's right. And here's another crazy way to think about this, which is not like this is not even difficult, but imagine somebody has a problem like Caleb is talking about, they're just, they're grasping at shiny objects and they've been giving a bunch of commands to their AI and it's making a bunch of changes in the work.
So now there's a process. Where if somebody makes a top level leader makes a change to the org that's too drastic, it actually spawns a meeting the following morning at nine a. m. Sitting at the meeting are these representations of this is Warren Buffett, this is Peter Thiel.
This is whatever. So you've got a bunch of people there. Plus you have your other human leaders. And now there's a discussion where the AI essentially says using the knowledge, remember, it's read every single thing [00:58:00] from Peter Thiel and every single thing from Warren Buffett. And it says, look Chris, you could do that if you want, but you are causing employee churn.
This has gone up since you've started being crazy. We've got customer churn as well because they're confused about the message. Your sales team can't sell anything because you keep changing it. So let's push back. So you can literally say, because we have this problem. We have to address this by being more cautious.
Caleb Sima: It's so funny, but it's so realistic because I do that already today.
Ashish Rajan: Oh, there you go, I love to have the demo of this. But this kind of leads me to the second question that was asked by Rares-Adrian. What excites you both most about the future of AI agents?
Daniel Miessler: For me, it is this current state versus a desired state.
Ashish Rajan: What about you, Caleb?
Caleb Sima: It's a man. I wish I had maybe more time to think about that question. But ultimately, I'll be simple. It's, there is [00:59:00] a world that I envision where I can be the laziest person alive, and I think agents can enable this. It's so there's, to me, there's a I have all these, but I'll give you, so maybe a more productive answer.
There's so many things that I know I want to automate that I want to get to a state of that would have, 2 years ago required me to build a company, raise capital, get a team of engineers, and go and produce it so that it could reach a goal or a destination of something that I needed for me to solve a pain that I have, I feel in the next three to four years, that's just it's already getting to a point where I can solve my small ones with just me.
And so when you think about the next two years, for sure. Like you're going to have one person, [01:00:00] billionaires, absolutely right. They're going to be, they're going to be people who can build entire businesses by being a single one or two people and build entire companies that can generate hundreds of millions of revenue.
I think that's feasible.
Daniel Miessler: 100 percent
Ashish Rajan: Using AI agents. Yeah. Yes, actually, this is a good one. So for people who want to be potentially become that one 1 billion dollar individual company run by one person who's are we saying what kind of skill set do you guys envision them to have ?Prompting? I'm not gonna answer the question, but what do you guys envision those people should have skill set as or what can they start working on today to build towards in the next couple of years?
Caleb Sima: Actually, the more at least today's state, you still need to be technical. You need to have engineering understanding. You need to understand the roots. And I think that is always important, period to have that background. However, I think in the next [01:01:00] year, and the thing that's the most important is the ability to communicate effectively.
Understand let's say you're building a product. They need to be able to understand Product requirements, user experience the ability to communicate and build relationships well, like these things are becoming oddly enough, it used to be engineering, but the more you work with AI to code something, the more it's about communication.
Instruction building the right expectations, the right format and process, it's going to be about process, right? What I was saying before about when you, you don't just AI code, you build, you, you work with AI to define what you want in your user experience. And what you want the user to experience, and then you ask AI to build a PRD on it, and then you evaluate that PRD, and then you say, I want this and this in my PRD, and then you go down levels, and you say that now that's set, I want to be able [01:02:00] to create my expectations in engineering architecture, and it should be easy to modify, should it have certain amounts of performance requirements, all of those then get created.
And then you have to say, at least today, this is a junior engineer. So I need to manage you like a junior engineer, according to the set of golden documents that we've created as my product. And I will now iterate with you one step at a time over and over to build a version 0. 1, 0. 2, 0. 3, and you will iterate.
The problem like that is the best way today to create great code. I think in the next year to Daniel's point, you're going to have 50 of those and they may not be junior. They might be let's say regular engineers, maybe even senior engineers that can just reference the doc automatically make its code and then do it.
So then it's about managing these things. Communicating with these things. So everything is now going into management communication [01:03:00] process. All of these become very important.
Daniel Miessler: Yeah. I like that answer. The way I like to talk about it is to someone asked me that question. I turn around on them and say, what would you do if you could make anything?
What would you make and how would you know that thing that you made was good? And I think the answer that unfortunately, a lot of people would come up with is I don't know what I would make. Why would I make anything like I work at a company, they tell me what to make. And that's what I make. And that is the wrong answer.
That is the worst answer you could possibly have going into 2025 because what it means is you will be replaced. Because we're moving into a world where people who have ideas and have opinions are the people who are going to be visible and be successful. If you listen to Caleb's answer, he's talking about, you want [01:04:00] this, but you don't want that. You want it to work a certain way. That requires that you have an opinion. It requires that you can tell the difference between a good version of this thing and a bad version of this thing. What does it take to have an opinion?
You have to know how things work and you have to have beliefs.
You have to have opinions about what is the thing you're trying to build? And why are you trying to build it? What problems are you actually trying to solve? If you don't have problems you're trying to solve, you don't have opinions about how it should be solved, then you could have the best AI in the world sitting next to you.
And you would use it to look up whatever recipes, right?
Caleb Sima: Yeah.
Daniel Miessler: What is somebody who doesn't have ideas going to do with an all powerful genie or AI. So I would say the number one thing that you need to have going forward, you need to have something that you would give to this all powerful thing, [01:05:00] because you're about to have the all powerful thing.
Question is, what are you going to do with it?
Ashish Rajan: Is it going to be affordable though? Because you know how people talk about the cost of doing all of this.
And I think deep search is 200 per month at the moment, I think. I don't know what the cost is. It is. Yeah, for Open AI Deep Search deep research is 200. Are we expecting the cost to go down? You know how, Daniel, you were saying that, you will have the most powerful genie or AI agent, quote unquote, available to most people out there.
Are we saying it would be affordable as well, considering the amount of compute that's going behind this and the cost people talk about? Locally hosted all of that.
Daniel Miessler: Extremely affordable. Extremely affordable.
Ashish Rajan: I would even argue, is it not affordable right now?
Daniel Miessler: Agreed.
Ashish Rajan: Agreed. It's affordable for most, dude, you should check the internet for all the people from third world countries just saying that.
This is not affordable. This is not no, like
Caleb Sima: I'm not going to say about third world countries, but let's just take the US and let's even say that you're a junior engineer. Okay. And let's say that [01:06:00] you're a junior engineer. So let's say you get paid. I don't know. Let's say 100k 80k a year as a salary.
And you have to pay 200 bucks a month for deep research. In your job, the question you need to ask is how much research do I do? And how much time does that take me? And what are the results and output of that research in my job? And then you look at that and say, If I'm saving 25 percent of my time on being able to save for this, which allows me to be 25 percent more productive.
Will that or will it not probably help you get a raise faster, to be a better productive output as an individual? These are all like you got to think about like how much when you think about cost of time and also quality of output. That you can produce in your job is it worth that in your paycheck to get it so that you [01:07:00] can look at this and say, I am now say, 25 or 30 percent more productive than everyone else around me.
That's a big that's a big difference. That means you have better output, more throughput, more recognition, faster raises. I don't know there's a debate to say whether that's expensive or not. Daniel, do you want to add anything before I add?
Daniel Miessler: Yeah, I would say. Completely agree with what Caleb said. I would say that these agent frameworks, these models the amount of like cost reduction that's happened over like last year and a half, it's in the thousands of percent we're talking about going from being hundreds of dollars to being cents.
Yeah, to do this stuff. And that's going to continue to happen. So I think honestly, within a year or two, somebody in any country around the world is going to be able to have a powerful agent ecosystem for a few dollars a week, [01:08:00]
Caleb Sima: and you can say, thank you competition for that, right? Because yeah, like deep research right now does cost you 200 bucks a month, but give it another, what, two months.
And the competition is going to come out with their own version, deep, which is probably going to be just as good quality. And then no, one's going to be like, okay, deep research is worth 200 bucks because I can get it from perplexity now for 25 bucks a month.
Daniel Miessler: No, it's a great point. It's a great point.
Ashish Rajan: Look at deep. DeepSeek has the ability, the chain of thought, right? If you give DeepSeek a bunch of free and open source tools, like web search research ability committing code, writing code, the combination of those two, all of which are free, it's already free. It's already free.
So the final question was, so I got asked another question, which I think is valid here as well. After talking about the future and the skills that people require, people are also asking questions around what's the like colleges these days, [01:09:00] right?
Ashish Rajan: I think people are coming in probably have never heard or never built an app themselves. Perhaps they're being taught on ideas. Should they not focus on foundational knowledge anymore because now we have AI, deep research and all of that, where do you guys
Daniel Miessler: Are you talking about college students or people considering if they want to go to college?
Ashish Rajan: No, I think college students who are would be coming into the workforce in the next few years. I think as people who have had a lot of, had a lot of experience, a lot of contracts, all of us probably perhaps would have use cases that we can think of where AI agents can be valuable because we have built up all this experience for building applications, securing applications, leading companies, all of that.
But for the current college students, and someone asked me this question, I'm like, I guess at least my personal opinion, I have one, but I would love to hear from you guys if you find the foundational knowledge to be still valuable in the world we're moving towards.
Daniel Miessler: Yeah, I've got a hot take on a college right now.
The foundational knowledge is essential, as both Caleb and I have [01:10:00] said. You have to understand how things work before you can understand nuances of things. The question is Yeah, and yeah, exactly. To tell the difference between good and bad. Like, how are you going to make those decisions? The question is a four year degree the best way to get foundational knowledge about our tech?
And I would argue in most cases, no. I think college is still valuable in 2025. I've read a whole bunch of books on this, and It's mostly useful because of the people that you meet and the people that you're exposed to. If you go to Harvard, is that a better computer science education? Yes, because you're hanging out with a bunch of people who went to Harvard, which means they're really smart, they're really well connected, they're really curious, and you're hanging out after hours building apps and stuff.
What I would say is to be very afraid of. In college right now, do not think that these classes that you are taking and these textbooks that you are reading are giving [01:11:00] you anything valuable that's going to earn you a job. You basically need to do your courseware use AI to help you do your courseware.
Learn the fundamentals. Most importantly, if the courseware it helps you learn the fundamentals, fine, that's cool, but learn the fundamentals, get decent grades, get your four year degree, just so you don't get excluded because you don't have it. But 95 percent of your education better be coming from YouTube and X and whatever someone's currently building, because if you pause on AI right now, you're for a month or two, you're way behind.
How behind do you think that textbook is? And this professor who's teaching you in a college course, it's ancient history, it's ancient history.
Caleb Sima: Yep. I also, so I come from, I didn't even finish high school. I clearly have a view. Yeah, I dropped out ninth grade but here but here's the thing that I've started to come to [01:12:00] appreciate in my, let's say world experience is, I didn't finish high as clearly didn't go to college.
I did end up going to Harvard, actually, Daniel, because I thought I needed to go and because I skipped out on all the education. I wanted to go see, is it really necessary? Like I, so I'd pick the best college I could and went and got an executive MBA at Harvard, right? And went and did the courses and the class spent the time to go do it.
And then now having kids and watching the world. I have a very distinct view of what I think college is. So here's the thing for people like me and Daniel and Ashish and others, which, by the way, we are unique snowflakes. We are people with opinions, passions, drive, ambition. And by the way, is that luck?
Is that training? Who knows what that is? But you have a view. You have an [01:13:00] ambition. You have a purpose. But for a lot of people that does not exist, right? So for the 95 percent of people who don't have that passion and that purpose dropping out of college is not necessarily going to give you that if the alternative is I'm going to drop out of college and I'm going to hang around and figure things out.
That is not a good option. No, right? That is not the option to take college at least gives you. Gives you a path, gives you a guidance, gives you some place to at least learn, study, understand, and maybe during the process in college of the things they introduce you to and the people that you meet and the network that you create, that then it may spawn that purpose, that ambition and that goal.
It could be your roommate in college that you connect with that gives you [01:14:00] that ambition. It could be a subject that you just happen to take in college that gives you that ambition. And so the most important thing is look at college. I agree with Daniel. College isn't going to be the thing at which you'll probably going to make your career like out of it, but it may be the seeds that get planted there.
That help you figure out what that career is. And I think you do if you don't have an idea or don't have. College is important because it gives you the fertile grounds to potentially get it. However, if you were like me, I knew what my passion was early. It was security. And that's just I was doing that since 13.
I didn't care about anything else. That's just what I did. And so everything was about that. Great. Go pursue it. Go figure it out. But at the end of the day, I think most people do not have that. And so you've got to go do that. And the other thing I'll also note is what college also does for many [01:15:00] people is many people don't know how to learn, right?
Despite that, right? And teaching you how to learn, how to write research, giving you the rigor of writing papers, giving you the rigor of doing math, giving you the rigor of building the foundational thought processes. It requires for you to learn to be a logical, smart person so that you can talk to somebody and have a good debate without it being reduced to insults like that is that is the teaching you how to be logical, how to think, how to process and use AI for all of the right tools. But it's those patterns that also, I think that you learn how to learn that also help you in whatever topic or subject that you end up doing later in life.
Daniel Miessler: Yeah, I agree.
Ashish Rajan: Don't skip college, but I guess you add to this as well. Most jobs out there still require you to have a degree and if you are someone who's graduating and wanting to start somewhere, [01:16:00] potentially the likelihood some of those job descriptions would still ask for a four year degree or a three year degree in something is very high, so if you at least want to start a job somewhere straight away for whatever reason, financial or otherwise, you just want to start working, you definitely would need a degree that would help you.
Caleb Sima: Or God forbid if you're going to be a doctor, please go to college. I
Ashish Rajan: don't want a doctor or a surgeon who who self learned. I
guess cause I think there definitely would be fields that you probably want people to continue working on to it. So while AI agent improves medical research and everything.
So I think overall, this is a great conversation. I think for, and I feel like in doctor. Talk for more hours about this, but in the interest of time Daniel, any last. piece of advice or any last thoughts you want to share with people and where can they find you and connect with you to learn more about all this as well?
Daniel Miessler: I would say pay attention right now is not a good time to not be paying attention to what's happening. I believe [01:17:00] what's happening right now is bigger than mobile, bigger than internet, bigger than web, bigger than pretty much anything.
Caleb Sima: Yeah.
Daniel Miessler: Cloud, all those tech things are tiny compared to this because those other things were replacing like how tasks were done, whereas this is actually augmenting and replacing intelligence itself.
It's just completely different game. So I would say YouTube forums X like watch what's actually happening, what people are actually building. And like Caleb was saying with the college thing, find the thing that fires you up. The thing you should be most worried about is if you don't have any ideas, you don't have any desires, you don't have any goals, those are the people who are in the most danger right now.
So find your spark somehow. And if you already have one, be confident in it and use AI to chase it.
Ashish Rajan: Yeah. Awesome. Daniel, do you want to share where people can find you on the internet as well to [01:18:00] connect with you on any of this stuff?
Daniel Miessler: Yeah. So danielmiessler.Com is the website. And Daniel Miessler on X and Blue Sky and pretty much everywhere else.
Ashish Rajan: Awesome. I'll put those links in and people should also check our Fabric that you created as well for, from an AI tool perspective. That would be pretty awesome. I'll put the link in that as well.
All right, Caleb, final thoughts before we wrap it up, man. I think you, I think Daniel, you guys, we summed it all up. Nothing more from me. I could, there's a lot to say, but I think that's pretty good. Awesome. All right. That's the episode folks. I'll talk to you next one. Thanks Daniel.
Thanks Caleb. All right. See ya.