How is AI changing Detection Engineering & SOC Operations?


AI is revolutionizing many things, but how does it impact detection engineering and SOC teams? In this episode, we sit down with Dylan Williams, a cybersecurity practitioner with nearly a decade of experience in blue team operations and detection engineering. We speak about how AI is reshaping threat detection and response, the future role of detection engineers in an AI-driven world, whether AI can reduce false positives and speed up investigations, the difference between automation and agentic AI in security, and practical AI tools you can use right now in detection and response.

Questions:
00:00 Introduction
02:01 A bit about Dylan Williams
04:05 Keeping up with AI advancements
06:24 Detection with and without AI  
08:11 Would AI reduce the number of false positives?
10:28 Does AI help identify what is a signal?
14:18 The maturity of the current detection landscape
17:01 Agentic AI vs Automation in Detection Engineering
19:35 How is prompt engineering evolving with newer models?
25:52 How is AI impacting Detection Engineering today?
36:23 LLMs become the detector
42:03 What will be the future of detection?
47:58 What can detection engineers practically do with AI today?
52:57 Favourite AI Tool and Final thoughts on Detection Engineering

Dylan Williams: [00:00:00] We have the AI behind the scenes doing all these things all the time. But I think maybe this isn't far-fetched, to think: what does a detection engineer's or practitioner's day-to-day look like in five to 10 years? Maybe they're more like the head of D&R, strategically. And it's no different than if you have a bunch of teammates or junior employees or interns: you want to write a new detection.

You want to improve blah, blah, blah. They're tasked with doing these things. It's going to be the same relationship, except the difference is the AI has the maximum expertise and they're doing it everywhere all the time.

Ashish Rajan: Detection engineering has been a challenging field for a lot of security operations.

A lot of it has always been around trying to find what signal we care about, and which ones are true positives. And you may be thinking, why am I talking about this on an AI cybersecurity podcast? As AI is evolving, detection engineering, the D&R space, is definitely evolving and seeing a lot of impact from AI as well.

We have Dylan Williams, who came and spoke about this particular challenge in the detection and response space: how AI is impacting it, what the potential future for it is, and what is practical today. Are we at that stage where [00:01:00] AI agents can walk around and pick up a detection rule for any endpoint out there, because it has to understand all the context and blah, blah, blah?

But why not hear it from Dylan himself? Overall, it was a great conversation. If you are someone who's looking into the detection and response space, how AI can impact it, and perhaps what the true reality is today for using AI to do detection and response and write rules for it: all that and a lot more in this episode of the AI Cybersecurity Podcast. If you are listening or watching for the first time on YouTube, LinkedIn, Spotify, or whichever your favorite platform is, I would really appreciate it if you could subscribe or follow on the platform you're on.

It helps us know that what we are creating is actually helping you. I hope you enjoy this episode with Dylan. Let's get into it. What up people? I don't think we've ever started like that, but I think it's just

like hyping up the people over here. Hey, welcome to AI Cybersecurity podcast.

Today we have Dylan Williams. Hey man. Thanks for coming to the show.

Dylan Williams: Hey, [00:02:00] thanks guys. Thanks for having me on.

Ashish Rajan: Actually, to kick it off, could you just give a brief intro about yourself, man? What are you doing in cybersecurity?

Dylan Williams: Yeah, sure. So pretty much my whole InfoSec career is like blue team stuff, right?

Typical SecOps, the enterprise security team. I didn't really go more niche than that. So I've been like an operator for almost 10 years. And then just over the past couple months, we're starting a brand new company tackling some of the core problems around detection. So I'll be deliberately ambiguous there for now, but some really exciting stuff that we're building.

Ashish Rajan: So that's, yeah.

Actually, now that we mentioned detection: before we started recording this, we were talking about what's exciting you at this point in time. Is there anything specifically in AI that is exciting you right now?

Dylan Williams: Yeah, I think it's acceleration. I've had conversations recently about how the past couple of years have been very similar to when the internet, or the search engine, or Google came out. For people like us [00:03:00] who are so in touch with all of the AI stuff out there right now, it's insane.

The speed, and the access to things that we just didn't have before, right? It could be your personal life, it could be your professional life. But my favorite example is this: I've talked to my previous co-workers, and there are definitely some skeptics, which is totally healthy, totally valid.

But are you going to sit here and go on Stack Overflow for hours, or Google for hours, to solve a problem? Or do you have this incredible tool literally at your fingertips to get the same result in a fraction of the time, right? That's how I look at it.

Caleb Sima: There are people who use AI to their advantage, and then there are people who don't have a job.

Yep. That's the

Dylan Williams: Exactly. Yeah. I think the next 10 to 15 years are going to be so interesting, even across different ages, right? Kids are growing up in not just an iPad-native world but an AI-native world. The tools that we use every day, it's going to be baked in there, whether you like it or not.

And people are just going to be, like you said, you're [00:04:00] either using AI. Or you're not, and you're going to fall behind. So it's like this fluency thing that comes with all this.

Ashish Rajan: So, actually, talking about keeping up with things as well: if anyone goes to your LinkedIn profile, they'll see this endless amount of research and study you're doing and sharing with other people. How do you keep up with AI, man?

It feels like, as much as I am trying to stay on top of the news, I don't know, o1 or something came out today and I'm like, okay, there's another one. And I'm like, now, how do I use it? DeepSeek was the newest thing yesterday. Obviously you're in that kind of adjacent space of agentic and detection and all of that.

How are you keeping up with all of this, out of curiosity? Because it's easy to get lost in the noise being created around all of this as well. How are you keeping up with it?

Dylan Williams: Yeah, it's physically and mentally exhausting. Actually, I'll share this blog I came across called AI Fatigue. It's this new thing, so I'm sure you guys can relate to that.

It's literally a real thing, so I feel

Caleb Sima: I think we're also adding to that, by the way. We are adding to AI fatigue ourselves right now. [00:05:00]

Ashish Rajan: Reliable sources of AI fatigue is the AI Cybersecurity Podcast.

Dylan Williams: Yeah. But even for myself, and I talk to people about this: LinkedIn is actually pretty good. I know it depends what kind of stuff you follow, but if you want a really quick dose of what's going on, that's a really quick and easy way. There are hundreds of newsletters out there, right? And I have several AI newsletters that are summarizing the summaries of all these newsletters.

So it's very meta, but you've got to pick one, because it's so exhausting to have nine different platforms and newsletters and blogs. But I'll give you a few of my secrets that let me get to the info I want a little bit quicker. One of them is a platform called XAI.

Their whole value prop is that they have their own version of the web. So instead of a Google or a search engine, it's like RAG over the web; it's all of their embeddings. And when you're looking for stuff, it's semantically so much better than just looking at a list of search [00:06:00] keyword results. So that's a pretty cool way.

Because I know if you give it a try, you'll find stuff that you won't find from the other search engines. It's pretty, pretty cool. Another one is a group of AI agents that searches a bunch of Discords, Reddits, Twitters every day and gives you a summary of that, right?

Cause that's where the juicy stuff is, right? Not necessarily like published stuff. You want to see what's going on the social platforms, but yeah It's exhausting.

Ashish Rajan: And would you say, because you've obviously narrowed down into the detection space yourself, and a lot of the conversations you seem to be having are around the detection space, as you mentioned earlier. I'm curious, traditionally being an operator, what does the non-AI world of detection look like? And now that you've been researching and diving deep into what AI could do here, and AI is also helping accelerate that a lot more, maybe you can describe for people how detection is looked at today, and what that could mean with AI.

Dylan Williams: Yeah, a hundred percent. So I feel like status quo for detection [00:07:00] nowadays is, there's the technology part of it, right? Like all these tools that people have in their stack that are generating signals for the SOC to respond to, right? And then the other is the human aspect. And the human aspect is still very manual for all sorts of sizes of organizations, right?

What's different about it from other roles, I feel, and this is similar to a lot of niche specialist roles in security in general, is that you need to know a lot of different areas of specialized knowledge, right? Coupling that with the manual nature, writing a detection becomes a really long, exhausting, complex task.

You kinda gotta be a threat intel expert in some form or fashion, right? You don't have to be crazy, but you gotta understand the tradecraft, right? You gotta understand how to write the dozens of detection query languages, depending on the tools that you're using. And you got to know the logs.

Everything you do is logs, right? So I feel like that causes a lot of this context switching and pivoting. But the cool thing about AI is placing it strategically at little pieces in [00:08:00] that whole workflow these people do; it'll help you speed up a lot. So that's where I see this going: less manual, with certain pieces more accelerated.

Maybe one day we get to the full self-driving car, so to speak.

Ashish Rajan: Actually, to your point about the detection space being interesting: the way I've always seen it, there was this whole world of, oh, I only care about what's current right now, that used to be filled with false positives. People hired into SOC operations would usually spend a lot of their time just trying to validate whether something is a false positive or a true positive. That used to be 80 to 90 percent of the time, and the 1 percent or 10 percent you find is actually something worthwhile.

Turns out it could have been resolved much sooner, or was a misconfiguration, or whatever. And I don't want to name any, but the Splunk certification and all of that used to be quite a big thing.

With the strategic placement of AI at different points, are we saying that the world of detection is going to be a lot less filled with false positives? [00:09:00] Hallucination is still a thing. Where are we at with detection at this point in time?

Dylan Williams: Yeah, no, that's a good one. If we look at it, we have a much bigger lever now, right?

It's making the average detection person a power user, right? We could hopefully think that will solve this problem, but I think the false positive problem comes from a lot more places than just the human labor that's keeping up with constant tuning or rewriting rules, that kind of thing.

Caleb Sima: I would also add that it's not just about false positives. It's interesting, because false positives today are mostly thought of as things which you either need to figure out how to validate in post-response, or for which you need to figure out how to write a better detection so it doesn't produce a false positive. But I think a large majority of these issues aren't necessarily about whether this is a false signal. It is a signal we need, but it does not necessarily indicate an incident or a breach. And so how do we get [00:10:00] enough of these signals that are correct signals? It's not that they're falsely flagging.

It's just that you need enough of them to correlate, and to then validate: hey, this is a real issue, this is a breach, this is something you should be aware of. This is not just an engineer who needs to SSH as root into a prod system to fix something. There's a big difference between false positives and true detections, but detections that mean something.

Ashish Rajan: Oh, actually, that's a good point, man. A lot of the conversations, especially with detection engineers, and I've been fortunate that with the whole Cloud Security Podcast, suddenly the cloud security world has opened up a detection engineer role everywhere. I find it really interesting how initially we were living in a world where security teams would say: send me all the logs, doesn't really matter which ones. There was this notion that I don't really want the operator to be an expert in, I don't know, Java, JavaScript, whatever the application is written in, because they need the context of the application, [00:11:00] the infrastructure, and what Ashish is doing with all of that, and need to combine all three, and probably something else, to figure out: is this really the signal that I need?

So do you guys feel that, now that we're moving into a more cloud-native world where we're AI-driven and all of that, is that question easily answered? Can we make a sound judgment about what that detection is, what that signal is, and yes, this is the signal we need? Are we able to validate that?

Or is that still not an AI problem, but more of a process problem?

Dylan Williams: Yeah, I'll take a stab. I think that is more of a process problem. If you look at it, there are different maturity levels to detection engineering functions or SecOps teams, right? At the low end, it's: hey, we just want to engineer some type of signals so we can know if something bad is going to happen.

But at the other end of the spectrum, every single signal that we write is going to be validated in some form or fashion, and there's a whole spectrum to what that means too. But what I like about this is: how can you argue with an evidence-based thing? How [00:12:00] do we know that the detection's working? Let's test it. So I think that's a really important part to consider. And that gets back to my philosophy: the best blue teams are purple teams. That's what I think about that.

Ashish Rajan: But can we simulate tests? Because, to your point, that used to be the biggest thing: a lot of the detections that were written, we'd just make a detection rule and wait for it to get triggered, because no one wants to trigger an attack, even a simulated attack, on a production system. Is that possible today? Obviously it could be part of the status quo, maybe.

Dylan Williams: Yeah, I think it depends. The way it's done nowadays, it's stitched together from a lot of different products. So we have BAS, Breach and Attack Simulation.

They tend not to be attack-surface agnostic; they're very endpoint heavy. We have a lot of other tech in our enterprise that's not a Windows or a Linux or a Mac, so that doesn't help us there. But yeah, it's still a fuzzy area. So what teams do nowadays is try to stitch things together.

There's a profusion of open source emulation tools out there, like Stratus Red Team for [00:13:00] cloud. There's something called Dorothy for Okta, which is really cool for testing your identity provider. So we try Pacu and Leonidas and Atomic Red Team, all of them, trying to find a tool to match each surface.

But it's a pain. There's no uniform way to do it.
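The evidence-based validation loop described above, detonate a technique and check that the detection fired, can be sketched at a high level. Everything here is hypothetical scaffolding: `detonate()` and `search_alerts()` are invented stand-ins for an emulation tool such as Stratus Red Team or Atomic Red Team, and for a SIEM's alert API, neither of which is shown.

```python
# Hypothetical sketch of an evidence-based detection test: detonate a known
# technique, then ask the (simulated) SIEM whether the matching detection
# fired. The functions below are stand-ins, not real tool APIs.

FIRED_ALERTS: list[str] = []  # pretend SIEM alert store


def detonate(technique_id: str) -> None:
    """Stand-in for running an attack emulation (e.g. a cloud TTP)."""
    FIRED_ALERTS.append(technique_id)  # simulate telemetry raising an alert


def search_alerts(technique_id: str) -> bool:
    """Stand-in for querying the SIEM for alerts tagged with this technique."""
    return technique_id in FIRED_ALERTS


def validate_detection(technique_id: str) -> str:
    detonate(technique_id)
    if search_alerts(technique_id):
        return f"{technique_id}: detection fired"
    return f"{technique_id}: gap, no alert produced"


print(validate_detection("T1078"))
```

In practice, each attack surface (endpoint, cloud, identity) would need its own `detonate()` backend, which is exactly the stitching problem described above.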

Caleb Sima: And red teaming and testing is just very difficult to do well, right? Because environments are very custom, and actually getting the permission, quote unquote, to go and test these things can be challenging.

If you've got good engineering hygiene, it's going to be very difficult for you to say, hey, I want to actually run tests in prod.

Like, I want to go and run these BAS tests in a prod environment. That's going to take you months of planning and working with your engineering group to get permission: okay, you can go stand up an instance and run some tests in prod. It's funny because, in my experience, some of the biggest things that inhibit us from [00:14:00] detecting bad things is our own company, making us so slow that the red tape is just tough to get over in order to do the right testing.

Ashish Rajan: Yeah, that's very well said, and unfortunate but true as well.

Sometimes we are our own biggest enemy, for lack of a better word. I do want to paint a picture of where detection is going. And to your point, Dylan, most environments today are more than just endpoints.

There's cloud, and now you can throw APIs in there with all the AI third parties, plus internal corporate systems. The whole Okta hack, or what was it, I believe it was Uber: someone got access to someone's laptop, got past 2FA, and walked into their AWS, Azure, and all their other accounts. So there's a lot more complexity in the accounts or environments you'd cover with detection [00:15:00] engineering. In the research that you're doing in this space, are you finding one area a lot more mature for AI than the others? I don't imagine you're going to have mainframes in there; that's the baby no one touches.

Dylan Williams: Yeah, I've looked at it, even outside of AI. If you look at the security telemetry itself, certain types of it have been around for a lot longer and are considerably better documented than others.

And that affects the current world's knowledge about it, and affects your access to it. A perfect example: AI might be a huge accelerant for getting really deep, good detections for Linux, or things closer to the operating system, because we have such a good understanding of it. But stuff like cloud might be different, partly because of the shared responsibility model; it's not exactly comparable, and we have different levels of abstraction there. But one really cool thing, I think, is what the humans do now: we [00:16:00] apply our expertise. How good is this detection? Does it need to be rewritten? Do we have to write a new one? What AI can do that we simply can't is a scale and a speed thing, right?

So it's almost like having something measuring the quality of a detection everywhere, all the time. We're trying to solve this human labor problem at scale.

Ashish Rajan: And would you say the AI component is, I don't want to use the word agentic, because it sounds like everything is agentic and automatic.

It's a bad word now. Wait, actually, how do you describe agentic? One of the most interesting conversations of the past few days has been the difference between automation and agentic AI, especially after the OpenAI people came out with Operator. Is it just scraping the web and making web calls, or is it truly agentic?

There's a whole conversation around this. Obviously Caleb gave a description in the last episode. I'm curious, how do you describe agentic versus automation? And is that even at a stage where it can be applied in detection engineering today?

Dylan Williams: Yeah, [00:17:00] absolutely.

Yeah, so this is a really interesting topic. I've read a lot of different definitions of it, but one that I think makes sense is: it's an agent as soon as you give control of the application flow to the LLM. And I'll give you an example, because it's a spectrum. That's the answer that I like.

When ChatGPT came out, the way that we interacted with it was just text in and out of a prompt, right? And I think it was Anthropic who came out with a blog last month about different architectures or design patterns for agents, but a lot of it was very basic LLM workflows, like prompt chaining.

The basic incremental step toward an agent is whether you're allowing the LLM to decide what to do next, right? Because you're not just limited to a simple prompt in or out. But for detection engineering, yeah, I think an excellent example is how I went through my ChatGPT moment and then started to bring it to work.

I was like, this is incredible. I was copying and pasting a lot from [00:18:00] different AI models to do the heavy lifting: can you write a Splunk query or a SQL query? Can you extract a TTP from this piece of threat intel, right? But what gets really interesting is what I'm doing in between.

I'm moving data around, I'm adding context, but I'm still pivoting to different tools. I'm going to go run that query on a SQL database, I'm going to look at the results. So as it organically grows away from that, it gets more integrated into your workflow. So it's totally agentic. It's all there.

I think the hardest thing for this is the scale question, because it's a consistency issue, right? But yeah, we're getting there.
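The distinction Dylan draws, handing control of the application flow to the LLM, can be illustrated with a toy loop. The `llm()` function here is a deterministic stub standing in for a real model call, and the tool names are invented for the example.

```python
# Toy "agentic" loop: instead of a fixed pipeline, the (stubbed) model
# chooses the next action each turn until it decides it is finished.

def llm(state: str) -> str:
    """Stand-in for a model call that returns the next action to take."""
    if "results:" not in state:
        return "run_query"  # the model decides it still needs data
    return "done"           # the model decides it has enough context


def run_query(state: str) -> str:
    """Stand-in for executing a SIEM/SQL query and appending the output."""
    return state + " | results: 3 suspicious logins"


def agent(task: str) -> str:
    state = task
    while True:  # control flow is delegated to the model
        action = llm(state)
        if action == "run_query":
            state = run_query(state)
        else:
            return state


print(agent("investigate failed logins"))
```

A plain automation would hard-code the query step; here the loop only ends when the model says so, which is also where the consistency-at-scale problem Dylan mentions comes from.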

Caleb Sima: I remember I just read something where someone was talking about how it's easy to make a five-minute demo on X or YouTube of this amazing web application or thing you automated with AI. But actually turning it into a real product has now taken a year and a half, and it's still not out because of all the challenges, and it's taking huge teams to do it. Which, when you look back on it, makes you say: hey, has AI really helped speed this up? If we hadn't used AI at all, if we'd just built this the way we normally would have, I actually think the comment was, we would have produced this faster.

Ashish Rajan: Of course, and we're now trying to shove AI into everything as well. So there is that problem: even if something could be done simply with just automation, we're like, actually, can I use AI for this?

Dylan Williams: Yeah. The agent thing is interesting, because the way I look at it now is: what's the difference from when ChatGPT came out?

We had this black box we could ask questions and get stuff back from. And with the release of these new design patterns and architectures, we got the test-time compute models like o1 and the RL-trained models like DeepSeek. So I'm looking at how much less work the [00:20:00] human operator has to do when they're interacting with this.

And it's like prompt engineering: from my experience, that's 90 percent of the work that goes into building apps around these things. A lot of the prompt engineering is not going to go away, I don't think, but it's going to change. The new capabilities of the models are almost eating it.

Perfect example: I was using, I think it's the o3-mini that just came out. Something people forget is you've got to prompt it totally differently. You want to do zero-shot; you don't want to give any examples. It's pretty wild. Yeah.

Ashish Rajan: Oh, wait, do you have to? Because with the previous versions you had to say: hey, you are a cybersecurity expert with five years of detection engineering experience, blah, blah, blah. Then we went on to GPT-4o and didn't have to say that as much. So with the newer ones, what would be an example prompt from a detection perspective, out of curiosity?

Dylan Williams: Yeah, I would be clear about what you want, and maybe give an example of the output.

But they're saying it's actually a hindrance if you put examples in there. A year ago, if you wanted, say, write me a Splunk query that [00:21:00] does X, Y, Z, you'd use massive, super-structured prompts: I want a lot of examples, good examples, bad examples, and edge cases. That takes a lot of time to handpick as the domain expert. But now it's just: write me a Splunk detection for when someone logs into blah, blah, blah.

And it's leveraging that. It's pretty interesting, so you've got to mess around with it. It's weird too, because we weren't taught that way over the past year.
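The shift Dylan describes can be made concrete by contrasting the two prompt styles side by side. Both prompts are invented illustrations, not tested recipes for any particular model.

```python
# Older style: heavily structured prompt with a role, handpicked good/bad
# examples, and edge cases, all curated by the domain expert.
FEW_SHOT_PROMPT = """You are a detection engineer with 5 years of Splunk experience.
Write a Splunk query that finds logins from countries a user has never used.

Good example: index=auth action=success | iplocation src_ip | stats ...
Bad example:  index=* | stats count
Edge cases to consider: VPN exits, shared service accounts, travel.
"""

# Newer style for reasoning models, per the advice quoted in the episode:
# state the goal clearly, zero-shot, and leave the examples out entirely.
ZERO_SHOT_PROMPT = (
    "Write a Splunk query that alerts when a user logs in from a "
    "country they have never logged in from before."
)

print(len(FEW_SHOT_PROMPT), len(ZERO_SHOT_PROMPT))
```

The point is not that shorter is always better, but that the curation work moves out of the prompt and into the model.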

Ashish Rajan: Yeah, maybe we can even go to the next thing. I was reading up on DeepSeek and whether you can run it locally, and there's a conversation thread I went into arguing that the true future with AI is that there is no UI, technically. The first version we got from ChatGPT was: hey, I have a search bar, I put text in it. But the true version should be that I can just call it out with my voice: hey, can you just build something for me? Because I had a detection, or I've added new software in the organization, instead of me sitting there trying to figure [00:22:00] out what Splunk is, what kind of queries Splunk can have, what the context for the query is.

I just say it and it just happens. But, and correct me if I'm wrong, that's still not agentic though. That's just reasoning.

Dylan Williams: Yeah. I look at it as two things. A lot of these prompt engineering things under that umbrella, chain of thought, step-by-step instructions, few-shot examples, all they're really doing is in-context learning.

We have pre-trained models, which is great; we don't need to train them anymore. But we're trying to enrich the context as much as possible so we don't get hallucinations, right? And these reinforcement learning models are doing that in a way where we have to think less about this.

We have to do this less now; they work out of the box, so to speak. But I think the same thing will happen to agents. Look at it now: it's a pain to build your own agents, right? There are different levels of abstraction, different libraries, but we don't have this out-of-the-box experience for agents. Although we're starting to, with the CUA models like Claude Computer Use and [00:23:00] Operator.

Caleb Sima: So I also feel like the way we interact with these things is obviously going to evolve, right? Ashish, we talked at the very beginning, probably in our first few episodes, about how first you're going to start interacting with it with your voice, which we're already doing.

Then it's going to be avatars that look like real video calls, like what we're looking at right now. And then it's going to be these always-present, capable entities, right? Today you have to prompt an AI for something, right? You have to do this prompt in order to get a response. But soon, when you start constantly feeding it video streams, audio streams, all of these things, it's going to be prompted constantly, all the time. And so it's always going to be analyzing, understanding, able to prompt you back without you having to prompt it, right? Like a person. And so it will [00:24:00] be this ever-present thing while you work, doing things that will auto-suggest, maybe even auto-complete. Think about it, to go to Dylan's point: you're a detection engineer, you're going through and researching, hey, this event, and this AI that's always on, monitoring, will understand: oh, hey, you're trying to do this, I will go ahead, as a task in the background, and do this for you, and this, and this, while you're doing your [00:24:00] primary work.

I see you need to do this; I'll reach out to your peers here and here to get more information about that event. It's going to be working with you in tandem, without you ever having to prompt it. And so I think when we start getting to this point, it's going to be really crazy.

That's going to be some crazy things.

Ashish Rajan: I think we have jumped the gun on the whole co-pilot word as well, because what you describe, I think that's a true co-pilot to me. Versus, I guess, people just wanted to jump on the term before anyone else took it, so they just [00:25:00] call everything GitHub Copilot, Microsoft Copilot.

Everyone has a co-pilot these days, and you're like, that's not really my idea of what a co-pilot should have been. I don't know, would you guys agree?

Dylan Williams: Yeah, I love that. I think it's the LangChain team that talks about this concept of ambient agents, right? It's what Caleb said about having the AI behind the scenes doing all these things all the time.

But I think maybe this isn't far-fetched, to think: what does a detection engineer's or practitioner's day-to-day look like in five to 10 years? Maybe they're more like the head of D&R, strategically. And it's no different than if you have a bunch of teammates or junior employees or interns: you want to write a new detection, you want to improve blah, blah, blah, and they're tasked with doing these things. It's going to be the same relationship, except the difference is the AI has maximum expertise and it's doing it everywhere, all the time. But you're the one who points them: go here, go do this, go do that. You're just a higher-level strategic control, right?

Ashish Rajan: Do you reckon, and obviously we don't have a crystal ball, but looking at this from the perspective [00:26:00] of where we stand today, and assuming no newer, even more efficient models come in: is the current state of detection engineering at the point where, if I'm a SOC operator, I can use OpenAI or whichever LLM I want to produce at least a close-enough detection rule for most of the scenarios I would care about in an environment? I imagine there's enough data out there for Windows, Mac endpoints, popular cybersecurity products, Splunk and all of that.

Would that be a fair statement to make? I'm coming from the perspective that the people watching or listening to this conversation, as much as we want to know what the future is, I'm sure they want to know what the present is as well, as at the time of this recording.

So I'm putting in a big caveat: tomorrow something else comes out and we're like, oh, actually this is completely redundant. But as it stands today, based on what you've researched, what is the very likely possibility [00:27:00] with detection and response?

Dylan Williams: Yeah, I can look at it as two things.

What are these new AI capabilities, whether it's a new tool you use or it's baked into a product you already use? One is democratizing expertise, right? It's giving you breadth that you otherwise wouldn't have, or that would take you a long time to get to, right?

And the other is just making you a power user, right? But the way it looks now, a lot of the AI add-ons that come out of the products are pretty locked into the vendor. We know there's SQL if you're using Snowflake or Panther or Elastic or Splunk; you're stuck in that world.

The thing that would be interesting to me is this more agnostic nature to it, right? And that's why it's closer to the operator of a tool we use, right? I think of it as like a functional level. So if you pick on a detection engineer, what are you doing on your day to day? Maybe somebody's doing the threat ideation.

They're doing more research. They're threat hunting. They're coming up with leads. These get passed on to the backlog. Those become detections. One day, that person's got to write the detection. They got to tune it. [00:28:00] Then they got to test it. Someone reviews and approves it. But I think these will just all become one thing, right?

I really do look at it as a really powerful lever, right? You can do any of these things in a fraction of the time, right?

Ashish Rajan: I guess maybe another example I can add here is that at this point in time, we have a lot of data, but no efficient way of testing a detection or coming up with a detection rule that is, in the first few shots, good enough to put into production.

Maybe we're close enough to: hey, we can give it a bunch of logs and say, what's the pattern over here, and use that as a starting point to build a detection rule quicker. Would that be a fair statement? Is that the reality?

Dylan Williams: Yeah. Yeah. Yeah. If we look at what LLMs are really good at, it's taking unstructured data and like pulling it all together and getting some insight from it.

But to me, a really intuitive example is threat intel to an analytic. That's pretty straightforward; we're already there. I don't think it's far-fetched at all. [00:29:00] You could productize or outsource the whole threat ideation piece, as long as you're giving it all the enterprise's data: we have these logs, we have access to this TIP or this threat intel, we have the web OSINT stuff. Look at it all the time and write detection rules from it.

We're already pretty much there.

Ashish Rajan: Yeah. And funny enough, to what you said as well, it's almost like a ubiquitous entity just waiting: hey, I have new threat intel. One of the things people go through in the SOC team is validating:

Do I even use this? Like when, what was it, Heartbleed happened, the first question people were asking was: am I impacted? That is a question that is so hard to answer for a lot of people. Because there are all kinds of logs available, but you may not even know what kind of log to look for.

Do you feel like that is something also being looked into with AI? Are we at an [00:30:00] initial stage, from a threat modeling perspective, where, okay, we can do threat intel, we can have an in-depth validation of a problem that is potentially there in the environment and make a detection rule from it?

Or are we able to use AI to go further? I'm talking about agents; I guess that's where I'm trying to go with this. Are we at the stage where I can use agents to do all of this?

Caleb Sima: Maybe, if I could. I don't know, Ashish, if I'm interpreting your question right. Dylan, you named one pragmatic and practical thing that you think AI is going to help with in detection, which is streamlining incoming threat intel into detections.

Ashish, what you're saying is: okay, Dylan, what are your hopes in the next year or two? What do you think changes in detection because of AI's existence? If you were to imagine two years from now, what is going to be solved, or what does detection now look like?

What would that be? Is that right, Ashish?

Ashish Rajan: Yeah, you summed it up right. Yeah, I was trying to collect my thoughts together, [00:31:00] but I think that definitely summarizes what I was thinking.

Dylan Williams: Yeah, that's a good one. So if we make the assumption that AI is going to eat a lot of the practitioner's work, right?

What are they going to be doing, and how is that going to affect what detection looks like for teams in five or ten years? It's our typical cat-and-mouse game with the threat actors and malware out there, but the detections are going to take care of themselves.

I think it's a really cool idea to think about: detections are self-healing, right? All the time and toil spent by practitioners today is on the O&M. More than anything else, it's QA and O&M. It's really boring stuff, but for effective detection it's non-negotiable.

You've got to do that. And the hard part is doing it at scale, for the whole enterprise, all the time, because we have so many different things we want to care about, right? But yeah, I think it will be interesting, because if we get to this point, I think we'll be able to cover way more scenarios than we could before.

[00:32:00] Think about it like this: if you had a hundred of the world's most elite detection engineers on your team, you're so good because you have so much time and so much expertise, right? You can go as deep as you want into the sheer number of permutations that a red team, a threat actor, or a piece of malware could do. So I think it'll get deeper and deeper like that.

Caleb Sima: Basically, okay: you now have a hundred detection engineers. What would you do with them? If they're always working on your side, the biggest pains are looking at threat intel, determining whether it's really relevant for our environment, and if so, where; writing the right detections for it; testing those detections; ensuring that we correlate and have real alerts on things that are noticeable; and responding. This is what AI could potentially evolve [00:33:00] to. In a sense, the full automation of identification, where what you respond to are real events that then loop in the right humans.

Ashish Rajan: Actually, I'm curious about the future we'll see. I know we're talking about detection and response primarily at the moment. Developers have been the first to adopt, quote unquote, Copilot from GitHub, Microsoft, and others, and as that evolves, one of the things I see as today's problem is the use of AI in our day-to-day work, whether I'm a developer, a product manager, whoever. Is there a log that we get for any of this?

There's nothing, because it's technically a SaaS with no API. In the research or the conversations you're having, are people actually doing any D&R on AI? Because one of the things I've spoken about, and this happened in the cloud world as well, was the whole [00:34:00] idea of: hey, we want to adopt quickly and start using the benefits of it. But what people did not talk about, at least in the last few years, was: how do I know there's an incident, and what is the detection I'm looking for? And here, obviously, we're talking about how we can use AI to do detection for any number of scenarios.

Are we there at this point in time? Because I imagine a lot of CISOs are also wondering: is there any detection we have for AI, apart from me looking at my API logs?

Dylan Williams: That's all that I know of, and that's the funny part about it. It's new technology, a new source of telemetry for the D&R team.

But it's different, because if it's the API logs, are you looking at people's prompts? Are we just looking for stuff like prompt injection? It's not just an audit log, so what does that look like? It's a funny new piece of telemetry.

That's unexplored from what I've seen.

Ashish Rajan: It's like going through someone's Word document and going: I had a bunch of thoughts, I wrote them down, and I added some more thoughts later. From an enterprise perspective, obviously, the scenarios we care about are: hey, is there a breach?

Is there something happening right now that has been misconfigured? I'm going to use the lamest example: an S3 bucket [00:35:00] open to the internet. Hopefully that doesn't happen anymore.

Caleb Sima: Everything happens right now. Open databases and systems with no password still happen.

Didn't we just have a big incident around that with DeepSeek?

Ashish Rajan: There is that, yes. It's supposed to be open source, right? Why would they store all of that information? But the point being, keeping aside the whole open source versus locked-in AI models: have we gotten to a point, with all these AI tools being used in organizations, similar to what happened with cloud logs, where the detection and response team was not ready and has been thrown into this world? With cloud, at least you can validate: hey, S3 bucket open to the internet, if I were to trigger that. We're not at that stage here. Where are we in using AI to interact with endpoints? [00:36:00] Or are we still prompting to identify, from a best set of data, hey, this should be the detection you make? I hope I'm making the difference clear: one thing is interacting with the actual endpoint and validating, yep, this S3 bucket is open to the internet. The other is I have a bunch of logs, and I'm looking at them going, oh, okay, it looks like there's a log which says the S3 bucket is open to the internet.

Caleb Sima: Are you basically saying having the LLM itself be the detector? Like feeding a bunch of logs?

Ashish Rajan: Yeah, feeding a bunch of logs and saying,

Caleb Sima: What do you see that could be a problem?

Ashish Rajan: That's what I'm saying, yeah. Are we at that stage? Obviously vendors will try and sell a lot of things, but you've done a lot of research in this, so I'm curious whether there's some truth to any of this.

Dylan Williams: Yeah that, so that particular use case I think is a double edged sword, but I think that is something to be figured out, and I'll give you an example.

The first issue, obviously, is the [00:37:00] context window, right? In theory, let's say we're using a really big model like Gemini with its 2-million-token context window. Can we dump all of the raw logs in there and tell the LLM: can you find something wrong? And you're saying, can we take it a step further?

It's almost like your LLM is a tap sitting on the wire: can you just tell me when you see something I have not seen or read about yet? That's pretty interesting. In theory, I think it could work. I think it's going to be a speed thing, right?

It depends on how the data looks. It's an interesting one.

Caleb Sima: To Dylan's point, there are a couple of things. There is the context window, the amount of logs you can feed it within that window. Or you can RAG; you could say, hey, go run a certain process over this.

Yes. It could absolutely say: I see someone accessing an S3 bucket and downloading information from a malicious [00:38:00] source, right? I believe it could do it. And to take this a step further, Ashish, we could say: hey, based off of that, you can now be an agent and act upon the information you've identified in this log to further validate your findings.

Oh, I see something really weird here and here, but what I'm missing is the Cloudflare logs. In order to do that, I will go out and get those Cloudflare logs, then smash them together, then re-evaluate: yes, I do think there is a breach occurring, right? That I do think is very feasible.

However, you're still acting on very small sets of, quote unquote, log data, and it's being done in a very static fashion. What Dylan was saying is that what you really want is the ability to process real-time data, so you can just stream logs constantly into an LLM, and it can make decisions off of [00:39:00] that.

At that point, is the context window fixed, so it can only look at the logs in those window sizes? Or is it going to comprehensively understand all of the logs, process them in real time, and maybe, in some sense, fine-tune itself on the logs it is receiving to get better, so it has more, quote unquote, memory as it builds?

So it can decide: out of all the logs I'm being streamed, which are the ones I want to RAG? Which are the ones I want to understand and make decisions about, versus the ones in my RAM, and constantly make these decisions. We're definitely not there. However, I feel this should be feasible in the future: how do LLMs directly process real-time data and then take actions on that real-time data?

Actually, about a year ago I did a presentation on this, where I said that when AI starts making decisions and acting is when [00:40:00] things get really interesting. And I put the next phase after that as when agents are managing and operating at scale.

That's where it's not just making a single decision or following a single train of thought; it's managing a thousand AI vehicles, self-driving cars, and in order to do that, it needs to be able to process real-time data, right? And that is going to be super fascinating.

I actually think you could rig this, right, Dylan? I feel like you could. You have a sample window, you have a context window; you could stream your logs, take your windows, boom, fire, right? And similarly, as with any sort of compute, you could do the whole log, or you just keep doing deltas of the log.

Yeah, you could snapshot and replicate, sort of an LLM doing real-time processing.
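That windowed-streaming idea can be sketched in a few lines. This is a rough illustration, not any product's implementation; the batch size, overlap, and triage prompt below are made-up placeholders:

```python
from typing import Iterable, Iterator

def window_log_batches(lines: Iterable[str],
                       max_chars: int = 4000,
                       overlap: int = 1) -> Iterator[list[str]]:
    """Group a log stream into batches sized for a model's context
    window, carrying the last `overlap` lines into the next batch so
    events spanning a boundary are still seen together."""
    batch: list[str] = []
    size = 0
    for line in lines:
        if batch and size + len(line) > max_chars:
            yield batch
            batch = batch[-overlap:]  # carry the boundary lines forward
            size = sum(len(l) for l in batch)
        batch.append(line)
        size += len(line)
    if batch:
        yield batch

def triage_prompt(batch: list[str]) -> str:
    # The instruction stays generic; the logs supply the specifics.
    return ("You are a SOC analyst. Flag anything anomalous in these "
            "logs and explain why:\n" + "\n".join(batch))
```

A real pipeline would tune `max_chars` to the model's actual context window and send `triage_prompt(batch)` to whatever LLM endpoint it uses, with the overlap keeping boundary-spanning events visible in at least one batch.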

Ashish Rajan: Because to your point, the prompt at the end of the day is a detection rule, in a way. Your prompt is a detection rule in this [00:41:00] case.

Caleb Sima: Yeah, no, in the sense that your prompt would be very generic, right?

It would just be: identify bad things.

Dylan Williams: Is there something right now? Or, yeah, maybe you have an LLM agent, or agents, whose job is just CloudTrail detection, trained on every single possible threat and what it's going to look like in the logs, all the time. And then you'd have a different flavor for each log type.

But the one thing I've seen that people aren't talking enough about: so far we're keeping the data in its original form, right? It's JSON, it's a Windows event log, whatever. There are a few companies out there that have taken the approach of: what if your SIEM in the future is just all embeddings? It's all vectors. Let's take all the logs, vectorize all the things, and see what that looks like and what you get out of it. It's this ontology of what the data looks like to the LLM, and it would live in a giant memory state or a RAG database, right?

But that's a different way to do the same [00:42:00] detection, instead of just the normal logs. That might be really interesting.
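The "SIEM as embeddings" idea Dylan describes can be illustrated with a toy sketch. The bag-of-tokens `embed` below is a deliberately crude stand-in for a real text-embedding model, and the class name is invented for illustration:

```python
import math
from collections import Counter

def embed(log_line: str) -> Counter:
    """Toy stand-in for an embedding model: a bag-of-tokens vector.
    A production system would call a real text-embedding model."""
    return Counter(log_line.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse token-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class VectorLogStore:
    """Minimal 'SIEM as embeddings': index logs, query by similarity."""
    def __init__(self):
        self.entries: list[tuple[str, Counter]] = []

    def index(self, line: str) -> None:
        self.entries.append((line, embed(line)))

    def search(self, query: str, k: int = 3) -> list[str]:
        qv = embed(query)
        ranked = sorted(self.entries, key=lambda e: cosine(qv, e[1]),
                        reverse=True)
        return [line for line, _ in ranked[:k]]
```

With real embeddings, a natural-language hunt question ("show me anything that looks like bucket exposure") would retrieve semantically similar events even when no exact keywords match; this toy version only matches shared tokens.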

Ashish Rajan: Yeah, I love the art of the possible here, because it keeps me excited. But I also wonder, at the same time:

At the moment, as it stands, a lot of organizations are working towards sending, Dylan, to your point, CloudTrail logs to the SOC team. And a lot of SOC teams at the moment are not trained in what a cloud security alert should be. Is an S3 bucket open to the internet really that important?

I don't know, maybe, but it happened eight hours ago, so do I go and spend the time on it? To your point about the context window, and I know we're talking just about AI, each SOC team out there is being given new environments. We just talked about the endpoint side, but there's the application side, the languages. One of the companies I was a CISO for was using C#.

And I don't know if there's enough data there for it to be, I don't even know the right word, detection-ready. So are there any areas you feel are [00:43:00] prime to be disrupted by something like this? I know you gave me a real-life example earlier.

Assuming we get AI agents and all that, and we can get real-time information, in my mind it seems to be cloud, because it's a lot more API-driven than on-premise, though on-premise has changed quite a bit. Are there environments people can look at, and perhaps keep an eye out for developments in?

For the next evolution, to what Caleb was saying: we've gone from the initial stage to now being potentially agentic. Is there any existing technology at the moment that's prime for disruption, where, when we are ready, that is the area that's going to have the biggest impact?

Dylan Williams: So you're asking like the equivalent of so we went from on prem, certain log source to cloud, cloud native stuff. What's it going to look like in the future for that?

Ashish Rajan: At the moment, the data security problem a lot of [00:44:00] people have is that DLP doesn't do cloud native. It doesn't understand that I have data in my RDS database. It understands my on-premise Oracle server, but it doesn't understand my cloud-native, quote unquote, Kubernetes stuff. So I can't do data security there.

Dylan Williams: The number one issue teams deal with now is new telemetry being onboarded, new logs, and we want to detect on this stuff.

And if that's the case, it's probably going to be more on the threat modeling side versus what's actually happening out in the wild. One way to think about it is signal potential. You mentioned this idea that the logs are not very good for detection:

we need to detect for this system, but maybe we can't. And that goes down to the core issue, the visibility issue, right? As these attacks get more sophisticated, maybe there's a whole approach of: we need much deeper visibility into systems than we have. One example:

we talk about CloudTrail being pretty straightforward [00:45:00] for certain detection rules. It's well documented and understood. We see a lot of attacks manifest in those logs all the time, right? It's pretty actionable. But on the other side, certain logs or telemetry are more reserved to, say, the EDR world.

No one's writing detection logic on Event Tracing for Windows. Maybe they are, I don't know, but there are different scales of abstraction, right? So that's interesting to think about. One thing I think would be really neat: Jared Atkinson from SpecterOps came up with this concept called capability abstraction.

It's about the robustness of a detection rule. The idea is that at the top, you have the names of the tools: find me Mimikatz. Sure, it could work, but it's not very good, right? Think about the Pyramid of Pain: an attacker or malware could just change the name.

It's not a very good detection. You want to go deeper into the lower levels: no matter what the tool is called, these conditions are always going to be true. That's a really good detection. But we don't see that same methodology for cloud or cloud-native log sources.

Maybe it's because, back to the shared responsibility model, we just don't go [00:46:00] that level deep in the telemetry.

Caleb Sima: The other aspect of this is understanding context for an organization. In the example Dylan was giving, it's both CloudTrail and threat modeling. It's also really important to say it's not just that AI can help you create the right detections.

We've been talking specifically about whether it can create the detection. The other question is: should it create the detection, and what is the right detection? For example, a detection flagging root SSH access may or may not be bad. How do you know the difference?

Writing the detection is pretty easy. AI can do that really quickly. The biggest question is: should it write it? And if so, where, or at what criticality? Or does it need to be paired with others in order to understand correlation and make it an issue? That, I think, is going to be the hard part.

Writing a bunch of detections, creating them from threat [00:47:00] intel: AI is going to be able to do this; we already know it can. But the hard part is: okay, I'm parsing CloudTrail in this organization for these events. Why are they important, and what criticality are they, based on everything else?

Because one detection can be super critical in one area of an organization but be noise in another. How's it going to know the difference?

Dylan Williams: Yeah, we need to see more threat-informed defense, and I think as an industry this is starting to come around. It's that whole idea that there are just too many threats and too much noise out there.

But I do know my ground truth: what my enterprise is, what its risk profile is, what technologies and systems I have. It's finding that crossover, right? Because that's the stuff you should care about, because an exploit could actually happen there.

Ashish Rajan: And the hard part is that information is stored in people's heads as well.

Dylan Williams: Yes, that's the hard stuff to get to, the tribal knowledge. But it's going to be a context thing.

Ashish Rajan: It's an interesting one.
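The point Caleb and Dylan are circling, that the same detection deserves a different criticality depending on asset context and corroborating signals, can be sketched as a toy scoring function. The tiers and weights here are illustrative only, not from any framework:

```python
def contextual_severity(base: str, asset_criticality: str,
                        corroborating_alerts: int = 0) -> str:
    """Score one detection differently depending on where it fired
    and what else is happening around it."""
    levels = ["info", "low", "medium", "high", "critical"]
    score = levels.index(base)
    if asset_criticality == "crown-jewel":
        score += 2          # same alert matters more on key systems
    elif asset_criticality == "production":
        score += 1
    if corroborating_alerts >= 2:  # correlated signals escalate
        score += 1
    return levels[min(score, len(levels) - 1)]
```

The interesting engineering problem is not the arithmetic; it is populating `asset_criticality` and the correlation count automatically, which is exactly the tribal knowledge Dylan mentions.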

Caleb Sima: Dylan, I'd love [00:48:00] to hear: what can detection engineers practically do with AI today? Whether there's open source out there, or something where you'd say, hey, today you don't have to be an AI genius, here are my top three recommendations, open source or otherwise; this is where you can use AI right now and it will help you.

Yeah, that's a great one.

Dylan Williams: First things first: the frontier models are incredible. GPT-4o, Claude 3.5. Just go right to those. Which do I use? I'm going to be partial to Claude. Why? So I've talked to people about this, and I think it comes down to, it's totally anecdotal.

People are like, oh, OpenAI is way better. But for me, it's a UI/UX thing too. I love Claude's Artifacts feature: if you're writing or coding [00:49:00] something, it will pull it up to the side, and it's more of this back-and-forth, living, changing thing, right? But OpenAI has its Canvas feature and other stuff too. So there are pros and cons to each. Go to those if you can; if you can't, that's fine too. We're at a level where you can just host the models locally.

Use LM Studio or any of these things; Raycast is a really good one I've heard about lately. The number one objection I run into is: when you're bringing this to work, it's a third party, and we don't want to put sensitive data in there. That's totally fine; all these models can be hosted on something like Bedrock or Google Vertex AI.

But if you want to take it a step further and you're like, I love this thing, I'm tired of copy-pasting stuff back and forth between chat windows, I would go use a tool like Flowise or n8n. It's very quick to get hooked in, because all the LLM is, it sits behind an API, right?

And so you can start to design these step-by-step workflows and integrate the AI that way.
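A Flowise- or n8n-style workflow is, underneath, just chained calls to a model behind an API. Here is a minimal hypothetical sketch; `llm` stands in for whatever endpoint you wire up, and the step prompts are placeholders:

```python
from typing import Callable

# Stand-in type for any `prompt -> text` model call (OpenAI, Bedrock,
# a local model behind LM Studio, etc.).
LLM = Callable[[str], str]

def intel_to_detection_workflow(article: str, llm: LLM) -> dict:
    """A linear, n8n-style workflow: each step is one LLM call whose
    output feeds the next step's prompt."""
    steps = {
        "summary": "Summarize the attacker behavior in this report:\n",
        "ideas":   "List detection opportunities for this behavior:\n",
        "query":   "Draft a SIEM query for the top detection idea:\n",
    }
    state = {"input": article}
    previous = article
    for name, instruction in steps.items():
        previous = llm(instruction + previous)
        state[name] = previous
    return state
```

Tools like Flowise and n8n give you this chaining as drag-and-drop nodes, plus triggers (RSS feeds, webhooks) and connectors, so you are not copy-pasting between chat windows.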

Caleb Sima: But what was the other one? n8n? Was that the name?

Dylan Williams: [00:50:00] n8n. It's a low-code workflow automation tool. It's been around for a while. N, the number eight, N.

Caleb Sima: So those are the tools. What do you do? Yeah.

Dylan Williams: I always tell people: don't do anything until you have a pretty good grasp of prompt engineering, because you'll fall into a trap. I've seen so many people use these tools and say, these suck, man.

They're so bad. I asked Claude what a vulnerability is and it doesn't know what it's talking about. And I tell you, nine times out of ten, it's: show me your prompt. And it's a bad prompt.

Caleb Sima: But I think it'll actually just say: I'm sorry, but I am not able to assist you with security exploits or hacking things.

It's not safe.

Yeah, exactly.

Dylan Williams: Yeah. It's one of those things where, if you're a power user of these tools, you're going to get rewarded pretty quickly. So you get locked in, get really excited, and take it a step further. But I would slow down and spend an hour understanding how to write a prompt first.

Caleb Sima: And then what do you think a detection engineer should do today? What should [00:51:00] they automate or use that you think is effective right now, something they could probably put together pretty easily?

Dylan Williams: I would say the threat ideation piece is pretty straightforward.

You might have the sites you normally go to: an RSS feed, a blog, research. We've got Wiz and Datadog writing all these excellent research pieces out there. That might be where you get a detection idea. But the painful part is taking that and writing a SQL or Splunk query or something, right?

So if you can shorten that, you can focus on the more fun part: thinking about more clever detection ideas and stuff like that. And every place is different, right? You have different tools, different technologies in your detection stack, but in general, once you're there, it's not going to change that often.

So what you can do is go look at your existing detections and your existing logs. Those are things you can save and put somewhere the LLM has access to, and it will generally give you pretty good quality results, right?
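Even without an LLM, part of the "threat intel to first-draft query" step Dylan describes can be mechanized. This sketch pulls IOCs out of a write-up with regexes and emits a Splunk-style search; the field names (`src_ip`, `file_hash`, `dest_host`) are placeholder assumptions you would map to your own data model:

```python
import re

IOC_PATTERNS = {
    "ip":     r"\b(?:\d{1,3}\.){3}\d{1,3}\b",
    "sha256": r"\b[a-fA-F0-9]{64}\b",
    "domain": r"\b[a-z0-9-]+\.(?:com|net|org|io)\b",
}

def intel_to_splunk(intel_text: str, index: str = "main") -> str:
    """Turn the IOCs in a threat intel write-up into a first-draft
    Splunk search. Field names are placeholders, not a real schema."""
    field_for = {"ip": "src_ip", "sha256": "file_hash",
                 "domain": "dest_host"}
    clauses = []
    for kind, pattern in IOC_PATTERNS.items():
        for match in sorted(set(re.findall(pattern, intel_text))):
            clauses.append(f'{field_for[kind]}="{match}"')
    if not clauses:
        return ""
    return f"index={index} ({' OR '.join(clauses)})"
```

An LLM adds value beyond this kind of extraction by turning described *behaviors* (not just atomic indicators) into logic, which is where the higher tiers of the Pyramid of Pain live.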

Caleb Sima: Is there a [00:52:00] particular service or two you'd call out that can help them do super point-and-click RAG, or even point-and-click agents? Agents are a hot thing. What's the easiest way for Joe Blow to point and click, build an agent to go do some work, or use RAG?

Yeah, a hundred percent.

Dylan Williams: Yeah, I would go to LangGraph Studio, created by LangChain. What's cool about it is you can run it locally or deploy it on their cloud behind an API, right?

The Studio part is for if you're not used to being in VS Code coding out the agents' prompts; you can look at it visually too, right? But it's very turnkey. The other one I tell people is a pretty turnkey experience is CrewAI. Their thing is the multi-agent idea, right?

You have a manager agent, a threat hunter agent, whatever you want to call them. But those are pretty turnkey to get started and get going.

Caleb Sima: Yeah. Awesome. And I think Ashish, we should just do a little bit of final thoughts?

Ashish Rajan: Actually, what's your favorite AI tool, man, out of [00:53:00] curiosity?

Dylan Williams: Ooh, that's really hard. The number one I keep going back to, I'm going to have to say Perplexity, I think. And I would say pay for the Pro version. And if you haven't tried it with DeepSeek yet, it's pretty wild, man.

Caleb Sima: Wait, so what does the DeepSeek model in Perplexity give you that Claude or OpenAI regularly doesn't?

Dylan Williams: Yeah, so they added DeepSeek R1 in there as an option to select. So you ask it a question, show me restaurants, blah, blah, blah, and it will do the planning, chain-of-thought step

in addition to the web search results. You can just tell it's way more in-depth, more how a human would do it.

You've got to give it a try. I do not use Google anymore. I just don't.

Ashish Rajan: Yeah. For people on the free version versus the pro version, is there a significant difference between the two?

Dylan Williams: Yeah, it depends how much you use it. If you use it a lot, I think it's worth it.

Get the pro version for sure.

Caleb Sima: Yeah, I think Pro is the only one where you can choose which model [00:54:00] to use as well, if I'm not mistaken.

Ashish Rajan: Yeah. Maybe going back to what Dylan was saying about AI fatigue as well: do I even know which model I should be using? Being a cybersecurity person,

I would always use the latest model, but it sounds like I can't generate images with the latest ChatGPT model, and I've sometimes wanted images. So I have to go back four notches to even be able to generate images. And then I'm in this constant battle of: this is not the perfect image, keep going, keep going, keep going.

So now I'm in this world of: hey, which AI should I be using at any given point in time?

Dylan Williams: Yeah. Perfect example: I didn't use Gemini a whole lot until recently. But I think you'll navigate to OpenAI, Claude, Gemini, or Perplexity for different reasons.

I don't know if there's more of a central, multimodal place that does voice, text, and video.

Ashish Rajan: That's interesting. So if it realizes Ashish wants an image of his goldendoodle sitting in the backyard, it goes, oh, I need the [00:55:00] vision model, and instead of trying to do some reasoning: no, not needed.

Dylan Williams: Yeah, that's interesting. The router will go and pick, like you just said, and also use a tiny model where it can, so you don't blow up your OpenAI bill on token costs.

Ashish Rajan: Because that's what DeepSeek was. I think I saw, I want to say it was a professor at New York University, who did an infographic on DeepSeek and why it was so significant when it was announced, the thing that kind of shook everything. Because it has this concept of, imagine an office building.

When you send a query to OpenAI or Claude, basically the entire building is being searched for the answer, hey, until you find Dylan. Whereas the way DeepSeek works, you meet the quote unquote virtual receptionist, and they go, okay, oh, you want to generate an image. I know which agent would work for that. And off you go. It's way more [00:56:00] efficient in how it uses the information, versus let's just get all the generators going, get the nuclear power plant going.
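The "receptionist" Ashish is describing is roughly the mixture-of-experts idea: a small router scores each input and only a couple of specialist sub-networks actually run, while the rest of the "building" stays idle. Here is a minimal toy sketch of top-k expert routing; the random weights, sizes, and function names are illustrative only, not DeepSeek's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 8   # the "agents" in the office-building analogy
TOP_K = 2         # only this many experts do any work per query
DIM = 16          # toy embedding size

# Router: a tiny linear layer producing one score per expert
router_weights = rng.normal(size=(DIM, NUM_EXPERTS))

# Experts: each is its own small transform over the input
expert_weights = rng.normal(size=(NUM_EXPERTS, DIM, DIM))

def moe_forward(x):
    """Route input x to the top-k experts and blend their outputs."""
    scores = x @ router_weights            # the "receptionist" scores every expert
    top_k = np.argsort(scores)[-TOP_K:]    # pick the best-matching specialists
    gates = np.exp(scores[top_k])
    gates /= gates.sum()                   # softmax over the chosen experts only
    # Only the chosen experts compute anything; the others are never touched
    out = sum(g * (x @ expert_weights[i]) for g, i in zip(gates, top_k))
    return out, top_k

x = rng.normal(size=DIM)
output, chosen = moe_forward(x)
print(f"experts activated: {sorted(chosen.tolist())} of {NUM_EXPERTS}")
```

The efficiency win is exactly the one in the analogy: per query you pay for `TOP_K` experts instead of all `NUM_EXPERTS`, even though the full model retains the capacity of every specialist.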

Dude, I appreciate you coming on, but to wrap up, do you have any final thoughts for people who are looking into this from a detection engineering perspective, and what is the possible future they could be looking forward to, hopefully?

Dylan Williams: Yeah, I would say it's pretty exciting times. I think a lot of the work that takes a lot of time and is annoying in your day to day job, that's going to go away pretty soon. And we're detection engineers, we love doing the fun stuff: understanding new threats, threat intel, threat hunting, stuff like that.

What we don't like doing is tuning rules all the dang time. That's not the fun part. So leave that to the LLMs. They will do the crappy work, hopefully soon. So, awesome.

Ashish Rajan: No, that's a great note to end on. And where can people find you on the internet to talk more about this space, man?

Dylan Williams: Yeah, just LinkedIn, that's where I hang out.

Ashish Rajan: That is a swarm of LinkedIn connection requests coming towards you, just giving you a heads up. Yeah. Thanks, Dylan. [00:57:00] I'm looking forward to the next conversation when you are on that next stage as well. This has been valuable for me, so I appreciate you being transparent and honest as well.

Thank you so much for listening and watching this episode of the AI Cybersecurity Podcast. If you want to hear or watch more episodes like these, you can definitely find them on our YouTube channel for the AI Cybersecurity Podcast, or on our website, www.aicybersecuritypodcast.com. And if you are interested in cloud, we also have a sister podcast called Cloud Security Podcast, where on a weekly basis we talk to cloud security practitioners and leaders who are trying to solve different kinds of cloud security challenges at scale across the three most popular cloud providers. You can find more information about Cloud Security Podcast at www.cloudsecuritypodcast.tv. Thank you again for supporting us. I'll see you next time. Peace.
