AI Agents vs. AI Agents: The Future of Security Operations | Interview with Monzy Merza

Dejan Kosutic:

Welcome to Secure and Simple Podcast. In this podcast, we demystify cybersecurity governance, compliance with various standards and regulations, and other topics that are of interest to consultants, CISOs, and other cybersecurity professionals. Hello, I'm Dejan Kosutic, the CEO at Advisera and the host of Secure and Simple Podcast. Today, my guest is Monzy Merza.

Dejan Kosutic:

He's the co-founder and the CEO of Crogl, an agentic AI platform for security operations. Prior to Crogl, he worked as a security researcher for the US government, and also for Splunk and Databricks. So in today's podcast, you'll learn what the best practices for security operations are, and how AI is changing those processes. Welcome to the show, Monzy.

Monzy Merza:

Great to be here Dejan. Good to see you again.

Dejan Kosutic:

Great to have you here. So you mentioned earlier that we are entering this agent versus agent world. What do you actually mean by that?

Monzy Merza:

I think if we look at security practices before, it used to be that there was an attacker somewhere, a bad actor would attack an organization, and it was a human actor driving some tools, but it was a human actor that was in charge. What we're seeing now with the development of AI is that bad actors are tasking agents to go and run campaigns and execute attacks, but once those agents have started their work, they don't necessarily go back and ask the human, Hey, do you approve this or do you approve that? So the velocity at which things happen is very, very fast. I think we saw a couple of recent examples of that: the attack against the Mexican government, where there was one attacker and one agent, and another announcement that was made recently with McKinsey. Very similar situation, one attacker, one agent, very short windows of time to execute very, very sophisticated attack campaigns inside of an organization.

Monzy Merza:

So that's what I mean by that: we're now living in a compressed attack scenario in terms of time, but we're also starting to get into a world where the cost of conducting any given sophisticated attack is much lower. Someone with an idea now has the capacity to have a smart agent execute those attacks. And what that means, on the flip side, is that enterprises will have to defend not just against human-based speed and human-based attacks, but agent-based speed and agent-based attacks. And so now we're in this agent-to-agent world. And to defend against that agentic speed, enterprises will need to use agents, because otherwise we are not fast enough as people.

Dejan Kosutic:

Yeah, it's very interesting, and I would say very realistic, unfortunately. Let's analyze this a little bit. If you can just explain briefly, what are the core security operations activities, and how is AI changing each of them?

Monzy Merza:

Yeah, so I think when we look at it from a security point of view, some people like to draw a spectrum of things that happen pre-attack, during an attack, and post-attack. And if we look at it from the perspective of the security teams, some members of those teams deploy technologies: they might deploy an endpoint detection platform, or log monitoring solutions, and that's preparatory work. Then they deploy alerting systems, so they can get alerts and see if something bad happened or something went wrong, whether from a compliance perspective or a threat perspective. And the third piece is what they do after the attack: if they detect an attack, how they continue and investigate.

Monzy Merza:

So I kind of divide the world into these three spaces. One of the observations we can make on the tools side, the preparatory side, is that more and more security tools on the defensive side are starting to use AI in the tools themselves. That could be something as simple as making it easier to ingest information or to build connectors to data lakes faster, all the way to actually creating better detection logic and better alerts. Then you have this middle pillar: okay, you've got an alert, what do you do? That's the world Crogl lives in. When an alert comes in, do you investigate that alert? Human analysts historically would have to go to one data lake and another data lake, and the ticketing system and the email system and the cloud system, and connect all these dots together. And we're seeing this emergence of AI SOC agents that are able to do a lot of that work, and really bring a lot of value to organizations.

Monzy Merza:

And then the third piece is the response space, where AI agents now have this opportunity: as you see activity going on, they can make configuration changes in an environment, for instance, they can stop an activity, or remove an email from a mailbox. It used to be SOAR platforms that did this, which required rules. Now those platforms are evolving to use AI to do these things a lot faster, with less brittleness and fewer rules, acting more from an understanding-based perspective. So we're seeing technologies emerge in all three of these areas.

Dejan Kosutic:

Let's maybe focus on the second and the third phase. How exactly do these agents work, let's say, on the defender side?

Monzy Merza:

So I'll give you an example from one of Crogl's customers. It is a very sophisticated, large enterprise, and they have three different security incident management and response systems. You would think that they should just have one, but that's not true. People are using whatever tools they need to utilize.

Monzy Merza:

Very large global enterprise, with about 100 analysts worldwide. What this organization wants is that when an alert comes into their environment, they can connect to the different data lakes where they're storing their data and automatically figure out whether this alert is a real signal of a breach, or just some sort of a false positive. So with Crogl as the AI in this example, Crogl can receive the alert from the alerting system, connect to the different data sources and data stores that they have, enrich that information, and walk through a MITRE ATT&CK kill-chain analysis to see if this particular alert manifested itself in some big problem or not.

Monzy Merza:

And that is all done through automation and this agentic system under the hood, so it can connect the dots between different data sources and run different queries against different data lakes with different query languages, and the user doesn't have to type any of those query languages. The user, the analyst in this case, doesn't have to remember the schemas, doesn't have to remember that email data sits over there and endpoint data sits over there. And at the end of the execution, Crogl gives a report to the user saying, well, this is what happened. So that's a very clear example of the evolution of where we're going in terms of the investigative cycle. And then if we push to the farther pillar and ask, okay, what's the response to this?

Monzy Merza:

What we're able to do now is communicate with other technologies and send the signal to remove something from a mailbox, for example. Or, in a more compliance-driven way, from an audit or regulatory requirement point of view, we can automatically document what is going on with this investigation, what should be done, make a recommendation, or also call out the gaps in the investigation, because maybe a certain data set was required to investigate this but was not present. So really taking that all the way to the extent of enabling people to go and remediate the problem faster.
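To make the investigation loop described above more concrete, here is a minimal Python sketch of an agentic triage flow: receive an alert, enrich it from several data stores behind connectors, and produce a verdict. All names and the verdict logic are illustrative assumptions, not Crogl's actual implementation.

```python
# Hypothetical sketch of an agentic alert-triage loop.
from dataclasses import dataclass, field

@dataclass
class Alert:
    id: str
    indicators: dict                      # e.g. {"user": "bob", "host": "wks-042"}
    evidence: list = field(default_factory=list)

def enrich(alert: Alert, data_sources: dict) -> Alert:
    """Query every connected store for context on the alert's indicators.
    Each connector hides its own query language and schema from the analyst."""
    for name, query_fn in data_sources.items():
        hits = query_fn(alert.indicators)
        if hits:
            alert.evidence.append({"source": name, "hits": hits})
    return alert

def assess(alert: Alert) -> str:
    """Toy verdict: a real agent would walk the evidence through
    kill-chain stages rather than just counting corroborating sources."""
    if len(alert.evidence) >= 2:
        sources = ", ".join(e["source"] for e in alert.evidence)
        return f"escalate: corroborating activity in {sources}"
    return "close as likely false positive"

# Each lambda stands in for a connector running a native query
# (SPL, KQL, SQL, ...) against that particular store.
sources = {
    "email":    lambda ind: [f"{ind['user']} forwarded 40 mails externally"],
    "endpoint": lambda ind: [f"unusual process tree on {ind['host']}"],
    "cloud":    lambda ind: [],           # nothing found here
}
alert = Alert("A-1", {"user": "bob", "host": "wks-042"})
print(assess(enrich(alert, sources)))     # escalate: ... email, endpoint
```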

Dejan Kosutic:

And how do you actually draw a line between what AI agents could do, let's say, completely autonomously, versus where they actually need to have a human in the loop?

Monzy Merza:

Yeah, I think there we first have to agree a little bit on an operating model. If we agree that there are lots of agents the attackers are using, and those agents are faster than a human attacker and cheaper to deploy than a human attacker, then we have to accept a world where, in order to defend against that kind of speed, we need similar speed, ideally faster. And so we need to use agents. Now, if you start to mediate agents with humans, you're back to the same problem again, because you're not moving as fast. So we have to come to a conclusion and divide up the workloads, or the categories of work, into work that will be mostly or almost all autonomous, and work that is going to require a human in the loop.

Monzy Merza:

So I would propose to the audience that we have to start imagining a world where the human is sometimes in the loop for some things, and for other things doesn't always have to be in the loop. And I like to use sort of a phone analogy: if you go all the way back, there were these PBX operators who would plug the cord in when somebody made a call, to connect the dots. That was a long time ago. Then we came to phones we could dial ourselves, and now, whether you use Google or Apple or whatever technology, you can just say, call my friend, and the phone call just happens. So we've reduced the human in the loop to a great degree, because the technology maturity has evolved. It's a long-winded answer, but I think that's the way we have to approach the world.

Monzy Merza:

So now the question is, at what point should it be fully autonomous, and at what point should this agentic security operation have a human in the loop? I'll just share the experience we're seeing with customers. What we're hearing is that at the very beginning, when you deploy an AI system, people want to build confidence in that system for a very specific set of use cases, so that they know the system can just run on its own. So you take a small subset, and you execute against that.

Monzy Merza:

And then when you're satisfied, okay, I'm going to let the system build the context and do the investigation, I'm convinced of that now, you let the system do that. But the decision-making responsibility still stays with the human. So that's step two, where the system is autonomously running on certain use cases, and the human is the decision maker. And then slowly, you keep expanding the aperture of how many things the system is able to do independently, and the human is still the decision maker. And at some point, you can start to test whether the human's decision and the machine's recommendation are almost always in alignment.

Monzy Merza:

And if they're mostly in alignment, then you can take those use cases, and those use cases can be fully autonomous, and the human no longer has to be the decision maker. And there are certain circumstances where the machine hasn't seen something in the past. For example, in security operations, oftentimes somebody might have an advisory or a threat intel report. The machine has never seen it before, so it may have some idea, based on prior knowledge, of how to operate on it, but maybe a human wants to have that experience interactively, one step at a time, using the machine as an advantage, where maybe they don't write query language or code, but they're able to guide the machine through that process.

Monzy Merza:

So that's how I like to divide up the world: there are certain tasks that can certainly be fully automated, and there will be other tasks that require human decision making that cannot be automated, at least for some time in the future.
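The staged rollout described above, autonomous investigation first, human decisions retained, then promotion once human and machine agree often enough, can be pictured as a simple agreement-rate gate. A minimal sketch, with the threshold and minimum sample size as purely illustrative assumptions:

```python
# Sketch of promotion logic: a use case becomes fully autonomous only
# after human decisions and machine recommendations have agreed often
# enough. Numbers below are assumptions, not recommendations.
from collections import defaultdict

AGREEMENT_THRESHOLD = 0.98
MIN_SAMPLES = 200

history = defaultdict(list)   # use_case -> list of True/False agreements

def record(use_case: str, machine_verdict: str, human_verdict: str) -> None:
    """Log whether the machine's recommendation matched the human's decision."""
    history[use_case].append(machine_verdict == human_verdict)

def autonomy_level(use_case: str) -> str:
    records = history[use_case]
    if len(records) < MIN_SAMPLES:
        return "human decides"            # still building confidence
    agreement = sum(records) / len(records)
    return "fully autonomous" if agreement >= AGREEMENT_THRESHOLD else "human decides"

record("phishing-triage", "benign", "benign")
print(autonomy_level("phishing-triage"))  # "human decides" (too few samples yet)
```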

Dejan Kosutic:

Yeah. And what do you think, are there any decisions that in the foreseeable future will have to remain with humans?

Monzy Merza:

Yes. I think there will be a lot of decisions that will have to remain with the human. And it's not so much a mark on the maturity of the AI technology itself; it is an operational and process-oriented reason. Take an example from security operations today: suppose I want to say that a person is an insider threat to the company.

Monzy Merza:

I've gathered all of this evidence, and now I'm about to accuse someone, and potentially kick them out of the organization, or take them to court. I think those kinds of decisions will continue to require a human being to very carefully assess the machine's determination of what has happened, so that we're not making mistakes and wrongfully accusing people, or causing damage to the organization, or releasing some piece of information too quickly into the news, or sending an email that maybe should not have been sent because somebody should have checked with the legal department, for example. So I think there will be a lot of examples of process and procedure in the operational world that will continue to be very, very human-mediated, at least until significant policy changes happen. I think that's one area.

Dejan Kosutic:

Okay. All of these things that you mentioned are actually, let's say, organizational things, maybe legal things, but they're not related directly to the incident response, right? Within the domain of incident response, will there be any decisions that will need to be made by humans in the future?

Monzy Merza:

Yeah, and I used that example as an extreme one because it has facets of multiple parties at play. Let's go back to the insider threat use case. We recently did a study on AI, and we're going to publish a report on it very soon, that talked about people's concerns. Over 40% of the organizations we surveyed are still very concerned about insider threat. And what oftentimes happens is insider threat alerts start in the Security Operations Center, because the alert looks something like, Bob is moving too many emails out of his organization, or data has been leaked, certain important intellectual property documents have been moved from one area to another where they should not have been moved. So those things usually start as an alert, and even if the AI is running and trying to figure out and connect all the dots, it still ultimately has to come to a human analyst.

Monzy Merza:

And that human analyst should then make the decision: okay, I have the data, and the data is saying these things happened, but is it really Bob? Can we really ensure that it was Bob who did this? Because maybe it wasn't Bob at all. Maybe somebody co-opted Bob's computer because there was a piece of malware on it, and that's why it happened. So I think the same goes for these high-priority use cases, or if you're going to remove somebody's access because there is a transaction happening right now and somebody brute-forced into a person's account, and now you need to remove that access.

Monzy Merza:

Like, do you really want to remove that access? Or is it because that person was in their car and they were trying to type and they got locked out because things were bumpy, right? In that case, no, you shouldn't kick them out of the system. So I think there are a lot of reasons like that why humans have to be continually in the loop in security operations.
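One way to encode the boundary drawn above is an action-authorization policy: read-only enrichment runs autonomously, while consequential actions always queue for a human. A minimal sketch; the action names and categories are assumptions for illustration.

```python
# Sketch of an action-authorization policy for a SOC agent.
REQUIRES_HUMAN = {
    "disable_account",          # the "locked out in the car" scenario
    "accuse_insider",           # legal / HR consequences
    "isolate_production_host",
}
AUTONOMOUS = {"query_logs", "enrich_indicators", "remove_phishing_email"}

def authorize(action: str) -> str:
    if action in AUTONOMOUS:
        return "execute"
    # Default deny: consequential or unknown actions go to a human.
    return "queue for analyst approval"

print(authorize("enrich_indicators"))   # execute
print(authorize("disable_account"))     # queue for analyst approval
```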

Dejan Kosutic:

You also mentioned, if I understood well, a kind of maturity, where it takes time to align the decision making between humans and AI systems. How do you actually achieve this maturity? What kind of method needs to be used here to really gain more trust in how AI systems make decisions?

Monzy Merza:

I think we're so early in this journey. Right now, we're almost like kids at a celebration who found firecrackers or little sparklers, and they're running around, they're all really happy, everybody's excited. I think that's the phase many of us are still in, where the builders of AI technologies are working really hard to build really, really good technologies, and the space is moving really fast. But on the organizational and process side, in really trying to understand what maturity looks like, I think we're still early. There are a number of organizations, especially in The US, like NIST and other government organizations, even MITRE and others, who are trying to understand how to communicate this so that organizations can mature through it.

Monzy Merza:

I don't think anybody has the answer to that, so I kind of dodged your question a little bit. But then the question is, okay, what do I do? Because people are starting to adopt these technologies. What we see people doing in practice is acquiring technologies to help them, and I'll use a security operations example again: being really focused on the use case, not saying, I'm buying AI, but saying, it takes too long to investigate medium-priority alerts, most of them are false positives anyway, but I still need to investigate them, and it takes too long, and I don't want to devote people to that problem; I want the AI to address that problem for me. So you have a very clear outcome, you have a very clear use case, and it's easy to measure whether you're getting the outcome or not after that experiment.

Monzy Merza:

So I think those are the kinds of patterns that will be helpful to people, rather than saying, yeah, we're gonna get AI, we're gonna do magic. I think people who approach it that way will be generally unsatisfied.

Dejan Kosutic:

Okay. Now, if most of these things start being done by AI agents, does this mean that humans like security analysts will simply not be needed anymore?

Monzy Merza:

I think what will happen, and I wrote a blog post on this topic not too long ago, is what we're observing. Let's just talk about base principles for a moment, right? If the threat actors have AI and can attack using it, and the defenders have AI and can defend using it, then the AI advantage, you can argue, is now a common denominator. Within reason, we can always argue that the attackers have a greater advantage just by the physics of attack versus defense. But outside of that, both parties now have an agentic set of platform capabilities. So now the question becomes, who is going to be superior?

Monzy Merza:

Because if you're attacking me and I'm defending, I can buy the same model that you're using to attack me. I can do all that. So at the end of the day, what's different is how we design the systems and how we choose to operate those systems. And that brings it back to the security teams. There is a little bit of a word game here, if you forgive me: I think there are going to be fewer analysts in the future.

Monzy Merza:

However, I think there are going to be more security engineers in the future. And I believe the total number of people in security operations is actually going to increase, not decrease. Because, and this is another base principle, we have a completely net new footprint. If you go back a while in time: the internet was formed, then enterprises had sophisticated networks, internet connectivity, SaaS products, then eventually cloud came around, and mobility came around. Each one of these created new footprints and new threat landscapes. Now, with the emergence of AI, we have a new footprint, a new terrain, and a new threat landscape, and that is going to evolve, and more people are going to need to work on it.

Monzy Merza:

Now, people are going to have to learn new skills and evolve alongside that. But there are going to be more people, more security engineers, in the future than there are today, because all of this is going to require effort and human decision making.

Dejan Kosutic:

And for these security engineers, what kind of skills will be needed in the future? And related to that, what kind of capabilities will defenders have to create to stay on the leading edge and really be able to beat these attackers?

Monzy Merza:

Yeah, I believe it's going to go back to some of the basics, or maybe a reinforcement of some of the basics. Security engineers are going to get an opportunity to spend more time learning about the environment, the business processes, and what the outcomes and priorities are for the business. As a consequence, they'll be able to design better systems and better processes using AI, both for the usage of AI for defense, and for the overall constructs of what the enterprise systems, the IT infrastructures, or the interfaces of different applications and services should look like. So they're going to have to learn more about technology, and start to think in terms of being a designer and an architect, and less about trying to constantly remember how to write a particular query in a data lake, for example, or where the logs are. Because that's what oftentimes happens right now with security operators: when there's an alert, they try to figure out, well, who do I call?

Monzy Merza:

Where is the log event? How do I query? How do I stitch it together? How do I submit the report? They're at that level right now.

Monzy Merza:

Now, with agentic systems, a lot of that work can go away, because you can simply ask the question, or the system already has an agent to do that work, so you can elevate yourself and start thinking about broader sets of problems. Having said that, I don't think anybody really has a crystal ball here, because we don't know what kinds of use cases will emerge. We have some sense as a community, but I think there are more use cases coming. I'll give you a concrete example.

Monzy Merza:

If I had come to you ten or fifteen years ago and said, you know, Dejan, there are going to be all of these alerts coming from this thing called GuardDuty, and we're all going to have to pay attention to it, because there's going to be this thing called the cloud, and people are going to use it, and there are going to be all these ephemeral credentials and all these ephemeral machines, and we're going to have to secure all of that. And you would have said, Monzy, get out of here. It's just compute. It's just some storage and it's just some networking. We know how to do that.

Monzy Merza:

It's just the same, it's just that somebody else is managing it. It's the same as an enterprise network. But it was a completely new footprint. And so we created new things to work on, new things to do and to manage and defend.

Monzy Merza:

I believe it's going to be very similar. We haven't seen all of it. There's going to be a lot more change. I don't think anybody really has the answer. But one thing we know for sure, I believe strongly, is that there's gonna be more of it.

Monzy Merza:

It's not gonna be less of it.

Dejan Kosutic:

I mean, agentic AI in itself has security vulnerabilities, obviously, with prompt injection and so on. Isn't it actually kind of absurd that you want to increase the level of security by using more of a technology which is insecure in itself? How do you resolve this?

Monzy Merza:

I think we have to look at this from a perspective of maturity, again, as consumers and users of the technologies. There are lots of implementations of AI technologies that have security vulnerabilities. Most recently, people have downloaded things like ClawBot and a whole bunch of other agents that they've tried to use that have a lot of different vulnerabilities. But there are other developers of agentic technologies, and I'll include Crogl in this bucket, who spend an insane amount of time watching over how secure the system is. For Crogl in particular, that is one of our big value propositions to our customers: Crogl is a secure and governable system.

Monzy Merza:

It doesn't mean that every AI product is like that, but it's a quality of an AI product that can be achieved. So it's a maturity-curve thing, and as new vulnerabilities emerge, we're going to have to manage those vulnerabilities. I feel like I keep going back in history, but there's so much to learn from the other inflection points. Whether it's networks or operating systems or anything else, all of these systems have had vulnerabilities. And over time, as the discipline matures and we manage those vulnerabilities, we get the benefit.

Monzy Merza:

Even something as simple as a car, although cars are not very simple anymore: they have vulnerabilities too, and there are risks and there are dangers, but we use them, and we manage those risks. Similarly, any technology that brings great benefit will have these vulnerabilities. And that goes back to the buyers. The buyers, and especially the security operations users, have to ask the right questions of their providers, their vendors, their technology developers: how do you manage these risks? And have an open conversation.

Dejan Kosutic:

Okay. And is there some kind of, let's say, unit test, or some kind of scenario, where you could actually measure whether an AI agent within a SOC is secure enough, or whether it performs well? Is there a way to measure this maturity?

Monzy Merza:

Yeah, I think there are some techniques you can use that are fairly straightforward. I mean, at the very base level, you can use a lot of the vulnerability scanners and vulnerability assessment systems; ReversingLabs and others have capabilities that can look for basic patch-management kinds of issues, so that the software you're buying, the agents and the containers and all of those capabilities, doesn't have inherent vulnerabilities in it. A lot of these scanning technologies can test for things like whether passwords are stored in the clear, or passwords are just passed around with no secret storage, and all of that. So at a base level, the simple approach is: whatever technology you're getting, you have to be able to assess it. Now, if it's a SaaS technology, oftentimes you can't do that.

Monzy Merza:

So then you have to ask your SaaS provider: hey, give me some assurance. How do you do vulnerability testing? How do you measure? Do you have a process? And some organizations do a really good job.

Monzy Merza:

I will put Crogl in that bucket. Other organizations, and there are many of them, don't do a very good job. They just hide behind the fact that, oh, we don't share that information. I think that's a bad answer, because people who are building security tools should provide great assurances to their customers on the security of their platforms. So I think that's one way to do it. The other way we're seeing is there are a number of companies emerging in this space now, just like the companies in the non-AI, non-agentic space, for vulnerability testing and for red teaming.

Monzy Merza:

And you can procure these services, and you can red team your AI agents, or other products that use AI agents that you're procuring. So I think there are some best practices emerging in that area, and I think there are clear opportunities today.
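As a toy illustration of the base-level checks mentioned above, such as passwords stored in the clear, here is a sketch that scans config-like files for obvious cleartext secrets. Real assessments would rely on dedicated scanners; the patterns and file types here are deliberately simplistic assumptions.

```python
# Toy cleartext-secret scanner for an agent's config directory.
import re
from pathlib import Path

SECRET_PATTERNS = [
    re.compile(r"(?i)(password|passwd|api[_-]?key|secret)\s*[:=]\s*\S+"),
    re.compile(r"AKIA[0-9A-Z]{16}"),   # shape of an AWS access key ID
]
CONFIG_LIKE = {".yaml", ".yml", ".json", ".conf", ".cfg"}

def scan_for_cleartext_secrets(root: str) -> list[str]:
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        if path.suffix not in CONFIG_LIKE and path.name != ".env":
            continue
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if any(p.search(line) for p in SECRET_PATTERNS):
                findings.append(f"{path}:{lineno}: possible cleartext secret")
    return findings

# Hypothetical directory name; prints one line per suspicious match.
for finding in scan_for_cleartext_secrets("./agent-config"):
    print(finding)
```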

Dejan Kosutic:

Okay. Let's switch gears a little bit towards governance and, let's say, standardization. Has any kind of framework emerged which would describe what AI-powered security operations need to look like?

Monzy Merza:

We haven't seen anything that has really taken hold. I think a number of organizations are trying, and when we work with our customers during the buying cycle, we will get questionnaires, essentially. The organizations we engage with are generally very, very mature. They're Fortune 500, Fortune 1000 organizations. They have very intelligent, capable, and well-resourced teams.

Monzy Merza:

Or we're dealing with government organizations that also have very capable, competent, well-resourced teams. We see some common denominators in those questionnaires, but I haven't seen anything where I could say, oh, this looks the same, you know, like sections seven through eleven of a certain format. I think a lot of people are trying to figure that out right now. There is no governance model, and I know you're not asking this question, but I feel that adoption of AI technologies is already well on its way, so the governance teams and policy teams have a tall order in front of them: to catch up and provide good guidance without slowing their organization down.

Dejan Kosutic:

I hope that we'll see some kind of frameworks, because they will certainly make it easier for everyone to align. Okay, but from your experience, how do you actually set up governance, and basically how do you control and manage these security operations that are more and more automated?

Monzy Merza:

Yeah, and the governance practitioners know this really well: you always want to start with the end goal in mind, with the business objective you're trying to achieve. If we take the security operational units as the part of the business you're trying to write policy for, then you have to appreciate the fact that these teams need to operate a lot faster using an emerging technology, but at the same time that technology has to be secure and operational and testable. I think that should be the grounding principle, and then you have to map it back to your organizational priorities on how you pull things together. So I'll give you an example, right?

Monzy Merza:

It's one thing to have a questionnaire that says, and let's say you're the buyer and I'm the vendor, how do you prevent hallucinations in your product? Well, I can give you an answer to that, but fundamentally I would say that, to some degree, hallucinations and error in an AI product are a feature, not a bug. And I'll tell you why, I know you're smiling: because if the system were too rigid and had no flexibility, it would be a rules-based system. Then you're back to a rigid, structured system. And look at how humans operate.

Monzy Merza:

For us, creativity and a little bit of a leap of faith is a feature. That's how we move things forward. We take guesses, probabilistic guesses, but we take guesses at things. Sometimes we make very poor judgments. I use that hallucination example because it's so top of mind for so many people.

Monzy Merza:

So yes, you should ask that question, but there has to be at least one more follow-on question. And that follow-on question should be: under what circumstances does the system give answers that may not be correct? How do you measure that? And what are those error bounds? Then you can understand the usage of the system and make up your own mind. You don't want somebody to say the system doesn't hallucinate at all, because believe me, you don't want that AI system.

Monzy Merza:

And on the flip side, you don't want them to say, well, that's just the nature of AI, so good luck to you; if you're really concerned about it, you can't really manage it, you're on your own. I think the reasonable answer is: we know the system hallucinates, it hallucinates under these criteria or under these circumstances, and the system is also smart enough to know when it hallucinates, because there's a check in the system. So when you run a query, for example, and the result has an email address spelled www dot dejan dot kosutic dot something-something, and it doesn't have an @ symbol in it.

Monzy Merza:

Well, that's clearly incorrect. So there should be something in the system that can look it up and say, that doesn't look like an email address, or that is an incorrect email address, because your organization's format is firstname dot lastname at company dot com. And if this one comes in with some other format, then you either have an anomaly or the system is hallucinating.
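The self-check described here amounts to validating model output against known formats. A minimal sketch, assuming a hypothetical firstname.lastname@company.com convention:

```python
# Sketch of a hallucination check: validate that a value the model
# produced actually matches the organization's email format.
import re

ORG_EMAIL = re.compile(r"^[a-z]+\.[a-z]+@company\.com$")  # assumed convention

def check_email_claim(value: str) -> str:
    if "@" not in value:
        return f"rejected: '{value}' is not an email address at all"
    if not ORG_EMAIL.match(value):
        return f"flagged: '{value}' deviates from the org format (anomaly or hallucination)"
    return "accepted"

print(check_email_claim("www.dejan.kosutic.example"))    # rejected: no @ symbol
print(check_email_claim("dejan.kosutic@company.com"))    # accepted
```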

Dejan Kosutic:

Okay, but how would you typically align, let's say, the business goals of a company with security operations? I mean, where is this link, and what is the best method to do it?

Monzy Merza:

So I'll give you an example. We ran the AI SOC Summit a couple of weeks ago, and our keynote speaker, Ajit Gadam, who is a fraud and trust executive at HealthEquity, gave a very important example. He said: when a HealthEquity client goes to a doctor and swipes their card for the first time, it's usually at a place they've never been to before, it's usually with a provider where we haven't seen access for that person before, and it's usually a large dollar amount.

Monzy Merza:

So that just says fraud, fraud, fraud all the way through for any transactional system. But he said, in our case, that transaction has to go through, because that is my customer, and they only use this card when they need it most. So his security operations teams and his fraud teams have to really understand that business use case. They can't take a detection model or detection script of some kind off the shelf and just implement it. The rules of the business and the expectations of the business have to manifest themselves in security operations, so that when an alert happens, the security team knows whether it's happening in line with the business processes.

Monzy Merza:

And I think that's where there's an advantage with AI agents, where it's easier to implement lots and lots of these things, because a lot of these agents have this ability. And again, to plug Crogl: Crogl can do this, it takes a document and maps it to the detection, response, and investigation criteria for the operator. This way you're not running around raising an alarm for something that is normal in the context of your business process.
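The HealthEquity example boils down to mapping documented business expectations onto triage rules before a generic fraud model fires. A minimal sketch; the rule structure and field names are illustrative assumptions.

```python
# Sketch of business-context rules applied ahead of a generic fraud model.
BUSINESS_RULES = [
    {
        "name": "first card use at new provider",
        # A pattern generic fraud models would flag, but that the
        # business says must go through:
        "match": lambda tx: tx["first_use"] and tx["new_provider"] and tx["amount"] > 1000,
        "action": "allow",
    },
]

def triage_transaction(tx: dict) -> str:
    for rule in BUSINESS_RULES:
        if rule["match"](tx):
            return f"{rule['action']} (business rule: {rule['name']})"
    return "score with default fraud model"

print(triage_transaction({"first_use": True, "new_provider": True, "amount": 2400.0}))
```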

Dejan Kosutic:

Okay, so in the context of governance, once you align the goals with the business, what other elements do you need to have in the governance of these security operations that are more and more automated?

Monzy Merza:

Yeah, so a couple of things are emerging as we talk to customers. The first set of expectations that people are now arriving at is that data does not live in one place. Organizations have multiple data lakes, but they also have multiple applications and services. From a governance point of view, the policy writers now have to think about the fact that these agents can have the ability to connect to, say, my financial system. During a security investigation, I get an alert that says, you know, Bob is an insider threat risk, or malware was detected on Bob's machine.

Monzy Merza:

And so I want to know: what's the blast radius of this particular alert? From a governance perspective, you could say, well, we can't give you access; the security team is not allowed to have access to the financial system, or to the human resources database, or the travel management system. Previously, that access was difficult anyway. You could say in theory that they should check what's going on, but it was difficult to do.

Monzy Merza:

Today, integrations are really fast, they're easy to accommodate, and they're easy to allow if you really want to allow them. So now the policy authors have to think: is this the kind of thing I want to allow? Because now the investigations are going to be deeper. And it goes to both sides of the equation. They can demand more detailed, in-depth requirements of the security operations team.

Monzy Merza:

And on the flip side, they can enable the security operations team to have deeper access. What we're learning is that oftentimes people will hook up not just their security data lake; they'll connect Crogl to their other systems, like S3 buckets, for example, that hold data not directly related to security alerts or to a security sensor, but that carry other information, so you can tie those pieces together and really do a broader assessment. So from a vendor perspective, the governance leaders, the policy writers, have to ensure that whatever product you're buying is flexible enough to connect to lots and lots of data sources. That's one item. The second piece is this: just as we treated data stores and data lakes in the past, where we would say, I want to send my security data from my endpoint system and my email system and my authentication system, you now have to expect to live in a world where you're going to choose your own models, your own large language models, because different models are good at different tasks, and different agents are good at different tasks.

Monzy Merza:

So you have to buy products now, AI-driven products or agentic products, that have that flexibility. And specifically, the requirement you want to write in is one that allows you to bring your own model, or to choose your own model. It's not just about procuring, because businesses are starting to create their own models in house, because they have proprietary data, and they create models based on that data. The security team needs access to those models, so they can better sensitize whatever they're working on to the broader business objective. So that's another example.
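The bring-your-own-model requirement essentially asks that the product talk to models through one small interface, so a hosted model or an in-house one can be swapped in. A minimal sketch with hypothetical classes; no real provider API is implied.

```python
# Sketch of a model-agnostic interface enabling "bring your own model".
from typing import Protocol

class Model(Protocol):
    def complete(self, prompt: str) -> str: ...

class HostedModel:
    def __init__(self, endpoint: str, api_key: str):
        self.endpoint, self.api_key = endpoint, api_key
    def complete(self, prompt: str) -> str:
        # Would POST to the provider's API here.
        return f"[hosted:{self.endpoint}] answer"

class InHouseModel:
    def __init__(self, weights_path: str):
        self.weights_path = weights_path
    def complete(self, prompt: str) -> str:
        # Would run local inference on proprietary weights here.
        return f"[in-house:{self.weights_path}] answer"

def investigate(alert_summary: str, model: Model) -> str:
    """The same investigation code runs against any conforming model."""
    return model.complete(f"Assess this alert: {alert_summary}")

print(investigate("impossible travel for bob", HostedModel("api.example.com", "KEY")))
print(investigate("impossible travel for bob", InHouseModel("/models/sec-llm")))
```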

Monzy Merza:

And I think the third example is really in the governance space, where policy writers need to demand from their AI security providers, or agentic security providers, products that are transparent. So, just like our example about hallucinations, the product should be inspectable. The product itself should be secure, so it should not have vulnerabilities and known issues that are unacceptable. And really, the product should have a concept of working the way your organization works. Just like the HealthEquity example: that transaction pattern would look very bad at a typical credit card company, but the system has to be sensitized to the business process, right?

Monzy Merza:

So the agentic system needs to be flexible enough that it can adapt to your processes, and not be a system that somebody just thrusts upon you and says, trust me, this is really, really cool, you should just use it. So I would say those are the other three elements of governance that should be asked about and should be requirements.

Dejan Kosutic:

And how do you normally, let's say, distribute roles and responsibilities for security operations, especially again when they're mostly automated?

Monzy Merza:

Yeah, so I think things are going to have to change, and there are a lot of organizations who are kind of bouncing around, and this is another one of those areas where I don't think anybody really has answers. People are trying lots of experiments. I was at a conference recently where the chief information security officer of a publicly traded company said, we are not going to hire any more security analysts. That's one extreme view of the world. And she did not add a caveat like mine, saying, well, they're not going to be analysts, they're going to be engineers.

Monzy Merza:

She just said, no more security analysts. On the flip side, at the same conference, the chief information security officer of a very large AI company said, I'm hiring more and more security engineers for my security operations. So people are trying different strategies and different tactics to tier their jobs and their role functions. The tiering model used to be that tier one is the first filter in a security operation, tier two are the ones who do the investigations, and tier three are the ones who do more proactive and threat-hunting kinds of work. That's how the tiering goes.

Monzy Merza:

I think that segmentation is going to get compressed. Some people are obviously going to be more competent than others as they gain more experience, and they may be better at using tools, but the segmentation walls are going to shift. It's going to be like an accordion: you compress some things, and some other things start to open up, because people are going to specialize. But I don't think anybody really has the answer, other than lots of posts in the social media space saying tier one is going to go away, or tier two is going to go away. Yeah, maybe, but I believe it's not going to go away. I think people are just going to change what they work on on a day-to-day basis, but there is still going to be segmentation of roles.

Dejan Kosutic:

Yeah, and certainly, as with any other revolution, the AI revolution will bring lots of new roles that we still cannot see, and this will probably happen already in the next two or three years.

Monzy Merza:

Yes, I believe that very strongly. And this is not just looking at a crystal ball; we just look at what's happened in the past with the emergence of technologies that followed this sort of pattern. Now, one thing is notable, and I don't want to just say, oh, this is the same as everything that happened before. What's notable about what's happening right now is we sometimes forget how fast things are moving in this space. The GPT-3.5 public preview was, I think, announced sometime around November 2023.

Dejan Kosutic:

2022 actually, I think.

Monzy Merza:

So yeah, so 2022. So I'm off, I'm even off by a year. And so we're living in a very compressed time window. Things are moving very fast.

Monzy Merza:

So I think a lot is going to change. A lot has evolved. Nobody used to say agent this time last year. Some people were starting to say it. Most people did not say it.

Monzy Merza:

Now it's in the general vernacular of everything we read. So things are moving very, very quickly. People are deploying agents and bots and all these things, downloading and installing them, secure or insecure, it doesn't matter, but it's making its way into the public mainstream. So I think we have a lot to learn and a lot to see. But at the same time, I would say we have to practice a little bit of caution.

Monzy Merza:

But at the same time, I think organizations have to be very aggressive and continue to learn, because these agents bring significant competitive advantage. And I can tell you, as a startup CEO myself, that what we're able to do at Crogl as a small team would probably have required 100-plus, 200 people five years ago. We are more capable, more accurate. This is true for business things, it's true for security things, and it's true for product engineering and all the different facets.

Dejan Kosutic:

Okay, great. And to wrap up the call today, what would be the top things that you would recommend to security officers when using this agentic AI in security operations?

Monzy Merza:

I think my biggest recommendation would be: you have to be specific about your use cases, you have to be specific about your outcomes, and be okay with taking baby steps. And the next thing I would say is you have to optimize for optionality. The space is moving really, really fast, and you want to buy products and services from organizations that have built for flexibility, so that you continue to have more choice. So those would be the three things.

Dejan Kosutic:

Great. Thanks for these insights. I really learned a lot today, Monzy, and thanks again.

Monzy Merza:

Thanks, Dejan. It's good to hang out with you. I appreciate all the questions.

Dejan Kosutic:

Great. Thank you everyone for listening to or watching this podcast, and see you again in two weeks' time in our new episode of Secure and Simple Podcast. Thanks for making it this far in today's episode. Here's some useful info for consultants and other professionals who do cybersecurity governance and compliance for a living. On the Advisera website, you can check out various tools that can help your business.

Dejan Kosutic:

For example, Conformio software enables you to streamline and scale ISO 27001 implementation and maintenance for your clients. White-label documentation toolkits for NIS 2, DORA, ISO 27001, and other ISO standards enable you to create all the required documents for your clients. Accredited Lead Auditor and Lead Implementer courses for various standards and frameworks enable you to show your expertise to potential clients. And a learning management system called Company Training Academy, with numerous videos for NIS 2, DORA, ISO 27001, and other frameworks, enables you to organize training and awareness programs for your clients' workforce. Check out the links in the description below for more information.

Dejan Kosutic:

If you like this podcast, please give it a thumbs up, it helps us with better ranking, and I would also appreciate it if you shared it with your colleagues. That's it for today, stay safe!
