Techzine Talks on Tour

A Ferrari needs brakes, innovation needs cybersecurity

Coen or Sander, Season 2 Episode 19

The cybersecurity landscape has transformed in recent years, with managed security services gaining widespread acceptance as organizations face the realities of today's threat landscape. In this conversation with Erik van Buggenhout, responsible for managed security services at NVISO, we delve into the practical challenges that have made outsourcing security operations a necessity rather than just an option for many businesses.

Erik shares an analogy that illustrates how we should think about cybersecurity: "It's like the brakes on your Ferrari. It gives you the confidence to accelerate because you know you can stop when needed." This perspective shifts security from being viewed as an obstacle to an enabler that allows organizations to move faster while managing risks effectively. That is the idea, at least.

We discuss the fundamental reasons organizations struggle with security basics: the proliferation of tools, but also the challenges of prioritizing foundational controls like asset management and patch management. Despite vendors' promises of next-generation solutions, many security teams find themselves drowning in alerts, unable to see the forest for the trees.

The conversation also touches on the rapid evolution of AI in cybersecurity. While approximately 75% of security alerts are already handled automatically through simple playbooks, agentic AI presents both tremendous opportunities and significant risks. There's a clear role to play here for the concepts of transparency and explainability, and, as a result, trust. Agent-to-agent interactions will only make this more important.

We also try to be as practical as possible for our listeners. How do you build effective defense-in-depth strategies that combine automation, AI, and human expertise in a layered approach? Erik challenges the notion that humans are the "weakest link" and instead advocates for creating security controls that users can trust.

Listen to this latest episode of Techzine Talks now!

Speaker 1:

Welcome to this new episode of Techzine Talks on Tour. I'm at the RSAC Conference 2025, as I think we should say now, because they're very, very sensitive about what you call it, and I'm here with Erik van Buggenhout. You're one of the founders of NVISO, a Belgian company, if I'm not mistaken, and you focus mainly on managed services at the moment.

Speaker 2:

Right, yes, so indeed I'm responsible for the managed security service offering at NVISO. We do a bit more than that, but that's my primary role.

Speaker 1:

Okay. And just briefly, before we get into the topic of today, which is probably something with AI, you never know: what does NVISO do? Could you explain it to me?

Speaker 2:

So we're a pure-play cybersecurity services firm. What does that mean? We don't sell products. We're not in the business of doing integration or selling you the next vendor product; that's not our business. What we do is look at how we can deliver valuable services on top of the products. Of course, we try to know very clearly which products we're good at and how we can potentially leverage them in our services. And the services range from the more high-level governance stuff, let's say how you organize cybersecurity as an organization, to the really deep pen testing, red teaming, that kind of stuff. We also do engineering, so helping organizations build things, and that could be organizing their own SOC, building out architectures, engineering capabilities. And then, as I already mentioned, my focus at the moment is really managed security services, where we do 24/7 MDR, that kind of stuff.

Speaker 1:

We saw a big increase in managed services over the past couple of years, right? Because there used to be a bit of hesitation when it came to managed services, especially when it came to security, because a lot of companies didn't really want to outsource any of their security.

Speaker 2:

But there's been quite a substantial and fundamental change in that, right? Yeah, absolutely, and I think it makes sense, to the extent that you can never really outsource your whole risk posture, right?

Speaker 2:

No, you shouldn't, of course. But also, running a quality 24/7 operation, because fundamentally that's what it is, is often cost-prohibitive for organizations. To give a silly example: if you're running an MDR, managed detection and response, you need different skills, you need analysts, from the, let's call it, more tier-one-level analysts, although we'll get to that later, there's a lot of automation there these days, thank God, up to malware analysts and forensic experts. That's a wide variety of skill sets that most organizations just cannot afford to have on staff all the time, so it makes sense that they look to partners to help deliver that. What I will say, though, is that we see more and more organizations doing a hybrid setup: they outsource the, let's say, 24/7 part, the queues, the handling of the massive wave of alerts. That's what they ask from a managed security provider like us.

Speaker 2:

The hardcore SOC stuff, basically? Basically, yes. But when it comes to custom things, and that's of course very normal, business logic: we can be the best SOC analysts in the world at NVISO, but we will never know your full business as well as your own people. And I think that's where, for me, the magic sauce or the mix is: having an expert team for the hardcore stuff and having your own people on top of that still.

Speaker 1:

And obviously you're also helped by lots of rules and regulations that are impending or already there, depending on which country in Europe you are in. The NIS2 legislation is already there; Belgium was quite early, the Netherlands is famously late to the party. But these things also have implications for your managed approach, right? Because without a SOC it's very hard to meet the reporting duties, the 24-hour and 72-hour deadlines and all that stuff. It's impossible without a SOC, and a lot of companies can't really build a SOC or keep it running. So that also helps the increase in managed services, I would expect.

Speaker 2:

Absolutely, although I will also say, and it's actually not that funny: the thing that has gotten us a lot of traction as well is just the whole advent of cybercrime and ransomware.

Speaker 2:

Which, let's say 10 years ago, really started going all over the shop. Because before, you had these organizations saying: ah, but cybersecurity, I don't have anything sensitive. Well, we all have our secrets, right? And that could be true to a certain extent. But nowadays organizations realize: it doesn't matter what we do. If our security posture is too low, we might just get hit by a ransomware actor who honestly doesn't care about us. We're just a vulnerable target.

Speaker 1:

Yeah, they probably try to attack at least 10,000 others as well, so they don't care who they're targeting. Mostly, they don't care who they're targeting.

Speaker 2:

And we have seen situations where a compromise happens and then we realize they don't even know what environment they're in. Well, they probably know roughly, but they don't really know. We've had incident response cases where we kick attackers out of an environment, and then we notice they could have done so much more if they had known, but they didn't know there was a lot of secret data. They just ransomed everything and asked for money.

Speaker 1:

Well, maybe that's also partly because of automation on the attacker side. Absolutely, they probably also use a lot of automation: just immediately try to lock everything, ask for ransom and get money. Yeah, and then you're quite lucky, because they didn't get to your very secret data, so that's good.

Speaker 1:

And so what do you see in daily practice from your managed service background or your focus at the moment when it comes to your customers? Who are they? What are their main struggles? I mean, I would assume ransomware, but I mean yeah.

Speaker 2:

So you have the typical threat actors. There's, of course, cybercrime. Criminals get creative, and increasingly so, to get money, and they will employ, I would love to say, the latest and greatest techniques, but to be fair, they will reuse things that work. And what still works is things like phishing. We know it, it's been around for ages. It's very difficult to deal with because there are just many different ways of doing it. It involves the human element, which can be fooled, and that's very annoying to fix. So that thing just keeps on happening.

Speaker 1:

Is it even possible to fix? I've been kind of coming to the conclusion that maybe we shouldn't try to fix it anymore.

Speaker 2:

Well, we make progress, and a good example of that, and I don't want to talk too much about vendors, but Microsoft is omnipresent: take Office macros, which used to be basically the primary attack vector, and it's still there. But Microsoft, I think a year or a year and a half ago, changed the default setting to put much more restrictions on macros, and that has had a big impact on how threat actors operate. They now have to find another way of doing it. It's still a cat-and-mouse game, so they're adapting, but a big hole has been plugged. It's interesting to see, of course: the water just tries to find another way. Absolutely, and that's how we keep on going.

Speaker 2:

And at one point it actually annoyed me that humans were really being blamed, like: ah, of course these people don't think, they just click on stuff. Yes, but it's not that easy.

Speaker 1:

I'm not a big fan of the old "humans are the weakest link" either. No, exactly, I don't like it.

Speaker 2:

As a cybersecurity industry, we should be able to build cybersecurity controls that users can trust. The truth is somewhere in the middle, but blaming everything on the user clicking, that doesn't work.

Speaker 1:

I heard an interesting quote last week from somebody who attended one of our roundtable discussions. Back then he worked for a big Dutch incident response company, and his CEO at the time said: I click on links, because I have a SOC. That is the mindset we should work towards at the end of the day, right? So that people can do what they want and the attackers won't be successful. I mean, that's a dream; it's probably never going to happen.

Speaker 2:

But it's about defense in depth. And the whole point is: if your entire organization grinds to a halt because someone clicked a link, then you have a very different problem. Because technically, let's say the user clicks the link, there should be controls in place on your endpoint to stop any payloads from executing. There should be, for example, web proxy controls to see: okay, what are you connecting to? And if all of that fails, you should still have a detection and response capability to see it and nip that attack in the bud.
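The point about independent layers can also be seen numerically: if each control catches most attacks on its own, the chance of an attack slipping past all of them shrinks multiplicatively. A minimal Python sketch, where the layer names and catch rates are purely illustrative assumptions, not figures from the conversation:

```python
from math import prod

# Hypothetical, independent detection layers with illustrative catch rates.
layers = {
    "endpoint protection": 0.90,   # stops payloads from executing
    "web proxy controls": 0.80,    # blocks malicious connections
    "detection & response": 0.95,  # catches what slipped through
}

def breach_probability(catch_rates):
    """Chance an attack evades every layer, assuming independence."""
    return prod(1.0 - rate for rate in catch_rates)

p = breach_probability(layers.values())
print(f"Chance of evading all layers: {p:.3%}")  # 0.10 * 0.20 * 0.05 = 0.100%
```

The independence assumption is optimistic: if layers fail together (say, because they share the same signature feed), the real number is higher, which is exactly why the layers should be genuinely different controls.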

Speaker 1:

Obviously, one of the problems with defense in depth is that it usually entails many different tools, many different controls, many different platforms and all that stuff, and that makes it even worse. I see a big increase in platformization at the moment, but on the other hand, I don't see the total number of security companies decreasing.

Speaker 2:

How does that work? Yeah, it's a very good point, and a big problem I find is that we keep on throwing tools at the problem. It's like: ah, but we need a tool for this, and then we need a tool for that, and then another tool for this. But at the end of the day, it's very often about the fundamentals, and about using what you have, because you would be surprised by how many security configurations you can get right by configuring your existing systems well. And it's about processes, of course: the whole patch management, these kinds of things. It's about asset management: knowing what you have, what you are going to protect. The old chestnuts? Yes, but the problem is those are not the easy ones to fix. There's no checkbox, no single setting that fixes them. But they are the fundamentals.

Speaker 1:

Absolutely. Without the fundamentals, you're going wrong.

Speaker 2:

Yeah, that's the point.

Speaker 2:

And you keep on throwing tools at it, and they cost money. Let's be fair, cybersecurity tooling is expensive, not cheap. A colleague of mine, I won't say his name, but it was Eric, always says: the moment something becomes a security tool, the vendor slaps a few extra millions on the price point. These things just cost a lot of money. And then look at how much overlap they start having, because these tools have a specific focus. I'm not saying we don't need tools, of course.

Speaker 1:

Do you see the number of tools decreasing in practice? Do you see that happening? Because you already mentioned companies like Microsoft; there are other big players that want to be the one-stop shop for cybersecurity as well. But is that actually really happening? Conceptually the desire is there, I think. Companies want it, but is it actually happening?

Speaker 2:

That's actually a good question. I think the desire is there for organizations, although there is also the don't put all of your eggs in the same basket, and that's also something that still lives.

Speaker 1:

But that's usually more an operational kind of thing, in my experience, right? You don't want to put all your data in one cloud because you want to have more flexibility. But from a security standpoint: sure, the security vendors themselves get breached and they have leaks, so I understand. But other than that, is there a reason for not having all your eggs in one security basket if it's a good platform?

Speaker 2:

Yeah well, there are different points of view on this. There is one which would say, and I've seen this, but it's going away a bit: you used to see organizations which had multiple layers of firewalls, and in those layers they used different products, a Check Point here and a Palo Alto there, because if one missed something, the other one could still catch it. Something like that.

Speaker 1:

Sorry for interrupting, but that's always fascinated me about the security industry. I go to many events, and in one week you're at an event and you see a huge slide on stage: "we detected 35% of things that nobody else detected". And then the next week you're at another security event and that vendor says the same thing. That left me thinking: wow, we're missing a lot. That's probably not the message they want to send out, but go on.

Speaker 2:

But what is helping in this whole mess is organizations like MITRE. They were in the press last week, I think, with the whole US government defunding story.

Speaker 1:

By the time we broadcast this, it will maybe be a month ago.

Speaker 2:

Oh, yeah, yeah, of course. Yeah, my apologies.

Speaker 1:

No, it's about the CVE database. Yeah, the CVE database.

Speaker 2:

But one of the other things that they're very good at is these attack evaluations. So they have the MITRE ATT&CK framework, as it's called, and they track all these TTPs, and then they basically look at the different vendors and test them according to those TTPs. The funny thing is, they never announce a winner. They don't even create a ranking. But the vendors all say: oh, we had 100% detection. Absolutely, because they all won: they use the data and just turn it the way they want, to say "we won". And that shows something very interesting.

Speaker 2:

And even objectively, I read all these things every year when they come out. And if 100% detection is not very hip and happening anymore, then you get: oh, maybe false positives, or maybe config changes, maybe that's interesting. As long as you add criteria, you can always find some way of being the best. Exactly. But this is where it comes to, and I think this is important: it's about using those tools. Buying them and deploying them out of the box is not enough; you need to add stuff. Let's say you buy a SIEM. It could be Microsoft Sentinel, it could be Splunk, Elastic, whatever you want. You need to build your own detection rules on top, or find a partner who can do it for you, like us. But you need to use these tools, because out of the box they only have a baseline level of performance.
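To make "build your own detection rules on top" concrete, here is a minimal sketch of what such a custom rule amounts to, written as plain Python over log events rather than in any specific SIEM's query language; the event fields and threshold are hypothetical stand-ins for real telemetry:

```python
from collections import Counter

def failed_login_burst(events, threshold=5):
    """Custom detection rule: flag users with too many failed logins.

    `events` is an iterable of dicts with hypothetical fields
    'user' and 'outcome', standing in for SIEM telemetry.
    """
    failures = Counter(
        e["user"] for e in events if e["outcome"] == "failure"
    )
    return [user for user, n in failures.items() if n >= threshold]

# Six failures for alice, two for bob: only alice crosses the threshold.
logs = (
    [{"user": "alice", "outcome": "failure"}] * 6
    + [{"user": "bob", "outcome": "failure"}] * 2
    + [{"user": "alice", "outcome": "success"}]
)
print(failed_login_burst(logs))  # ['alice']
```

The point is not the rule itself but that someone has to write it: the SIEM collects the telemetry, while the detection logic tuned to your environment is the part you add on top.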

Speaker 1:

It's not magic.

Speaker 2:

Exactly, and this often happens in organizations, unfortunately. They buy a tool, it sits there, and then they get breached. And it's like: oh, but we have this product? Yes, of course, but are you using it? Is someone watching it?

Speaker 1:

Yeah, and then they see something else and they buy a new tool, and then the next vendor comes in and says, oh, but here's our solution.

Speaker 2:

Yeah, but you're doing the same thing all over again.

Speaker 1:

You're going to buy this tool, have it sit there as well, and then it's gone. I've also seen people predicting the death of SIEMs, for example. They may change their names or whatever, and some of the cloud security tools, or XDRs or whatever you want to call them nowadays, that's very happening; they can do part of it, but they can never do the physical stuff, right? They can do the application, SIEM kind of things, but they cannot do the network, or the servers, or whatever sometimes. So all this stuff also confuses people a lot.

Speaker 2:

Yeah, but this is also, and maybe it's because I'm a big fan of no-BS, it's actually one of the values of NVISO, but no-BS is very important to me: we have this habit in our industry of rebranding something every X years, and SIEM is a good example. No one wants to talk about SIEM anymore, because SIEM didn't solve all of our problems; it was a lot of data.

Speaker 2:

Now it's all data lakes and XDR. But fundamentally, we're still talking about the same problems. There's telemetry coming from a wide variety of different systems: cloud, network, endpoint. We need to collect it all together and analyze it. That's it. How do you want to do that? You want to call it a data lake, a SIEM, an XDR? Fine. But I find it very interesting that in the security industry we keep on doing this, and it's always next-gen this and next-gen that, while fundamentally we're still talking about the same things.

Speaker 1:

I mean, yeah, they promised. They promised me the next-gen firewall 10 years ago, but I still haven't seen it. But are you familiar with the hype cycle?

Speaker 2:

The hype cycle? Yeah, I love that one, because it shows this peak of inflated expectations.

Speaker 1:

Well, it's going to solve everything, and then, of course, that's how the market works. Maybe that's not the best way of trying to solve a problem, but that is how it is. Yes, but we've digressed for about 15 minutes now. We haven't talked about AI yet, which is obviously sort of a sin, right?

Speaker 1:

Yes, we have to talk about that. Because I get the impression, looking at it relatively from the sidelines, that AI is going so quickly that it's virtually impossible to secure. Maybe that's a bit too much, but still. So last year at the RSAC Conference, as it's called nowadays, sorry, I should say it right, there's actually a card on the table here, nobody can see it because it's a podcast, but we have to say it correctly, otherwise they're going to get cross with us. Last year it was also a lot about AI and security, and I don't think we've solved the stuff we talked about last year. And this year, obviously, it's all about agentic, and especially agent-to-agent, and you get into the MCP kind of space. It's going way too quickly for organizations to adapt to, right?

Speaker 2:

Yes, absolutely, I think it is lightning speed. It's crazy. And indeed, this year it's all about agentic AI, which goes a step beyond gen AI, right? Because with gen AI you're generating content, there's analysis. With agentic AI you're really talking about operations and actions, and agents talking to one another, deciding: what do we do? And a lot of organizations are looking at this thinking: what do we need to do now? Because it's scary, but it's also very exciting, like we said with the hype cycle.

Speaker 1:

Yes, a lot of organizations want to start using it. They are.

Speaker 2:

They are starting to use it without actually thinking long and well enough about the security consequences, and this is sometimes where I like to go back to the fundamentals. Take AI. A simplistic example: you have Microsoft Copilot. There are many different ones, a whole ton of them, but principally it's the same thing: someone asks questions about their day-to-day work to this Copilot, this AI assistant sitting next to them saying: oh, I can help you with that. How do you make sure that assistant is not suddenly sharing the deepest secrets of the company with the person asking the question? And then you get to data classification, which is a fundamental. So it's an older thing, but it's not easily solvable, and it now becomes a very big topic. How do we make sure this is set up right before we can go live?

Speaker 1:

Well, yeah, and then the next step: if you classify it, you also need to do something with the classification, otherwise classifying doesn't make a lot of sense.

Speaker 2:

Yes, you have to enforce the classification, indeed.

Speaker 1:

Yes, enforce the classification. But that comes back to the point you made earlier, right? You can buy a tool that classifies your data, and you just put: oh, this is tier one, tier two, whatever you want to call it. But if you don't do something with it as an organization afterwards, like building your own rules on top of your SIEM data, then you have a lot of classified data, but you're none the wiser, right?

Speaker 2:

Yes, exactly. It's good to have a tool, but how do you enforce it? How does it really live in the organization? This AI thing is scaring people and exciting people, because we can do a lot, but how do we do it without taking too much risk? And an amazing analogy, I'm going to throw it out there because I love it: cybersecurity is often looked at as slowing us down. Ah, cybersecurity, the enemy of progress, right? Because they're the ones saying: no, no, no, you cannot do this, because it's not secure.

Speaker 2:

Yeah, the department of no, but fundamentally it's not like that. I like to compare cybersecurity to the brakes on your Ferrari. And what does that mean? Imagine that you get a Ferrari. I'll buy you one someday.

Speaker 1:

Oh well, that's, I like it. I already like it. Great analogy.

Speaker 2:

But I'll tell you: it doesn't have brakes. How fast are you comfortable driving that Ferrari? Not very fast, because you don't have brakes. And that's the beauty of cybersecurity; that's what it should be: the brakes on your Ferrari. It gives you the confidence to accelerate, because you know you have the brakes, controls you can use if you need to. And that's a very good way to look at cybersecurity.

Speaker 1:

And how much do organizations need to understand about the AI that they use? Because you gave the example of Copilot, and you're not really sure what it's doing. Obviously, you can put up guardrails and do all that stuff, and then hope for the best that it doesn't share anything. But do organizations need a more fundamental, better understanding of AI, agentic and all that stuff? And if so, is that even possible?

Speaker 2:

Yes. What you see indeed is that trustworthy AI is going to become extremely important, because you need transparency: what did the AI agent do, and why did it make a certain decision? And you have things coming up like AgentOps, which is commercial but also an open-source project, and very interesting. What does AgentOps do? It integrates with typical agentic AI frameworks and provides full visibility on decision trees: what is the AI agent doing, what are they communicating, what does one agent say to the other, why did they make a decision? It can even let you do replays, so you can see what they decided. And you notice that this is where we're going: in order for people to trust AI, it needs to be transparent. We need control over it and visibility into what it is doing and why it is making certain decisions.

Speaker 1:

And then you get more into the reasoning models that can actually tell you, step by step, what they did and why. Yes.

Speaker 2:

Yeah, and add to that: it will still take a while, especially in security operations, to talk about my field for a second, before we really say AI is great. I think we will do crazy things with it, but removing the human from the loop, that's going to take a while, and I don't think we necessarily have to. There's a parallel I always make between gen AI, and now also partly agentic AI, and...

Speaker 1:

Basically all of that started with RPA, right? That was building automation workflows based on things you programmed yourself; you recorded a workflow and then the bots did it themselves, but when they encountered something unexpected, they just died: this doesn't work anymore. Yes. And then gen AI came, which was basically an improved version of RPA, that's what I usually call it. But you need some autonomy if you want your automation and your workflows to actually improve because of AI, right? So there are gradations, levels of automation. Yeah, you're very right.

Speaker 2:

What I said with human in the loop doesn't mean that none of the flows can be autonomous. It means there is some supervision as well: making sure there is a feedback loop and that the AI doesn't poison itself, because the moment it starts making wrong decisions and then learns from those... Take the SOC. What we do is handle security alerts, and I can tell you that about 75% of all security alerts are already handled automatically at the moment. But that's actually with silly playbooks: just automation, pre-built logic with if-then-else rules. And that works quite well; you don't need all the fancy stuff for it.

Speaker 1:

Basically, yeah.

Speaker 2:

So for us: we are now starting to deploy more and more AI on top of it. Most of the stuff is already automated, because there's actually a lot you can do.

Speaker 1:

But that's sort of based on the old-fashioned signatures, right? So you know: this is a worm, or this is whatever.

Speaker 2:

But it doesn't have to be that static. Silly example: let's imagine you have a geographically improbable login for Sander, because today you're in the US and yesterday you were in the Netherlands. How do you deal with that alert, that analysis? Well, you get context. Who is Sander? What privileges does he have? Are there any events from his endpoint? Any virus or malware alerts from his endpoint? Did Sander ever log in from the US before? Did Sander use MFA? All of those factors contribute to a decision, yes or no. That's a playbook.
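The playbook Erik describes can be sketched as exactly the pre-built if-then-else logic he mentions: gather context signals for the alert and combine them into a decision. A minimal Python sketch, with made-up signal names, weights and threshold; real playbooks are obviously richer:

```python
def triage_improbable_login(ctx):
    """Toy playbook for a geographically improbable login alert.

    `ctx` holds hypothetical context signals gathered by automation,
    mirroring the questions from the conversation.
    """
    suspicion = 0
    if not ctx["seen_country_before"]:   # Did the user ever log in from there?
        suspicion += 2
    if not ctx["used_mfa"]:              # Was MFA used for this login?
        suspicion += 2
    if ctx["endpoint_alerts"] > 0:       # Any malware events on the endpoint?
        suspicion += 3
    if ctx["privileged_account"]:        # High-privilege users get less slack.
        suspicion += 1
    # Pre-built threshold decides: escalate to a human, or auto-close.
    return "escalate" if suspicion >= 3 else "auto-close"

# A frequent traveller with MFA and a clean endpoint is closed automatically.
print(triage_improbable_login({
    "seen_country_before": True, "used_mfa": True,
    "endpoint_alerts": 0, "privileged_account": False,
}))  # auto-close
```

No machine learning involved: it is static logic, which is the point being made; the question the conversation then turns to is when you do need something more dynamic.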

Speaker 1:

You don't need agentic AI for that. No, no. My credit card company could use an update, by the way. They actually blocked my card once: even though I'm in the US about 20 times a year, one time they said, no, we suspect fraud here.

Speaker 2:

But what will definitely happen: we started doing SOC four or five years ago and we built a lot of automation, and it's still great because, honestly, it's also cheap to execute. Automation playbooks are very cheap. AI is getting more and more commodity, but it's still more expensive.

Speaker 1:

Is AI a commodity?

Speaker 2:

What I mean by that is it's becoming less cost-prohibitive. But we still use a lot of, let's call it, old-school automation, just because it's very effective and it works really well. The next step is now using more and more AI. And the new players who will enter this field, they will skip all of this automation, because what they will say is: we don't need to build all this logic, because it's a significant investment.

Speaker 1:

They'll just go to AI immediately. But if you already have it, you shouldn't throw it away. I mean, absolutely, you should keep it, right?

Speaker 2:

Exactly, we're not going to break down something that works very well just because there's a fancy new thing around. Our plan is: we use automation playbooks for the simple stuff, then a layer of agentic AI to deal with the more complicated scenarios where you need a dynamic process to make a decision, and then the layer above that is humans. And I think that's a very good way of looking at it.

Speaker 1:

How transparent is that still? How explainable will that stay?

Speaker 2:

You mean to an end user?

Speaker 1:

To organizations who are trying to understand why their processes and their workflows go the way they go.

Speaker 2:

Yeah, well, you could compare it to your typical SOC, your security operations center, today. You have tier one, tier two, tier three analysts. Let's say tier one is the simple automation playbooks. Tier two, you could say, could be agentic AI. And tier three is humans. Something like that is how I would see it work, and that's actually what we're having quite some success with, setting it up like that.
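The tiering described here (tier 1 playbooks, tier 2 agentic AI, tier 3 humans) is essentially an escalation chain: each layer handles what it can and passes the rest up. A minimal Python sketch of that routing, with invented handler names and criteria, not NVISO's actual pipeline:

```python
def playbook_tier(alert):
    """Tier 1: pre-built if-then-else logic for known alert types."""
    if alert["type"] == "known_malware":
        return "resolved by playbook"
    return None  # not handled here, escalate

def agentic_tier(alert):
    """Tier 2: stand-in for an agentic AI step on harder cases."""
    if alert["ambiguity"] <= 0.5:
        return "resolved by AI agent"
    return None  # too ambiguous, escalate

def triage(alert):
    """Route an alert up the chain until some tier handles it."""
    for tier in (playbook_tier, agentic_tier):
        outcome = tier(alert)
        if outcome is not None:
            return outcome
    return "escalated to human analyst"  # tier 3: supervision stays human

print(triage({"type": "known_malware", "ambiguity": 0.1}))  # resolved by playbook
print(triage({"type": "odd_login", "ambiguity": 0.9}))      # escalated to human analyst
```

The design choice this mirrors is the one made explicit in the conversation: the cheap, deterministic layer runs first, AI only sees what that layer could not close, and humans remain the final authority.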

Speaker 1:

But yeah, looking forward a little bit: I think we agreed that developments are going a bit too quickly. Do you foresee, maybe not a cataclysmic event, but do you see the industry as a whole putting the brakes on it a little bit? Because before we started recording, I told you about the JPMorgan Chase piece, about how they are actually internally really hitting the brakes at the moment, to look at a more well-thought-out rollout of AI from a security perspective. Do you see that happening?

Speaker 2:

No, it's a bit like an arms race. And okay, we're a commercial company, right, so it's not real arms, but who's going to tell us that we need to stop? For internal organizations, I get it: we need to be mindful of getting our fundamentals sorted, and of a good approach to how we leverage AI, how we do it, step by step. But the vendors? If they slow down, their competition is not going to slow down for them.

Speaker 1:

They're all in the race. Unless, obviously, all the organizations say: look, it's all very nice what you're doing, but we're not adopting it yet. Exactly, yeah. How realistic is that?

Speaker 2:

To get a commercial organization to slow down, their customers have to say: you need to slow down. It's a similar dynamic, though this is a bit of an extreme comparison, to the power struggle between the US and China over AI dominance. Neither of them is going to slow down, because each knows the other one isn't slowing down. So we live in very interesting times.

Speaker 1:

Well, we've seen in the past that an arms race can actually stop, right? I mean the nuclear programs. Let's not get into too much politics, but there are precedents for stopping an arms race.

Speaker 2:

But then all the different variables have to align, and the stakeholders have to talk to one another and say: we all agree that we need to stop.

Speaker 1:

Do you think organizations and end customers need to be much more critical about this, towards the companies that are trying to sell them all this modern AI stuff?

Speaker 2:

Yeah, I think fundamentally what becomes more and more important is critical thinking: just the ability to think about what something is and where it's coming from.

Speaker 2:

That's a big ask for a lot of people, because of what we already see. I read a very interesting opinion piece about that a while ago. We trained the first foundational models on real human data, all of the progress we've had over the past thousands of years; we put it all on the internet. But now there's so much data being generated by the AI models themselves. Synthetic data.

Speaker 2:

Yeah, which is, to a certain extent, going to poison the well if you train on it again, because you're training on synthetic data. Depending on the use case, right? For some use cases it is actually very useful.

Speaker 1:

Yes, synthetic data, but for some of them maybe it isn't.

Speaker 2:

Yeah, and that's going to be another thing to be mindful of: how realistic is it going to be to train foundational models from scratch in the future, when there's a couple of big players now who are quite far ahead? It's an interesting thought, maybe, to...

Speaker 1:

To finish off with: what are the questions that customers should ask companies like yours, or other vendors of security solutions or AI solutions? What types of questions should they ask?

Speaker 2:

Yeah, I think it's about being, again, I like no BS, so being very concrete. What exactly do you use AI for? How do you use it? Where is the value? And how do we get visibility into what the AI is doing? Can we get transparency about it? How does it work? What does it do? Can we make sure it properly references, for example, the data sources it's getting its data from to make certain decisions? So I think it's about transparency, it's about knowing. And this is sometimes tricky, because it will lengthen processes as well.

Speaker 1:

Right, in terms of sales processes, because you're doing your due diligence much more thoroughly. And what do security vendors not like?

Speaker 2:

Well, I have to be careful not to be too broad, but what they often don't like is the question: how do you do what you do? For example, how do you build your detections? And then you sometimes get the answer: ah, but it's proprietary. That's not going to work anymore. That's not going to do it with AI. We need to be transparent.

Speaker 1:

Usually they like it when you're so in awe of the demo you've seen that you immediately sign and say: I want this. Yes, yeah. But that's not how we should look at it. In the age of AI and agentic systems and all that, there needs to be much more explainability. So that also means the vendors themselves, and you as well, have to do some more studying and some more explaining to your customers. Absolutely, but I always find that the best approach is to just be transparent about what we can do.

Speaker 2:

Why do we do certain things the way we do them? I think it's about building trust.

Speaker 1:

I've always been a big fan of transparency in security, but it's extremely hard, because nobody wants to tell anyone anything. It's the same as with the platformization we talked about earlier. A lot of companies offering security platforms call themselves open, but when you really start talking to them, they turn out to be only partly open. They will always have some proprietary stuff that they will never share, because that's the goose that lays the golden eggs, basically. And as long as that happens, it's extremely hard for that type of vendor, for that type of party in the market, to be completely transparent about how they do what they do.

Speaker 2:

Yes, of course, and that's where I think critical thinking comes into play. At a certain point the vendor can say: look, we cannot go too far into that. But if the very first question you ask is met with resistance, "we cannot explain what we do because it all just works," that would be a reason to say: okay, but this sounds a little bit like hocus-pocus, right?

Speaker 1:

Well, it's interesting that you say that, because I was at an interesting panel discussion earlier today here at RSA, and there was a guy from a big security player. Every time the moderator asked him something like "how does vendor so-and-so do this, how do you do this," he would say: well, I can't say anything about how we do it, but I can speak broadly on the topic. There's a lot of self-importance, I think, if you do that.

Speaker 2:

Yeah, absolutely. We sometimes act as if it's all rocket science. Let's just talk about what we do and how we do things.

Speaker 1:

I think at the end of the day, everybody wins if we do that.

Speaker 2:

And a good example, just thinking about your earlier question as well, on what organizations need to ask and how fast they can go: the Copilot rollouts at Microsoft. The biggest issue they're encountering in rolling it out, the biggest resistance they're getting from companies, is data security. So you notice, and I was actually quite happy about that, that companies are actually thinking. They see this risk and they say: if we do this, we need to get data security in order.

Speaker 1:

That also means that we did a good job as journalists, because that's what we've been telling people as well.

Speaker 2:

Yeah, it's a good reflection. And of course, the advantage for a vendor like Microsoft is: well, we have a solution for data security, so there you go. It's a win-win. Fundamentally, it's a very interesting dynamic.

Speaker 1:

Yeah, and also the understanding. Something I quite liked came from somebody not related to security, but I think it plays really well in the security debate too: the differentiator is rarely the AI; the differentiator is always the data. That goes for AI you put on top of data to actually create things for your organization, but it also goes for securing AI, and for using AI to secure your company, right? You can use the best AI, but if your data is not in order, your data security is not going to happen either. Then we have to get back to the classification thing, because nowadays we tend to talk so much about the AI itself and how interesting it is: the faster model, the token length.

Speaker 1:

In the end, if you take the ideal AI model and set it loose on rubbish data, it's a disaster waiting to happen. The same goes for security: if you have an AI model that is trained on the wrong data, it doesn't really do anything for you either.

Speaker 2:

Yeah, and I like the saying that AI is going to scale what you have, much harder and much faster. If what you have is chaos, or bad, then it's going to scale that chaos, and that is important. It's not going to fix your fundamental issues. It is very much an enabler, and adversaries will also use it, so yes, as defenders you also have to use it, but it will not solve those fundamental issues.

Speaker 1:

So the big lesson, at least for today, is to decrease the chaos first. Because once the chaos is gone and the fundamentals are in order, there's nothing stopping you from deploying AI in your organization, because then you can actually secure it better as well.

Speaker 2:

Right, yes. It's about putting the brakes on your Ferrari so you can accelerate. All right.

Speaker 1:

And I will hold you to the Ferrari promise. With brakes, please. Otherwise... Well, I'll do my best. Thanks for joining. Thank you.