Techzine Talks on Tour

Threat hunting is very important, but also very frustrating: how can AI help?

Coen or Sander, Season 1, Episode 17

Both junior and senior security analysts encounter frustrating challenges with threat hunting, making it hard to really make it an integral part of daily SOC operations. How can we make things easier and less ad hoc for them? We sit down with Albert Caballero and Adriana Corona from SentinelOne to discuss this in depth.

At its recent OneConnect event in the Benelux, representatives from #SentinelOne gave keynotes on different topics that relate to each other in different ways. On the one hand there was a keynote from Albert Caballero, Field CISO at SentinelOne, about #threathunting. On the other, there was a keynote about (among other things) Purple AI, by Adriana Corona, Director of Products for AI at the same company. As it turns out, AI can and will have fundamental implications for threat hunting, and for #cybersecurity as a whole. 
 
Besides the somewhat conceptual impact of AI on threat hunting, Adriana and Albert delve into the emerging use of Retrieval Augmented Generation (RAG). It's not only AI that helps humans, but also the other way around. Expert-created threat intelligence enhances AI effectiveness and accuracy. 

Another big topic in the conversation is the pivotal importance of #transparency and #trust in #AI systems. We contrast the contextual precision of specialized AI with more generic platforms. Transparency of how AI works is absolutely vital for cybersecurity purposes. That's the only way to reach reliable decision-making in cybersecurity. There are challenges too, for sure, so we also talk about how to tackle those.

All in all this is another must-listen for people and organizations looking to optimize their #security posture and stack, by using AI or in other ways. And let's be honest, who isn't in 2024?

Speaker 1:

Welcome to this new episode of Techzine Talks on Tour. I'm at OneConnect, SentinelOne's local event. Joining us today are Adriana Corona and Albert Caballero. Albert, you're the Field CISO, and Adriana, you're Director of Products for AI. That's right, that's a good start.

Speaker 2:

Yeah, correct. I love how you say Caballero, very well pronounced, sir.

Speaker 1:

Yeah, and I attended both your keynotes today, right? What I would like to do in this podcast is just try and connect the two a little bit. The thing that triggered me was that you mentioned during your keynote, Adriana, that one of the main reasons to go for Purple AI is that the biggest frustration for analysts, whether junior or very senior, is always the threat hunting part. So I thought that would be an interesting way to connect these two things.

Speaker 3:

Yeah, absolutely, and that's what we discovered talking with security teams, our own security teams, our customer security teams. That threat hunting and investigation takes hours and is very difficult to do well, even for experts in the field.

Speaker 2:

Yeah, I mean, I would definitely agree. You know, once upon a time I ran a SOC and we did threat hunting. But in too many organizations the practice of threat hunting is not well integrated into the workflow of security operations, right?

Speaker 1:

Why is that? Is that a sort of circular thing? Because it's so difficult, it's very hard to integrate. Yeah, that's a good point.

Speaker 2:

I mean it should be, but I think that it's always thought of as less operational right. Threat hunting is more of an ad hoc exploratory function where the security operations center is very much a repetitive investigative type of analysis that happens over and over very much in the same way and then it gets escalated or thrown over the fence. Threat hunting usually happens by more advanced senior analysts that have the time to go out and start just looking for bad stuff that has already happened, as opposed to SOC analysts which are trying to trigger and respond immediately to things that are happening in real time.

Speaker 1:

Yeah, but especially with the move from prevention to more detection. Even though we still do prevention, obviously threat hunting becomes more and more important, I would imagine right.

Speaker 3:

Yeah, and I would also say that for both threat hunting and investigation, the thing that makes it hard is the same: you need to know what you're looking for, first and foremost, and know how to find it in your own logs. And although that sounds easy to say out loud, when you're in the middle of an investigation you need to piece together the evidence you need to compile. So, not even just threat hunting: for the investigation use case, you need to know a lot of details about what you're looking for, the data schemas, the query language you're using and where to find the data.

Speaker 3:

that's actually a similar problem that a threat hunter faces, so we can use pretty similar solutions to help both use cases.

Speaker 1:

But it's not weird that it's hard to find, because most of the time you don't even know what you're looking for, right? That's a fundamental kind of... yeah, I mean, the known unknowns and the unknown unknowns, right.

Speaker 2:

These are things that are just difficult to find, assuming you even know what it is that you're looking for. What has been lacking is, and as Adriana pointed out, there are solutions out there that allow both activities to become a lot easier and a lot faster, but, more importantly, to integrate together, right? They enrich with what threat hunters already know, and they also surface things that are unknown to them, that threat hunters can key on or pivot on and start their hunts in a much more advanced kind of way.

Speaker 1:

Yeah, so how does this work then? Obviously there are different types of organizations that need to do this, and not every organization does it in the same way. Is there a practical kind of approach to integrate this threat hunting into your daily SOC routine, so to speak?

Speaker 2:

Well, I mean, enter Purple, right?

Speaker 3:

Yeah.

Speaker 2:

I think that machine learning is a type of AI that's been around for a long time and it's very good at deciding what's good and bad, but generative AI really has given us a window into the interaction of humans with that black box that we've known as AI for a long time. Right? So, as a human, if you can ask it a question and it returns some type of technical data points that you can pivot on, that's incredibly useful, I think. So, Adriana, what do you think?

Speaker 3:

I would agree and I would say the first problem of making sure you can integrate both into one operation is having one platform.

Speaker 1:

So one thing that I... and by 'both' you mean threat hunting? Threat hunting and investigation, and other things like triaging and more.

Speaker 3:

So the foundational aspect is the platform, right? Having a single place to house your data that's fast enough.

Speaker 3:

So imagine you don't have an AI to help you yet, and how people would do this as a team working together. You need a platform that's fast enough at returning responses that you're not sitting there for 30 minutes waiting on the first question that you ask. In this case, the question is a very sophisticated query to return results, so that was already a necessity if you want to collaborate at all or learn from each other. Then the next layer that we added was: what if you don't need to learn the query language? That also is very difficult and error prone, right? Even someone who's good at scripting needs to figure it out and troubleshoot their own queries. It takes time to build too.

Speaker 2:

Yeah, you know, I was a master of 'index equals asterisk', right? Because I've been using SIEM products for 20 years now, and you have to start at the top, and it's almost like you're digging yourself a hole deeper and deeper, and next thing you know, if you find nothing, you've got to start all the way back at the beginning.

Speaker 1:

That's always been one of the main objections to SIEM products anyway, right? Sure. A lot of logs.

Speaker 2:

Yeah. No idea what to do with it, yeah, where to look or how to query them, right, yeah.

Speaker 1:

It would be great if we could solve that problem.

Speaker 2:

Yeah, a lot of logs, but a lot of times, as Adriana pointed out, not all the data you need, right? And a lot of the time the data is in other systems, IT systems, you know, CMDBs, right, with different schemas and databases, different query languages. So not having the data all in one place... logs are one thing, but obviously we also generate a lot of endpoint telemetry. There's also forensic data. There's a lot of unstructured data out there that is just not generally available within a SIEM, and you really just can't query for it. You've got to rely on vulnerability reports and then go through them.

Speaker 1:

I think you mentioned, during your keynote, the photograph, the JPEG.

Speaker 2:

Yeah, yeah. So, for example, malvertising campaigns, right? A lot of times you get emails, they get blasted out to thousands of users, and they contain an image that people click on or whatever. That's not a file that's going to trigger a log. It's not going to trigger a detection, yet it's a very good indicator of compromise. So how do you search for that on thousands of systems, at scale and at machine speed? It's just very difficult to do.
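
To picture the problem Albert describes: a content hash can turn such an image into a searchable indicator of compromise. Here is a minimal sketch with made-up data; none of the names resemble SentinelOne's actual tooling:

```python
import hashlib

# Illustrative only: a malvertising image never triggers a detection by
# itself, but its content hash still works as an IOC to hunt for.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"malicious-banner-bytes").hexdigest(),  # the campaign image
}

def hunt_for_ioc(file_inventory):
    """Return hosts whose files match a known-bad content hash."""
    hits = []
    for host, blobs in sorted(file_inventory.items()):
        if any(hashlib.sha256(b).hexdigest() in KNOWN_BAD_HASHES for b in blobs):
            hits.append(host)
    return hits

inventory = {
    "laptop-01": [b"holiday-party-bytes", b"malicious-banner-bytes"],
    "laptop-02": [b"quarterly-report-bytes"],
}
print(hunt_for_ioc(inventory))  # ['laptop-01']
```

The hard part in practice is exactly what Albert points at: computing and collecting those hashes from thousands of endpoints at machine speed, which is a platform problem rather than a scripting problem.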

Speaker 1:

You need a lot of intelligence behind it, but as you point out, you also need a certain degree of integration, otherwise you will never be able to do it, right? So let's dig a bit deeper into Purple AI, or AI in general, and what you're trying to achieve as SentinelOne. Because it works in different ways, right? You have the summarizing capabilities, but you also have other AI capabilities inside your platform. Could you elaborate a little bit on the different phases of Purple AI?

Speaker 3:

Yeah, the very first focus was deliberate, and you mentioned earlier us talking about how hard it is to do investigations. The time it takes is hours, maybe four hours per investigation, and it's just difficult. You need expertise. So our first mission was solving that expertise gap using AI. We have an architecture where one of the foundational components is having really expert knowledge in our knowledge service, which already informs the AI about our data schemas, query language, threat intelligence and other security sources.

Speaker 1:

And what is your knowledge service, just for listeners who may not know what that means?

Speaker 3:

So the knowledge service is a component of our architecture. We have a RAG architecture, that's Retrieval-Augmented Generation, and one of the foundational pieces of RAG is the actual thing that you're retrieving, which is the source-of-truth knowledge. That is so important because that's what provides the expertise. That knowledge is created by expert threat hunters, including our own teams.
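
For readers unfamiliar with RAG, the retrieval step Adriana mentions can be sketched in a few lines. The knowledge snippets and the naive word-overlap scoring are purely illustrative, not the real knowledge service:

```python
# Hand-built stand-in for an expert-curated knowledge base.
KNOWLEDGE_BASE = [
    "Schema: process events live in the endpoint.process table.",
    "Query language: use contains for substring matches in Power Query.",
    "Threat intel: encoded PowerShell commands are a common LOLBin abuse.",
]

def retrieve(question, k=2):
    """Rank knowledge snippets by naive word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question):
    """Ground the model call in retrieved expert knowledge (the 'R' in RAG)."""
    context = "\n".join(retrieve(question))
    return f"Context:\n{context}\n\nQuestion: {question}"

print(build_prompt("Which table holds process events?"))
```

The point of the pattern is the one made in the conversation: the model answers from retrieved, expert-written facts rather than from whatever it happens to have memorized.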

Speaker 2:

Yeah, you know, Adriana, one of the things is: how does the AI know that when you say 'am I being attacked', you're not talking about someone hijacking an airplane or holding you at gunpoint or knifepoint, as opposed to a cyber attack? So I think what you're explaining is analogous to the difference between this type of architecture and just asking ChatGPT a question.

Speaker 1:

Right, yeah, because that's probably the association a lot of people will have, right? And I have to confess something: I'm a trained linguist.

Speaker 2:

Oh, really?

Speaker 1:

I have some thoughts on the efficacy of ChatGPT and gen AI in general, from a philosophical standpoint, a linguistic point of view.

Speaker 3:

Sure. How did you feel about speech to text before speech to text was popular? Did you have reservations about it? I only ask because I had a lot of linguist friends who thought it was impossible.

Speaker 1:

I mean, if you've ever used the Otter app, right?

Speaker 2:

I've heard of it.

Speaker 1:

And it transcribes everything that somebody says.

Speaker 1:

I've been using that for about 10 years already, and that was brilliant already. But the thing is that when you go the gen AI route, it starts to do stuff with that language, it starts arriving at conclusions. I think RAG really made a big impact on that, but especially at the beginning it came up with all sorts of BS, which was completely untrue, obviously. And that also stems from the fact that it's very different for a machine to know a language and for a human to know a language, right? A machine never comes up with anything new, it just gathers some stuff and says: look here, I have it. That's it.

Speaker 2:

Yeah, and when it does the data cleanup, right, when it starts removing stop words and doing the basic data cleanup, it really doesn't understand the sequence of words and the meaning underlying human speech, right?

Speaker 1:

That was a bit of a long intro to my question about the efficacy of Purple AI in this instance. Like I said, RAG partly solves this, to a certain extent, because it doesn't make up stuff anymore; you have a fixed source of truth where it gets its data from. But still, it can come up with wrong conclusions. How do you make sure that Purple AI, or AI in general in your solutions, doesn't do that?

Speaker 3:

Yeah, and I have to say, just to be completely clear: there's no way to prevent hallucinations with any large language model architecture that exists today. I won't say it's never going to be possible. So for us it's not about making 100% sure every answer is exactly the right answer. What we have is transparency. When you ask Purple a question, we show you the full Power Query that's returned as Purple's response to your question, and the full results that were generated by that Power Query.

Speaker 3:

Power Query is our language for querying our data lake. So there's no guarantee that we use an equals sign instead of 'contains' every time, because there might actually be ambiguity in your question, where it could be interpreted one way or another. So I think it's not about being exactly accurate. It's about being so transparent and so useful that the end result is a good decision made by the analyst.
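
The transparency idea Adriana describes, returning the generated query alongside the raw results, can be sketched roughly like this. The tiny rule-based translator and the event data are stand-ins, not SentinelOne's actual natural-language-to-Power-Query step:

```python
# Toy event store standing in for a data lake.
EVENTS = [
    {"host": "srv-01", "process": "powershell.exe", "cmdline": "-enc SQBFAFgA"},
    {"host": "srv-02", "process": "chrome.exe", "cmdline": "--no-sandbox"},
]

def translate(question):
    """Map a natural-language question to a (field, needle) filter."""
    if "powershell" in question.lower():
        return ("process", "powershell")
    return ("process", "")  # fall back to matching everything

def answer(question):
    """Return the generated query AND the rows it matched, never a bare verdict."""
    field, needle = translate(question)
    query = f"{field} contains '{needle}'"     # shown to the analyst verbatim
    rows = [e for e in EVENTS if needle in e[field]]
    return {"query": query, "results": rows}

resp = answer("Any encoded PowerShell on my servers?")
print(resp["query"])    # the analyst can check the interpretation...
print(resp["results"])  # ...against the raw matching events
```

Because the query and results are surfaced in full, a wrong interpretation is visible and correctable rather than silently wrong, which is the design tenet being described.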

Speaker 2:

Adriana, to that point: the results that you get from Purple are based on data that exists in the data lake. Exactly. It's not a hallucination in the sense that it's going to give you made-up logs that didn't exist, right? It's not going to introduce data into your telemetry that isn't really there.

Speaker 1:

Well, it depends on what you're asking, it, right?

Speaker 2:

Of course.

Speaker 1:

If you're asking 'am I safe?', so a generic question, you want a yes or a no.

Speaker 2:

Ideally, you want a yes, Well, ideally you'll get a no.

Speaker 1:

It depends. But if you ask it, 'could you give me a list of devices that haven't had this or that patch?', sure, I fully believe that what comes out of that is true.

Speaker 1:

Right, right. But if you get more into the interpretation, then it's very hard to... 'trust' is maybe too big a word, but it's quite hard to really determine the level of accuracy of the response you're getting, I think. Do you have many conversations with customers or prospects about this sort of trust issue with AI? I would imagine it comes up quite often. I can't be the only one having this question.

Speaker 3:

No, I think we're definitely still in the 'trust but verify' stage with new AI techniques, and that's why it's so important for us to be transparent with the output of our system. Transparency is one of our tenets, and we follow it with every feature design because of this. But we've also seen customers use these tools in different ways than we had seen them use other tools: more conversationally. What I mean is, within our tool, Purple AI, you can ask a question that might be a bit ambiguous, get a result that a security analyst looks at, understands why that was the result, but thinks: that's not quite my question. What we've seen, which I think is the new remarkable thing with these tools, is they will just ask a follow-up natural language question to correct it. They will just say, really quickly: 'No, I meant...'. Sometimes they do say sorry and apologize.

Speaker 3:

'No, sorry, I meant this type of thing and not that type of thing.' And the tool will provide the correct answer in the second step, because there's enough context within the single investigation to be able to steer the questions and answers into what you were actually looking for.
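
The follow-up-correction behavior Adriana describes can be sketched as a toy session object. This is entirely illustrative; there is no real language model behind it, and the "interpretations" are invented:

```python
class InvestigationSession:
    """Keeps the running context of one investigation so a correction
    refines the previous question instead of starting over."""

    def __init__(self):
        self.history = []  # (question, interpretation) pairs

    def ask(self, question):
        q = question.lower()
        if q.startswith("no, i meant") and self.history:
            # A correction: reuse the prior interpretation, swapping the
            # last term for the corrected one.
            _, prev_interp = self.history[-1]
            corrected = q.replace("no, i meant", "", 1).strip()
            parts = prev_interp.split()
            interpretation = " ".join(parts[:-1] + [corrected])
        else:
            interpretation = f"search logins from user {q.split()[-1]}"
        self.history.append((question, interpretation))
        return interpretation

s = InvestigationSession()
print(s.ask("show failed logins from admin"))  # search logins from user admin
print(s.ask("No, I meant root"))               # search logins from user root
```

The design point is the stored history: because each turn lands in the same session, the second, ambiguous message has enough context to resolve correctly.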

Speaker 1:

And these other uses, that weren't really planned at first... that's a good thing, right, I imagine.

Speaker 3:

Oh yeah.

Speaker 1:

Or are there also uses where you say: well, I'm not too wild about using it like that?

Speaker 3:

So one thing I will say.

Speaker 3:

I don't know if it was surprising, because I used to be a cybersecurity analyst, so I should not have been surprised by this, but we all kind of forget that security is not the only job of a security person, right?

Speaker 3:

There's a lot of administrative overhead, communication overhead, that has to happen, because when you're on the security team, sometimes you're not the person who resolves it, right? You're not the person who's on the laptop; you're the person who's in charge of making sure the person on the laptop doesn't do the wrong thing, right? So there are so many points in the day where you have to switch from 'I'm actively investigating' to 'how do I say this in a way that someone else will understand and act on?'. These new techniques are actually very good at all of that, and because our tool automatically summarizes many things, including your results, the events themselves and alert details, it's actually been used by teams as a self-documenting investigation tool. So they just share it.

Speaker 1:

So it also has a huge business impact in that sense. Absolutely. It's much easier now to show something to somebody who's not a security analyst.

Speaker 2:

Another aspect that I'll add to that: the communication is critical, absolutely.

Speaker 2:

So part of my experience was running a rather sizable engineering and architecture team and we maintained these platforms.

Speaker 2:

We maintained SentinelOne and a lot of others, you know, SIEM platforms, CNAPP tools, all of which require major configuration and management lifts, right? And a lot of times we need to be debugging something and we don't know the CLI command to do it, right? It'll take us 15 or 20 minutes to log into the knowledge base of that platform vendor, ask the search bar, try to read a knowledge base article and say: oh, that's the switch I was looking for to debug this particular item. That's another one of those huge things, and Adriana, you can attest to it. Not that we didn't anticipate people would be using it for this, but it's just such a huge time saver for security engineers. So it's not only the SOC and the threat hunters that are leveraging this, but the platform engineers, the architects, the developers. And obviously you run into the same things that need to be corrected if you don't document something, right?

Speaker 1:

I think that's sort of a yeah yeah, no, absolutely.

Speaker 2:

But you know, to the point of being able to save these sequences of conversations and being able to share them like you would a Jupyter Notebook, or in Google Colab or something: you're sharing the last 20 questions with which you interacted with that AI. It gives a lot of insight. It helps, along with detection engineering, with detection as code, right, when you get to a point of having reusable objects to then go hunt with. And, you know, your most experienced threat hunter gives you his or her two weeks' notice, and then you're left with six months before you can backfill. Does all of that knowledge go away with them, or do the last 20 hunts remain there for someone to take over?
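
The 'shareable notebook' idea Albert sketches could look something like this in miniature. The structure and field names are invented for illustration; the point is that each hunt step already carries its own documentation:

```python
# Each step of a hunt is a natural-language question, the query it generated,
# and a summary of what came back, so the whole thread can be exported for
# the next analyst without anyone writing anything down by hand.

def export_investigation(steps):
    """Render (question, query, summary) steps as a shareable text log."""
    lines = ["# Investigation log"]
    for i, (question, query, summary) in enumerate(steps, 1):
        lines.append(f"## Step {i}: {question}")
        lines.append(f"Query: {query}")
        lines.append(f"Summary: {summary}")
    return "\n".join(lines)

steps = [
    ("Any failed root logins?",
     "event contains 'auth-fail' and user = 'root'",
     "Three failures on srv-01, all from one IP."),
    ("What else did that IP touch?",
     "src_ip = '203.0.113.7'",
     "Only srv-01; no lateral movement seen."),
]
print(export_investigation(steps))
```

Handing over such a log is what makes the knowledge survive the departing threat hunter: the questions and queries are the documentation.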

Speaker 1:

Sounds a lot like the issues companies encounter when a good developer leaves. Yeah, yeah, 100%. With all the documentation, exactly.

Speaker 3:

He didn't do it, right? Or hopefully he did. Yeah, and in this case it's basically self-documenting, so it's not about someone having the good hygiene to make sure they wrote down every step. The investigation steps are just natural language.

Speaker 2:

That's a big, big time saver. And again, with the whole Power Query thing: if I didn't have to learn regex, I would have taken a lot more vacations. I would have had a lot more family time if I didn't have to learn how to query some tools' backend databases.

Speaker 1:

So the fact that especially gen AI is non-deterministic, so it's impossible to really retrace, at this point at least, how it came to a conclusion: is that not a reason for companies to be cautious about the technology? Because I think that's at the root of this issue.

Speaker 3:

I can take a first stab at this, so I do think that that won't be an unsolvable problem long term. I feel like there are models emerging that are better at reasoning.

Speaker 1:

Yeah, the latest one, right yeah.

Speaker 3:

So, and the technology here is progressing so quickly that I wouldn't take it for granted that this is going to be a major concern long term, but let's assume that it is a major concern.

Speaker 2:

Yeah, if not, the podcast would be over.

Speaker 3:

So this is something I really believe, because our foundation was: how do we improve the day-to-day life of an analyst, right? The proof of being successful at that is analysts telling you their day has gotten better and easier and faster, right? And that's what we're hearing. So I go back again to: sometimes it's not about the tool itself being a perfect replacement for a junior analyst, for example. That was never our objective. Our objective was: how can we do some of the hard work in a way that's going to make you, the person doing the work, so much more efficient and more likely to make the right decision? Because an analyst can make mistakes as well, but now they have a tool that can be there side by side with them, helping with making these decisions.

Speaker 1:

I like this answer, to be honest. When you launched Purple AI in April this year, it was launched as an analyst, as a SOC analyst, really. I mean, I get why you do that in terms of marketing, but that's not the reality, is it? It would be nice, especially if you're going towards the autonomous, AI-driven kind of future that we all say we're going towards. I don't know which year that will be, but that's not realistic at the moment, at least I would imagine.

Speaker 3:

Yeah... I don't know if you want to take that one?

Speaker 2:

So, you know, I do think there are things that require actual human thought to get accomplished. That being said, to play a little bit of devil's advocate, I will say: do not trust your LLM entirely today, right? Trust, but verify, certainly. And also, AI is not just a cybersecurity thing today, right? It is a business enabler, and it's also a situation where many businesses are implementing it at a speed with which they don't understand whether they're doing it in a responsible and ethical way. So, as security professionals, it's our duty to make sure that the rest of the business is implementing AI in a responsible, ethical, and safe and secure way. So certainly leverage AI, and continue to leverage it in the cybersecurity space, for threat hunting, for troubleshooting, for engineering, for coding, 100%. But also look outside of the security organization and understand how your business is integrating AI into all the rest of the platforms out there: your SAP, your HR system, your inventory, your CMDBs.

Speaker 3:

It's everywhere.

Speaker 2:

Right. Your chatbots are a liability if you don't have security countermeasures.

Speaker 1:

On our platform, I think we wrote two or three stories about agents. Sure, Salesforce is doing all sorts of agents.

Speaker 2:

Workday, Salesforce. AI is now, like, the whole name of the company. It's called Agentforce.

Speaker 1:

Oh, Agentforce. Oh man, that's crazy. But especially, and that's, I think, not necessarily the SentinelOne AI part, that's also interesting, but moving forward: securing AI.

Speaker 2:

Securing AI is a huge topic.

Speaker 1:

And I think it's not really in the spotlight enough yet.

Speaker 2:

Yeah, it will be. We try to preach about it, myself and my colleagues. We have the luxury, that Adriana sometimes does not have, to talk about the challenges that the industry as a whole has, as thought leaders and people that just blog about the technology challenges of tomorrow. And certainly there are so many vulnerabilities when implementing AI, and that's just implementing someone else's AI. If you're building it yourself... you know, we are lucky to have had ML Ops as part of our organization from day one, for around a decade. We've understood how to implement machine learning and AI in a way that's operationally repeatable and safely deployed. Most organizations are just dipping their feet into AI as if it was a brand new kind of implementation tool. It's not a tool, it's an entire practice. It requires a process, and it's operations. If you don't have ML Ops, LLM Ops, that are mature, repeatable, and you understand how you're doing it, it becomes a real liability.

Speaker 1:

I think last year was a bit of a bad year for AI, in the sense that, because it was the first year after ChatGPT broke, everybody thought we've solved everything and came out with lots of huge castles in the sky, all that stuff. So do you see that it's getting more practical, in that sense as well? That people, companies and organizations are really getting more in tune with what AI actually is? So: we need to do something about this, and it's not only about all the good things you can do with it, but also, how do I make sure I can keep on using it?

Speaker 2:

Yeah, I do. I do believe that people are becoming more aware and more comfortable with the fact that AI is here to stay, and that they need to implement it the way they would implement any other type of operations within their business. They're not looking at it as some third-party gimmick. It's something that needs to be integral to their operations.

Speaker 3:

I think they're also feeling that they can't afford to not adopt.

Speaker 2:

To not do it yeah.

Speaker 3:

Because your competition is going to be using these techniques.

Speaker 1:

But in general that's a very bad way of doing things, right? So on a board level it's: we need to do something with AI.

Speaker 2:

Yeah.

Speaker 1:

With what? I don't care. That's not really the point.

Speaker 2:

So many comic strips have been made on that. Dilbert, remember?

Speaker 1:

Yeah, yeah, yeah. So, just a last question before we round off, because I think we're already at the half-hour mark, which is more or less our cutoff point. Sure. You launched Purple AI in April. What's the feedback been like from early adopters, or companies that you trialed it with, and what were the most outstanding kind of remarks you got back from it?

Speaker 3:

Yeah, we've been getting a lot of positive feedback.

Speaker 1:

Oh, that's good.

Speaker 3:

Which I'm very happy about. And I mentioned earlier, our mission was to help security teams, and we've been hearing from security teams that it's a helpful tool. So, check, item one: check. Specific feedback we've gotten: even at a PoC stage, we had a customer tell us their investigation time was reduced from four hours to 20 minutes, which is huge. That's what we set out to do.

Speaker 3:

But when you actually see it transforming people's teams, that's where we see the value.

Speaker 1:

And that's based on, was it 4,500 incidents a day or something? So was it one of those 4,500 that they investigated?

Speaker 3:

It's one of them, right. So it's like 4,500 alerts, 67% of which can't be addressed. So there are three hours just to triage alerts and decide: this is the one I'm investigating. And the one that you do choose to investigate will take hours, if it's a significant one.

Speaker 1:

So that means a very significant kind of.

Speaker 3:

Yes.

Speaker 1:

I'm safer.

Speaker 3:

Yes.

Speaker 2:

It's almost like you can address them all. No, not a chance.

Speaker 3:

I mean, we've also done a lot of those ThreatOps challenges.

Speaker 2:

Those are so amazing. Yeah, I was at Black Hat not too long ago in the US, and the whole Mortal versus Machine thing... it's so fun to see someone who's never touched a SentinelOne console go in, use Purple, go up against our most senior Power Query expert and just kick his butt.

Speaker 1:

I'm actually going to OneCon in Vegas. Nice.

Speaker 2:

Well, you'll see the Mortal versus Machine.

Speaker 1:

I've already been invited to do the mortal.

Speaker 2:

Have you? Okay. Will you be the mortal or the machine?

Speaker 1:

Well, let's put it this way: I think I'm going to be the mortal.

Speaker 2:

Okay, okay, then you'll probably win.

Speaker 1:

My Power Query kind of knowledge is not very... Okay, awesome. All right, so, yeah, I always ask this: do you also get feedback that says, I wish you could do this more or better?

Speaker 3:

More to the point of where this is going: I think one of the most important pieces of feedback we're getting a lot is adding Purple throughout the whole console experience. We started as a dedicated investigation notebook interface for threat hunting, but we heard immediately from customers that they wanted Purple summaries in their alerts and on their assets, and they want alert queries. So really the feedback we're getting is: yes, but give me even more functionality across the entire platform.

Speaker 1:

And how feasible is that? What's your answer?

Speaker 3:

Well, I'm excited to get that feedback, because it's our mission to have an integrated, single, unified platform, so we're actually very well positioned to deliver AI at a platform level. For me it's one of those challenges that I'm happy to hear, because I know we can achieve it and we can do it well.

Speaker 1:

Okay, well, I will keep that promise in mind. Hopefully all your customers and others will agree, in a couple of years' time, that you've delivered on this as well. All right, so, thank you both for joining me. It was a very interesting conversation, I think. I hope our audience also found it interesting.

Speaker 2:

Otherwise we won't have another one.

Speaker 3:

Thank you All right.

Speaker 1:

Thank you very much.