Techzine Talks on Tour

2025 won't be about AI, but about secure AI

Coen or Sander Season 2 Episode 2

AI and cybersecurity seem to go hand in hand. That is, much physical and virtual ink has been spilled over the impact AI had and has on security solutions, as well as over how attackers use AI for bad purposes. But what about the security of the AI organizations use every day? We talk about this with Ric Smith, President of Product, Technology & Operations at SentinelOne, in a conversation we recorded at AWS re:Invent.
 
At the end of the day, a large chunk of any discussion about AI security is also about cloud security. Much of the AI that companies use, whether stand-alone or as part of SaaS offerings, runs in the cloud. Hence the setting and location for our conversation with Smith.

In our conversation, we talk about a large variety of topics. Starting with the initial skepticism about the security of the cloud, we work our way to present-day discussions of securing the AI that runs in the cloud (and on-premises, for that matter). AI brings its own cybersecurity challenges and, as such, adds to the already complex patchwork that is the security stack.

We're not going to give away too much about the content of our conversation with Smith. We do promise, however, to also provide some actionable insights for customers regarding their security posture. And Smith also has something to say about how the security industry as a whole could and should do better when it comes to collaborating with each other.

Speaker 1:

Hi and welcome to this new episode of Techzine Talks on Tour. I'm at AWS re:Invent and I'm here with Ric Smith. You work for SentinelOne, right? I do. Your role changed a little bit over the past year, twice actually. You used to be the CTO, then you were CPO and CTO, and now you're president and something else, right?

Speaker 2:

Yes, now I'm President of Product, Technology, and Operations. Somehow I can't keep PTO out of my title, so that obviously gets referenced in a lot of jokes.

Speaker 1:

PTO as in paid time off? Exactly, exactly. So, yeah, obviously, when you talk about AWS, you immediately hit on topics like cloud and AI. And when you think a little bit more about that, you immediately, at least I do, think about cybersecurity, of the cloud and of AI.

Speaker 2:

Seems like a pertinent subject, given where we are.

Speaker 1:

But do you also see that? I mean, how did that space evolve over the past couple of years? The cloud security, AI security kind of thing. Could you outline that a little bit for the listener as well?

Speaker 2:

Yeah. So let's start at the beginning, and we won't go back to Genesis. No, let's not do that. It's a long story. It is a very long, dry story with lots of perils. When you look at cloud security as it is today, it actually turns out that public cloud infrastructure is quite safe. In many ways, it's safer than running your own infrastructure.

Speaker 1:

That's completely different from what people were saying 10 years ago, right? Totally, totally.

Speaker 2:

So most concerns around public cloud infrastructure are about how quickly a misconfiguration can go badly and expose your corporate infrastructure to the public internet. When you look at the problems you're really trying to solve, having a perimeter within an on-premises infrastructure helps negate that issue. So that's the one major thing you'll find with public cloud infrastructure. Now set that aside and think about what people try to do when they stand up large data centers or IT infrastructure. The first thing they try to do is get an asset inventory, which they can never fully accomplish.

Speaker 2:

That's one of the big challenges for basically everybody, unless you're AWS. In their case, they know every action and everything deployed; it's all available to them. They also know all the activities, all the transit in terms of data moving in and out, packet infrastructure, et cetera. What's important about that? It means they have knowledge of that environment that most enterprises will never have. And when you really look at the issues we find in those environments, it's really basic stuff: vulnerability management and misconfiguration still dominate most of what you'll find there. But you have the wherewithal, the discovery of all the assets at any given point in time, so it's easier for you to identify them, discover them, and then go remediate.

Speaker 1:

Potentially, you can do something about it. Exactly.

Speaker 2:

But "potentially" is sort of the operative word, exactly. So think about it as bringing structure, and this brings us to the next topic: AI in conjunction with that. If we look at AI, and security around AI in general, you're really consistently worried about data poisoning and data exfiltration. There are some other things in there, but those are the dominant concerns. So you want to protect your training data, and you want to make sure you're in control of the questions being asked of the model, so that you can thwart malicious prompts or prompts meant to extract corporate data.

Speaker 1:

So where are the concrete challenges in that? If I'm interpreting you correctly, it's not necessarily about how the underlying infrastructure is architected. It's more about the types of workloads, and the things happening around those workloads, that make it an area to do something about.

Speaker 2:

That's actually phase two; we're still at phase one. So take these two basic primitives: misconfigurations and vulnerabilities. Now I have asset management structure around my cloud environments. Then take how I build my models, the governance structures around that, and how people are allowed to execute in them to protect my data. When we combine those two worlds, you start getting into what they call agentic architecture. And this is where it gets very interesting, because those agents are actually acting on that infrastructure. You can imagine prompts where you say: please deploy these EC2 assets, or these ephemeral workloads, on whatever infrastructure you choose, whether you're going with Lambda or using Kubernetes. When that starts happening, you have communication between these agents doing activities on the infrastructure. That's when we start seeing a different attack surface, and that hasn't happened yet.

Speaker 1:

Is the nature of that communication different from the communication you see with other sorts of workloads and environments?

Speaker 2:

It's completely different, because you end up having agents speaking English to English in these cases, right? They're interacting in much the way you and I would interact. Ultimately, what they're doing is prompt generation between the two, and as a result they take action on top of that, and they can manipulate that in interesting ways. So when we start talking about the next attack vector and the combination of those things, that's actually what we have to worry about, and no one really knows what's going to happen there yet, because it's just starting. AWS, for instance, we know is actively working on that, and we should see announcements sometime next year contemplating this kind of activity.

Speaker 1:

Yeah, so in general, how much more complex does your security posture, or your security strategy, get with all of this happening, even though we're not really sure what's actually happening?

Speaker 2:

Well, so you still have to deal with those basic primitives we talked about earlier, right? You're still going to have to deal with the governance and hygiene that you normally would with your cloud infrastructure. If you're monitoring those things directly, then when agents start acting on them, you'll know when there are problems. It's the rate of the problems that's going to become more and more of an issue, and so there you also need high-productivity tooling to help you deal with all the net-new findings you're going to have as a result of all these agents working at scale constantly.

Speaker 1:

And there's also the issue of it being, in a lot of cases, real time as well, right? Yes. I know Tomer, your CEO, talked about real time being crucial for AI in terms of defense. But it's also crucial for AI in how you deploy your workloads and how you run your environment, right? A thousand percent, yeah.

Speaker 2:

And you can imagine, on the back end of this, threat actors aren't going to sit idle. If you go and close a port, the first thing they're going to do in the future is deploy AI to reverse engineer the changes you've just made, to find the next point of entry into your infrastructure.

Speaker 1:

Yeah, so in general, do you think there's enough sort of emphasis or focus on building security into cloud and AI, especially AI, I would imagine, at the moment?

Speaker 2:

I do, I do. I think the hyperscalers are doing a very good job of it. We rely heavily on AWS today, and that includes SageMaker and Bedrock. They provide most of the governance in terms of what we do, from a training perspective and from a runtime perspective.

Speaker 1:

Because that's something maybe not everybody knows: in Bedrock, for example, there's a lot of governance, security, and all sorts of policies built into the entire stack, right? Totally, totally.

Speaker 2:

I mean, I'm not an AWS employee, so I'm not trying to advertise AWS, but speaking practically, since we mostly run on it, things like SageMaker and Bedrock, in combination with the general apparatus for securing AWS infrastructure, help us out a lot. It simplifies the work we do. If I were running everything in a data center and doing this all on my own, that would be a monster task.

Speaker 1:

Yeah, if only because the assets are nearly impossible to get in view on your own anyway, right? Totally.

Speaker 1:

When you talk about cloud, and especially about who's responsible for what, somewhere down the line you always end up with the shared responsibility model. It's still your data, so you're responsible for your data. And the identity and access management of AWS, for example, is not the same as another identity and access management system that you still have to connect, build, and run on the cloud as well. How does that shared responsibility model work in this AI world? How much of that responsibility is shifting?

Speaker 2:

Let me make sure I understand the question. So let's pick on OpenAI, or actually, in this case, let's pick on Claude. Claude runs on Bedrock. So your question is: if something nefarious happens around their LLM, whose responsibility is that? Is it AWS's responsibility, or is it Claude's responsibility?

Speaker 1:

Or is it yours, because you're actually running it, right? So are we entering into maybe an extra layer of shared responsibility? It's not only the cloud provider and the customer; maybe it's also the company that develops the LLM. That's a valid point to think about, I think.

Speaker 2:

Totally. But it's not a new place for us. If you look at, say, a hardware vendor: they provide hardware, they don't provide the operating system. The operating system sits on top of it, and you have sets of applications that sit on top of that. That separation of responsibility has existed in software for decades now; I don't think this is any different. Now, are we good at understanding who's responsible for what in that relationship yet? I think it's still fairly ill-defined. But I think we will go through the same methods and madness that we did with the separation between OS, hardware, and applications over time.

Speaker 1:

Lately I've been trying to draw some parallels between the move to the cloud and now the move to AI, or maybe the move to big data, and I've come to an intermediate conclusion: we haven't really learned a lot from the mistakes we made during the cloud transition era, and we're doing the same things again in the AI era. I remember when the shared responsibility model was a hot topic. That was a reason for a lot of companies to say: well, let me think about this a couple of weeks more, or a couple of months more, or maybe a couple of years more, because I'm not sure I want to do this shared responsibility thing. And I think that's happening again now.

Speaker 2:

I think you're right. The reason it looks a little more nefarious, if you will, in this case, is that generative AI as it stands can dupe a lot of people, and that's new. We haven't had technology capable of that: it imparts a level of wisdom and intelligence that we personify and consume, and then may take wrong actions on as a result, and I think that confuses the relationship we have with it. But in terms of how we deploy and architect these things, there are some very specific layers whose boundaries we know. If I deploy Llama on Bedrock, then we know what actions that model is going to take and what governance Bedrock is going to provide. And if I choose to put a RAG architecture in front of it, I know what my responsibilities are in those terms as well.

Speaker 1:

Because in a RAG example, you would still be responsible for the data in that repository, or wherever you're going to put it, right?

Speaker 2:

Yes. Now, the foundational training that was provided on the model: if I have not changed or manipulated that in any way, we know where that responsibility lies. I think it gets a little more fungible when you start talking about fine-tuning. That's when the responsibility becomes harder to dissect.

Speaker 1:

Yeah. And one of the other things that's really happening at the moment, at least according to what everybody tells me (I'm not sure I have a lot of customers on record saying this), is that the pace at which AI is going, and at which the companies that offer AI want to go, is rather high. I would imagine that also impacts your security strategy.

Speaker 2:

To a degree. The pace is high, but in the industry you'll hear that we're reaching a plateau. When you look at all the capital deployed into these companies trying to create foundational models or build applications on top of them, we're reaching a point where there's a lot of capital in but not enough return on investment, and you're starting to see the pace calm down a bit. Now, I think that's a transitional period. I think it will ramp back up once we start seeing use cases actually monetize, but there's a clear gap between investment and returns right now.

Speaker 1:

So let's look at the customer end of this. How do you advise them to look at these developments, especially when it comes to their cybersecurity posture? They've hopefully thought about that for a long time and arrived at some sort of good situation, and now this comes up. So what does this mean for me as a customer?

Speaker 2:

So let's take this in a couple of slices. One is outside of cybersecurity. As a technologist, when I'm interacting with my peer group or providing advisory services or counsel to anyone who is developing software, I am ardent about the fact that you should be figuring out what your game is on LLMs today. And it's not just about the technology in development; it's also about your day-to-day work. You should be incorporating it into your day-to-day workflows, whether you're in HR or in IT; it doesn't matter. It's not cheating. Everybody's going to be using this. You should be proficient, or you're going to get left behind. So that's one side of the story. Now, if we talk about cyber, there are really two key cases we have to look at. One is: what does this actually provide in terms of net new capabilities for threat actors? The initial answer is that they can develop malware variants much quicker, which means they can also develop new techniques that evade a lot of the defensive mechanisms we've got out there.

Speaker 1:

So it's not only about volume of attacks.

Speaker 2:

It's also about actually developing new techniques with GenAI. All of that sits in one camp. Then you've got the issue of: now I can generate a lot of content quickly, so I can do spear phishing over text or email, what have you. And the quality goes up, so it doesn't look like it's from somebody somewhere in the world who isn't proficient in the language they're targeting. All of those things become a whole lot more efficient. Then you have the identity stuff, where you can fake a voice. Now I can call the help desk and get a set of credentials, or masquerade as an employee of a company. This is how we've seen a lot of the major identity-based attacks happen.

Speaker 1:

Yeah, deepfakes especially are obviously big. Yes, that's a very tangible impact.

Speaker 2:

Very, very. And so that's what's happening with the threat actors today. The next evolution for them is that the speed and intensity with which they can do discovery, when you go and close a misconfiguration, is going to increase. That means they will find more vulnerabilities, which means there are more vectors for them to come into your infrastructure, and they've got more tooling as a result of being able to deliver all these different kinds of payloads from the malware variants they can generate with new techniques. So that sounds like Armageddon is about to happen, and that's probably what we're going to see over the next, I'd say, anywhere from two to five years.

Speaker 1:

Armageddon? Are you predicting Armageddon now?

Speaker 2:

No, but I get what you mean. You need defensive mechanisms against this. If you're not actually relying on AI for defense, then all of that is going to take over your world, and that's going to be what you're concerned with. You need to be able to deal with automation, because you're going to be dealing with a much higher volume of findings and alerts than we see today. And on top of that, you need new adaptive detection mechanisms that can keep up with all these new techniques being delivered. That's really the world we're evolving into in terms of cyber today.

Speaker 1:

That's not very easy, especially given the fact that we don't really know where it's going.

Speaker 2:

Well, I think we just wrote the script for where it's going to end up, and that's not a pretty place to be. When you really look at how cyber has developed, monetization is actually the key driver, and that's why ransomware has been consistently popular. Billions of dollars run through the ransomware industry, which funds billions of dollars in R&D.

Speaker 1:

To come and try to figure out how to break it. Yeah, it's an industry, right?

Speaker 2:

Totally, totally. And with that, they are also well capitalized to continue down that path. And that's not even including nation states and whatnot.

Speaker 1:

But just to sort of get this completely straight, companies like SentinelOne are working on a solution for this, I would imagine.

Speaker 2:

A thousand percent.

Speaker 1:

Because, I mean, Armageddon may be coming, but we are going to combat it, right?

Speaker 2:

A thousand percent. This is why we've developed our AI SIEM, and this is why we have Purple AI. These are the basic measures we are developing over time to help defend against what we see coming in this world. Obviously, that also extends to our endpoint technology, whether you're talking about cloud workloads or traditional IT assets. All of that comes together, and that's what we're working on and driving towards, because that's the future we see.

Speaker 1:

And what does the cybersecurity industry in general need to do better, or maybe different, towards this future? Because you can't do everything alone, right? I mean, there are limits to what you can do and there are limits to what others can do, so ideally, there will be more cooperation, or collaboration, or co-opetition, or whatever you want to call it now.

Speaker 2:

I think that's what we have to hope for. I think the cybersecurity...

Speaker 1:

I always learned that hope is delayed disappointment, or something like that.

Speaker 2:

You're not wrong, but this is a critique of our industry at large. I don't think we're good collaborators, and that's something we really need to work on within the industry itself. We have a lot of combative competition, and insulation as a result of that, and that insular behavior is not good for the customer. When you really look at what's happening, none of us can individually defend against the army of threat actors out there. This has to be a joint defense strategy, so I think all of us in the industry today need to disarm a bit and start coming together. To be fair, it is a very competitive market, right?

Speaker 1:

Completely. There are 3,500 security companies in the world, and I think every day 10 new ones pop up.

Speaker 2:

Yes.

Speaker 1:

Especially now, with AI coming up, I see a lot of parallels with the rise of API security. All of a sudden, you had API security companies that didn't exist before; that functionality used to be part of WAFs or secure gateways, whatever you want to call them. Now AI comes up, and maybe next year something new comes up, and each wave brings its own cohort of security companies. We're continuously adding to this mix, and everybody's there to monetize, to grow, and to become more relevant, so I quite understand why collaboration is hard. But maybe the real question is: are we going to be disappointed about this, even though it's necessary to do it?

Speaker 2:

I think it's a matter of perspective. We're always going to see progress, and we have to relish the progress we make, but we're also going to see tragedies along the way. No one's going to prevent all breaches. Things will happen as a result of human error or misdirection. So at some point we will all be disappointed by these things. But every time you see these waves, we're basically developing new mousetraps to trap a mouse in a new way, for a new attack vector that's been discovered or opened up by a new technology. That's just part of the industry. I don't look at that as a bad form of competition. That's a good thing.

Speaker 1:

I think it's more among the established firms where there should be more collaboration across the board. And just to more or less round off the podcast episode from a customer perspective: I always try to end with some concrete things customers can do. I like the concept of: what are the questions you should be asking your security providers, your MSSPs, or whoever does your security? Maybe you have your own security team. What are the questions the people who have to decide what solutions to go for should be asking, concretely?

Speaker 2:

So I'm blessed with having a wonderful CISO: Alex Stamos, who I think has a great understanding of this. I think people get too worried about the tech out there that they can consume. What you should really start considering is: what are your choke points? It's that basic. Yes, you should have endpoint technology deployed. Yes, you should be using Zscaler or a similar apparatus. But you're looking at: how do I have containment across all of the different assets in my infrastructure? Focus on that first, and that will guide you to the technology you should ultimately implement.

Speaker 1:

Yeah. And then just to come back to the beginning: the answer to that question is easier to give in a cloud environment, because you have much more insight into what's happening and where everything's going. How do you do that on-prem?

Speaker 2:

Well, believe it or not, the concept actually came from on-prem. That's why people love firewalls, for instance: it's the one way into your infrastructure and the easiest way to shut down anyone who's connected to it. The cloud opened that up and made the perimeter a little loose, if you will. So yes, the cloud does make it easier to identify these things, but that ability to shut the internet off from your infrastructure, those days are gone.

Speaker 1:

Yeah, but that's still a technology perspective, because if you look really closely at any organization and ask what are my choke points, those choke points are usually sitting between the chair and the screen, right? At a desk.

Speaker 2:

The people, yeah. So there are also the policies you would put in place for something like that: you can't connect to our infrastructure without being on Zscaler. If I have that, it's equivalent to the firewall we were talking about earlier, right?

Speaker 1:

So it's also a matter of really thinking about your layered approach to security. But then the big question is how you keep that in check in terms of costs, because that can spiral out of control very quickly. If you just keep adding layer upon layer, at some point your CFO is going to say: well, maybe stop doing this for a while.

Speaker 2:

Yeah, I mean, there's a balance, and there's also a usability issue that comes into play if you've got too many layers. But philosophically, that's where you start, and that will guide you to the technologies you should be looking at, obviously constrained by your budget.

Speaker 1:

Maybe that's a topic for another episode, otherwise you're going to think we'll be here for another 45 minutes. How to shake your...

Speaker 2:

CFO for more dollars to secure your infrastructure.

Speaker 1:

I mean, obviously there are big things going on there, right? Who does your CISO report to? How do you handle that from an organizational standpoint? There's a lot of discussion on that as well. Completely. Some other time maybe, but for now: thanks for joining.

Speaker 2:

Thank you for having me Always a pleasure.