Techzine Talks on Tour

Trust and AI in cybersecurity: difficult but crucial to navigate (Alex Stamos, SentinelOne)

May 13, 2024 | Coen or Sander | Season 1 Episode 3

For this new episode of Techzine Talks on Tour we sat down with Alex Stamos during RSA Conference. Stamos is Chief Trust Officer at SentinelOne and, among other things, also a Stanford professor and a former Chief Security Officer of Facebook. We took a deep dive with him into the concept of trust.

Trust is a key component of our digital lives. It plays a role in our personal as well as in our business lives. Organizations need to know who they can trust before entering into a relationship with a vendor. At least, that's how it should be. In practice, though, a lot of time is wasted on things like vendor risk management.

The existence of vendor risk management and its accompanying forms isn't a bad thing in itself, according to Stamos. However, thinking that someone filling out a spreadsheet will solve the potential problems of a product or system built on tens or hundreds of millions of lines of code is far from realistic.

Know who to trust

One of the key questions we discuss during our conversation is how organizations can know who to trust. That's a big question that deserves a substantial discussion. We go into the role a big player like Microsoft plays in this respect. Should you let the company that builds Windows be responsible for your cybersecurity as well? And how can a cybersecurity company prove to its customers that it's worthy of their trust? In other words, how do you provide enough transparency?

Another topic we discuss during this episode of Techzine Talks on Tour is how AI will impact the cybersecurity industry, not only from a trust perspective, but also from an architectural and process perspective. Stamos is of the opinion that the world is moving towards automated cybersecurity, thanks in large part to AI. This will give the defenders an advantage over the attackers, at least for a while.

Automation will be crucial because of the sheer amount of defending organizations need to do. This also implies, according to Stamos, that the industry as a whole will have to move towards a centrally orchestrated way of gathering and analyzing data. Only then will it be possible for organizations to properly defend themselves. There's obviously a self-serving component to this statement, but we think Stamos fundamentally isn't wrong. It's a good concept to discuss, that's for sure.

Listen to this new episode of Techzine Talks on Tour now!


Transcript

Speaker 1:

Welcome to this new episode of Techzine Talks on Tour. I'm Sander, I'm at RSA Conference this week and I'm here with Alex Stamos. He's Chief Trust Officer at SentinelOne. Welcome to the show. Not the RSA Conference show, but this particular show.

Speaker 2:

Yeah, I'd much rather be here than at the actual RSA show, so thank you for giving me an excuse to escape.

Speaker 1:

That makes two of us. That's good. So obviously, you being the chief trust officer, we're going to talk about trust, which is a big topic.

Speaker 2:

Yeah.

Speaker 1:

But a very important one, especially in cybersecurity, but also when it relates to AI and everything that goes with that. Is trust an important part for customers when they select a cybersecurity solution, or any solution, especially from big tech? Or is it? Is it really important for customers to trust?

Speaker 2:

Well, it should be.

Speaker 1:

But what's your experience, from your conversations and from the market?

Speaker 2:

Yeah, I think companies are waking up to the number of organizations and projects, and sometimes nonprofit developers and random anonymous people, upon which their IT systems depend. We've gone through a real shift to the cloud over the last 10 years, and I remember the beginning of it; I'm old enough to have been in this industry that long. As my friend Dino Dai Zovi said, I'm old enough to remember two Microsoft Trustworthy Computing memos.

Speaker 1:

So it's like how we're marking out the time.

Speaker 2:

Every 20 years Microsoft has to write a memo. And the move to the cloud was a big deal, right, because the way people understood "I want to trust something" was: you have shipped software that you put on a device you control, in a data center you control. You're able to control the perimeter. Therefore, all you're trusting is the quality of the code. So moving towards a model in which the DevOps, the security operations, a bunch of that stuff belongs to somebody else was a huge deal 10, 12 years ago. And now we're living through the consequences: companies were first overly careful, and now...

Speaker 2:

I think people are kind of nonchalant about the idea of "I can use this SaaS product" without really understanding what data is stored there, what risk there is, what possible risk there is from this cloud product making east-west movement to other systems.

Speaker 1:

And there are different levels of trust as well, right? You can say you trust that the service works the way it should, and you hear a lot of repatriation stories, about companies that moved into the cloud too fast, found it doesn't really work as advertised and are going back. But you're specifically talking about trust as it relates to security, and maybe to the well-being of organizations, right?

Speaker 2:

Yeah, and I'm seriously talking about, you know, we're at a security conference, the security products we've moved into the cloud.

Speaker 2:

Most companies are now in this weird hybrid mode where they still have a significant amount of on-premise infrastructure and they have a bunch of stuff in the cloud and they're learning that.

Speaker 2:

It is not the intersection of vulnerability sets, it is the union: all of the problems that you always had on-prem still exist, and you have these new problems in the cloud, and bad guys are getting very good at taking an advantage that they gain in one place and taking it to the other domain. So how you handle trust in that environment is quite complicated. Every major company has a vendor risk management process, and I find that to be, personally, one of the biggest wastes of money that we have in global capitalism, or at least in the security industry. A ton of time is spent getting companies to fill out these questionnaires, and they're all kind of the same but different enough that you have to spend a ton of time filling them out. And it's rare that the consumers of these products actually sit down and think among themselves: what are we using this thing for, where is it hooked up, and is this thing an acceptable loss?

Speaker 1:

This really gets into the old compliance kind of discussion, right? So a lot of companies do stuff because they have to and they don't really think about why something makes sense or why something else doesn't make sense. That's what you're getting at, right?

Speaker 2:

Yeah, and often the vendor risk management process is a one-time thing, upon initial acquisition or when you re-sign a contract. It's not a continuous idea of: I've got this thing sitting out there that's possibly risky to me.

Speaker 1:

And it hasn't really helped very well either, right? Because...

Speaker 2:

It has not helped much.

Speaker 1:

We've seen a lot of third-party software attacks which ideally you would catch before they happen.

Speaker 2:

Well, I don't think it's just that. There's no realistic process by which, if I'm going to use some kind of product or system that is based upon tens or hundreds of millions of lines of code, you filling out some kind of spreadsheet is going to fix any problems. What you need to do is be thoughtful about what the product is being used for and then plan for it to be a reasonable sacrifice, right? If you can't 100 percent trust a product, and nothing really can be trusted 100 percent, then you really need to limit what the risk is to the rest of the organization, and then be very careful about applying the controls you put in place to specifically match this thing that you're using and what possible impact it has.

Speaker 1:

So how do you do that then, if you don't fill out tons of vendor risk forms? How should an organization tackle this? Let's start with that.

Speaker 2:

The forms are fine. I wish people would move to standards; there are a couple of standards out there, and it would be way better if those were applied consistently. From my perspective, part of where we're moving in cloud services, including security services like the products SentinelOne provides, is:

Speaker 2:

We have to provide the transparency to customers so that they can trust but verify, right, so that they have their own visibility into what we're doing and what's going on. And I think, coming out of some of these cloud scandals that I'm sure we're going to talk about, there's going to be a significant push for companies to have a much greater level of transparency into their cloud providers.

Speaker 1:

Yeah, so that goes all the way from the basic infrastructure to everything that they deliver on top of the cloud.

Speaker 2:

Yeah, right. So I feel like the infrastructure-as-a-service platforms have done this best, in that you've got Amazon CloudTrail, for example, and Google's got an equivalent: the ability to get pretty fine-grained logs into some place where you can ingest them, look for anomalies, use them for incident response and such.

Speaker 2:

Where this is less effective is in the SaaS layer, where you have SaaS products that are often only logging user actions: an admin logged in and did this, they did that. They're not giving you any visibility into their internal state or changes that they've made, and I think that's a mistake, because those logs do exist; they just don't make them customer-facing. And I think giving customers some chance of catching an intrusion is going to be key.
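
To make the log-ingestion idea concrete, here is a minimal sketch of pulling the kind of fine-grained IaaS logs Stamos mentions, using AWS CloudTrail via boto3. The anomaly check is a deliberately naive stand-in: the IP baseline and the console-login rule are assumptions for illustration, not anything discussed in the episode.

    # Pull the last 24 hours of console logins from CloudTrail and flag
    # source IPs outside a known baseline. "known_ips" is a placeholder
    # for whatever anomaly model or baseline you actually maintain.
    import json
    from datetime import datetime, timedelta, timezone

    import boto3

    cloudtrail = boto3.client("cloudtrail")
    known_ips = {"203.0.113.10"}  # hypothetical baseline of expected IPs

    end = datetime.now(timezone.utc)
    start = end - timedelta(hours=24)

    paginator = cloudtrail.get_paginator("lookup_events")
    pages = paginator.paginate(
        LookupAttributes=[
            {"AttributeKey": "EventName", "AttributeValue": "ConsoleLogin"}
        ],
        StartTime=start,
        EndTime=end,
    )

    for page in pages:
        for event in page["Events"]:
            detail = json.loads(event["CloudTrailEvent"])
            ip = detail.get("sourceIPAddress")
            if ip and ip not in known_ips:
                print(f"Anomalous console login from {ip} at {event['EventTime']}")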

Speaker 1:

Yeah, so it's up to the SaaS providers to make sure that customers have access to this. But then the next step is obviously that customers need to be able to interpret those logs and make sense of them, right? And that's also quite a big ask, I think.

Speaker 2:

A key first step is the plumbing, right? So another thing that I've been talking to a lot of companies about is: if you want to play against the level of adversary that most companies are facing today, you have to have all your data in one place that's very fast. And the real weakness of a lot of companies is that their security data pipeline is 10, 15 years old. It's based upon Splunk or some other product that was built in the spinning-disk days. It is not cloud native.

Speaker 2:

There might be a cloud component, but when you look at it, it's clearly an on-prem product that's been shoved into the cloud. As a result, it's hard to put data in, or, if you get the data in, retrieving it is actually quite slow. It's much more of an archive than a live data store. And so doing the smart things you want to do, where you have AI looking for anomalies, where you have gen AI creating queries, where you have the ability of, huh, I want to pull 100 million records and create a training set for my AI system, those things are incredibly hard on that kind of data platform.

Speaker 1:

And also, I mean, not to mention, those platforms are usually extremely expensive as well, right?

Speaker 2:

Yeah, I mean, the joke being that Cisco bought Splunk because it was the cheapest way for them to get a license. Yeah, yeah, yeah.

Speaker 1:

I mean, we are seeing the end of the SIEM era, right? It's more or less overtaken. Maybe for compliance and auditing purposes you still use it, but for most other purposes you don't really need it anymore.

Speaker 2:

I think that the modern move is to a security data lake: a data lake that has security-sensitive logs, but perhaps operational logs as well. Then you build a SIEM layer on top. The SIEM is a feature, it is a view. The SIEM is a view into the data lake. It is not the thing that's holding the data itself.

Speaker 1:

No, no, because then you're approaching it from the other way around and that doesn't really work.

Speaker 2:

Right, yeah. And so I think SIEM functionality is still important, especially for flow control, right? I got this alert, it got picked up by this person, it's tied to this ticket. Those are the kinds of things for which SIEM-like functionality is still necessary, but the underlying thing should not be a database that is only used for the SIEM.

Speaker 2:

You need to have data lakes that are based upon modern storage structures, that are natively distributed, that have a huge amount of RAM caching, so that you have the ability to do things like high-speed parallel analysis.
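
As a concrete illustration of "the SIEM is a view into the data lake, not the store": here is a minimal sketch using DuckDB over Parquet files. The file layout, column names and the alert rule are all hypothetical, chosen only to show the pattern of raw telemetry living in the lake with the SIEM defined as a query layer on top.

    # The data lake holds raw endpoint telemetry as Parquet files; the
    # "SIEM" is just a view defined over them, not a separate database.
    import duckdb

    con = duckdb.connect()

    # Raw events, as written by collectors (hypothetical path and schema).
    con.execute("""
        CREATE VIEW raw_events AS
        SELECT * FROM read_parquet('logs/endpoint/*.parquet')
    """)

    # The SIEM layer: a view surfacing only alert-worthy rows.
    con.execute("""
        CREATE VIEW siem_alerts AS
        SELECT ts, host, process, parent_process
        FROM raw_events
        WHERE process ILIKE '%powershell%'
          AND parent_process NOT IN ('explorer.exe', 'cmd.exe')
    """)

    for row in con.execute("SELECT * FROM siem_alerts LIMIT 10").fetchall():
        print(row)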

Speaker 1:

That's also important for regulation. For example, obviously I'm from Europe, so we've got NIS2 coming in October, and for that you need to be able to move fast. Step one is that you have to report a breach within 24 hours. And then step two, the reporting duty itself, is that you have to provide a full written report within 72 hours.

Speaker 2:

Right, and if your infrastructure is slow, you can't really find anything. Which, I would just say, is crazy: there's a ton of breaches that I've worked in which you don't have containment in 72 hours. If you come into a company and they don't have EDR, they don't have a data lake, they don't have a single pane of glass for the responders, and you keep asking questions like "I need your Palo Alto logs" and they're like, okay, great, it'll be six hours while I go to every single one and pull them off the console. Those companies are not going to have a response in 72 hours.

Speaker 1:

No, no.

Speaker 2:

So I think it is nice that they're being aspirational for companies.

Speaker 1:

It's one of those very ambitious plans that the European Union has. Usually they're drawn up by people... well, it always works out great.

Speaker 2:

My understanding is every EU plan has worked out fantastically.

Speaker 1:

The thing is, one of my pet peeves in that respect is that these things are drawn up by people who don't understand what they're talking about. Right, shocking. I saw one of the first versions of the Cyber Resilience Act that they are working on, and they wanted to make upstream developers, open source developers, responsible for the security of that code. That's just not how open source works and how communities work.

Speaker 2:

No, not if you want people to work for free. You're not going to generate all this liability for them.

Speaker 1:

The only thing you're going to get is that all the open source development is going to leave the EU. That's basically what's going to happen, and you don't want that. But we're digressing a little bit. So this trust concept, from a SentinelOne perspective, you can actually give that to your customers? You have full transparency on...

Speaker 2:

Well, we have transparency. It's something we're continually working on, right? Through our security data lake we keep working on exposing more data, and all of our stuff ends up in the same place as the logs for your own platform. But yes, this is a continuous process for us: giving customers the visibility necessary to understand, because we have to recognize that we are in a critical place. For many of our customers we are already in kernel mode.

Speaker 1:

On thousands of machines.

Speaker 2:

You have arbitrary execution. Now, you can turn off a bunch of the obvious execution paths, but it's still incredibly sensitive, even if you turn off the most dangerous functionality.

Speaker 2:

We have incredibly sensitive data, and so it's really important for us to demonstrate to customers up front that we're doing a good job, to fill out their vendor risk questionnaires, and then, if something goes wrong, to be up front and honest with them. And then, like I said, to allow for trust but verify: to continue to put all of our logs in a place where they have access to them themselves.

Speaker 1:

It's funny you mentioned the term platform just now. There's quite a big push and a big move, especially by the big platforms, to do more and more on their platforms. Microsoft is an obvious example, but that doesn't always work out very well. I think there are a bit too many holes and vulnerabilities in those platforms. It's hard to trust big tech or a platform like Microsoft.

Speaker 2:

I think there's two questions here. Do you want your security platform to be the same as your base cloud platform? I think the answer, as we've seen from Microsoft, is no.

Speaker 2:

Buying a house from somebody and then, when the house is already on fire, also buying the fire department from them, paying them extra to put out the fire, doesn't feel right, and it's also just not a smart move. And think about what we've seen from Microsoft, breach after breach. They had the CSRB report, which was about a Chinese breach last year. They have these 8-Ks coming out in which it seems like the SVR, the Russian SVR, is still in Microsoft's network. So they are in an active knife fight with the Russians right now, for which they have my sympathy.

Speaker 2:

There's a lot of good people who work at Microsoft, too, who are working on this. But saying "this is going to be the company that has all my identity management, that has all my cloud information, that has all my email, and then the security products that are supposed to be monitoring those things I will also get from them" means you're buying into the same blind spots. We actually saw this with Defender, in that there was a privilege escalation bug in Defender that Microsoft marked, just because of their model, as a "do not fix" while it was being used in the wild. [inaudible] Because they will end up making the same "do not fix" decision; they have the same blind spot that then spreads across everything.

Speaker 2:

And so I think that's one thing on platformization and security. I do think there is a natural move towards buying into specific security platforms, for a couple of reasons. One, so many people are just tired of the integration hassles. Lots of work has gone into standardized alert formats, standardized log formats, all of this, and that's wonderful, but it's still kind of a pain to make all your stuff work together. Our sales engineers spend plenty of time on these calls with a customer where it is the sales engineers from your SIEM and from your EDR manufacturer and your cloud provider all blaming the other one.

Speaker 2:

Right, and everybody's trying to act out of good faith, but it's just a naturally challenging thing. The second is, I do think we're moving towards an AI-versus-AI world. In five years, the vast majority of offense is going to be by AI systems, and so defenders are going to have to be using AI. You'll have an attacker AI supervising a bunch of sub-agents that go scan their victims, find exploits, do the exploitation, do the initial intrusion,

Speaker 2:

do east-west movement, go all the way to the end until they tell a human: I'm all done. And defenders are going to have to have a master AI system that is working with sub-AI systems that are watching your logs, watching your networks, watching installs, working on DLP, and that all feeds into the master system, which says everything's going okay, or: oops, something is going on, I need a human to tell me what to do, but here at least is all the data in one place so that you can make a decision in a couple of minutes, and it doesn't take you two hours of investigation. That is very hard to do unless you have the platform.
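
The master-plus-sub-agents pattern Stamos sketches here can be made concrete in a few lines of code. The following is a hedged illustration, not any vendor's architecture: each domain agent returns a verdict, and the orchestrator acts on corroborated signals or escalates to a human. All names, thresholds and actions are made up for the example.

    # Sketch of a defensive orchestrator: sub-agents for network, logs,
    # etc. each report a verdict; the master escalates or auto-contains.
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Verdict:
        source: str        # which sub-agent produced this
        suspicious: bool
        confidence: float  # 0.0 - 1.0
        detail: str

    def network_agent() -> Verdict:
        # Placeholder: a real agent would score live flow telemetry.
        return Verdict("network", True, 0.72, "unusual east-west SMB traffic")

    def log_agent() -> Verdict:
        # Placeholder: a real agent would analyze authentication logs.
        return Verdict("logs", False, 0.90, "no anomalous logins in window")

    def orchestrate(agents: List[Callable[[], Verdict]]) -> str:
        verdicts = [agent() for agent in agents]
        hits = [v for v in verdicts if v.suspicious and v.confidence >= 0.7]
        if not hits:
            return "all clear"
        if len(hits) == 1:
            # A single signal: gather evidence and ask a human.
            return f"escalate to human: {hits[0].source}: {hits[0].detail}"
        # Corroborated signals: take pre-approved containment actions.
        return "auto-contain: isolate affected hosts, rotate credentials"

    print(orchestrate([network_agent, log_agent]))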

Speaker 2:

That's all tied together, right? It's just very hard to do the AI work, because you need the data, you need the integration. And so I think the AI push does make a natural push towards platforms being better for security. That being said, I just think it's silly to have the thing you're defending and the defense system be built by the same people, because, again, they have the same blind spots. On the one hand, yeah, I would agree.

Speaker 2:

On the other hand, Microsoft would be in way better shape now if they were using somebody else's EDR, I'll just say it, right? The fact that you can compromise them and neutralize both their security protections and their product protections and security features at the same time is a huge weakness for them. For most companies, their production systems would not also be their defensive systems.

Speaker 1:

They would have somebody else. Even if they more or less configured it completely correctly, 100 percent, it would still not be the right thing to do.

Speaker 2:

There's a natural risk to the decisions being made, even in a large company, of having the same kind of mindset that goes into them, and trusting the same company for both.

Speaker 1:

Well, I mean, there's also something to be said for the idea that if you have very close ties with one company that does everything for you, you can also speak of maybe a higher level of trust between you, because if you really trust Microsoft or someone else, great.

Speaker 2:

So, Microsoft's biggest customer is probably the US government, and they clearly don't trust them anymore, right? What the CSRB report showed is that Microsoft was lying to people. They told people they'd figured out the Chinese breach. They then went back and were like: oh, we made a mistake. Okay, people make mistakes. I've been in parts of incident response

Speaker 2:

where the initial communication turns out not to be correct. But Microsoft decided not to correct it until the CSRB confronted them and made them do it. So, yes, you can have a long-term relationship with a company, but Microsoft basically was lying to the US government. They were lying to their customers until the US government uncovered it. So from my perspective, that is not a company you can trust.

Speaker 1:

Now, quite recently you had the MITRE thing going on. That was also very weird, because they were also very, very close with the US government as well, right? Helping them with the DOD and all sorts of other departments. And they obviously do the ATT&CK framework.

Speaker 2:

So yeah, and MITRE's a very important organization, for sure.

Speaker 1:

Yeah, yeah, they do a lot of defense work. And they didn't really have their security in order either.

Speaker 2:

Yeah, they also had a problem. I think for all of these folks the transparency is critical. Now, in that case we're talking about a government contractor, one that is almost uniquely a government contractor. If they're turning stuff over on the high side to the government, or via their contracting mechanisms, then that seems reasonable. For a company like a Microsoft or Amazon or Google,

Speaker 2:

if they have a breach, everybody on the planet relies upon them. Those three companies in particular are critical infrastructure for every human being. They have data on every single one of us; there's no way to opt out, there are so many products that run on them. And so for the major hyperscale cloud providers, I think they have the challenge that they have to be transparent with everybody, because there's effectively nobody who's not their customer, even if they don't have a direct financial relationship.

Speaker 1:

And then, going back to the AI part: is that the biggest threat to trust as well?

Speaker 2:

So, in the big picture, yeah, I would say AI is causing complexity for a couple of different areas of trust.

Speaker 1:

Because AI and trust are very closely related as well, right? So you also need to trust the AI that you're using to combat the other AI that's attacking you, right?

Speaker 2:

Yeah, and it's hard, because they're such complex systems. I mean, technically a lot of them are deterministic, but they don't feel that way. Some of them are deterministic, some aren't, but predicting what they're going to do is quite hard, right? Which is the whole point: what makes them feel magical is that you don't know exactly what they're going to do.

Speaker 2:

It's not a computer program where you say: if I do these three things, it will do the fourth. And so that's incredible, but it also makes it harder to trust these systems, because they are beyond human understanding. And, you know, we're here at RSA, and there's a lot of companies down on the floor that are basically saying: we will solve your AI risk problems for you.

Speaker 2:

And if they say that with any kind of confidence, they're lying to you, really, in that we don't know what the AI risk problems are right now. These are the very early days of the fundamental research happening on these systems: how do you manipulate them, how do you attack them. To me it's like the 90s in web apps. If somebody said "I can build you the world's most secure web app" in 1998, that meant nothing, because a bunch of the attacks that ended up being big deals for web applications weren't invented yet.

Speaker 2:

And that's where we are with AI. What will it look like from 2030? It'll be: oh my god, we were so silly to think you could build a quote-unquote secure AI system in 2024.

Speaker 1:

But on the other hand, I think it's quite a well-known fact that lots of attackers, and lots of the organizations behind the attackers, usually have all the tools. They have the SentinelOne EDR, they have all the stuff, and they're trying to find the loopholes in there. But if you're using an AI which is not deterministic, or at least where you can't really predict what it's going to do, then that would actually help the defenders, right?

Speaker 2:

Right, and that's how we use AI. SentinelOne has used AI since the beginning of the company, like 12 years ago. Not generative AI, but AI for detection. Specifically, the theory that, instead of doing signature-based detection, since there's an infinite number of binaries that can do something bad, there's a finite number of choke points on an operating system at which malicious behaviors are found, right? Like, you want to hook the keyboard:

Speaker 2:

There's seven ways to do it, and you have to do one of those seven ways. You want to get screen sharing, you want to call PowerShell, and so on. If you have those choke points, then having the telemetry data, and having something in the cloud that is AI-based watching it, you're right, does provide a much less deterministic view for the attacker. This is one of the classic problems with AV: the attackers, like you said, will just buy these products.

Speaker 2:

I remember the FireEye on-prem devices: here's an appliance, which was kind of the first advanced malware product. And their success was also their biggest weakness, in that there were probably a dozen of those racked and stacked in Beijing and Moscow.

Speaker 2:

You would be crazy, as an APT, to have a piece of malware that would be caught by FireEye's device, because you could buy them. And so a core part of the SentinelOne feature set, I think, is the fact that a bunch of the AI is updated continuously in the cloud. That means that if you do something, even if it's not caught initially, you're running a significant risk of being caught eventually, as the AI gets smarter and smarter and goes back and sees: oh, this anomaly happened.
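
The choke-point idea lends itself to a small illustration. This is a hedged sketch, not SentinelOne's detection logic: it maps raw API-call telemetry to choke-point categories and flags a process that touches several unrelated ones. The event names, mapping and threshold are all invented for the example.

    # Behavioral scoring over OS choke points instead of binary signatures.
    from collections import Counter

    # Hypothetical mapping from observed API calls to choke-point categories.
    CHOKE_POINTS = {
        "SetWindowsHookEx": "keyboard_hook",
        "GetAsyncKeyState": "keyboard_hook",
        "BitBlt": "screen_capture",
        "CreateProcess:powershell.exe": "script_engine",
    }

    def score_process(events: list[str]) -> float:
        """Fraction of known choke-point categories this process touches."""
        touched = Counter(CHOKE_POINTS[e] for e in events if e in CHOKE_POINTS)
        # Any single choke point can be benign; touching several unrelated
        # ones in one process is the behavioral signal.
        return len(touched) / len(set(CHOKE_POINTS.values()))

    events = ["SetWindowsHookEx", "BitBlt", "CreateProcess:powershell.exe"]
    if score_process(events) > 0.6:
        print("behavioral alert: process touches multiple choke points")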

Speaker 1:

But then you're also talking about maybe a different definition of trust, in the sense of: what should you trust, right? Because trust and transparency, all these things, are quite closely related. When it comes to AI, maybe it's not about trusting complete transparency into how the AI works, but about trusting that AI is a better way of defending yourself against the attackers.

Speaker 2:

It's like trusting another human. You don't actually know, with another human being, maybe your spouse, maybe your long-term friend, what they're going to do when the chips are down. That is the funny thing about AI. Not to anthropomorphize it too much, but you do have to treat it a little bit like a person. It's not like a computer program where you can predict what's going to happen. You're hoping it does a good job.

Speaker 1:

So you base your trust on perceived outcomes, I would imagine.

Speaker 2:

Perceived outcomes, testing, right.

Speaker 2:

I think, for all of these AI systems, I'm always a huge fan of purple teaming, right? Red teaming with intentional blue team review, and very thoughtful review of the entire kill chain, is a critical part of security. And I think a critical part of testing AI systems is doing realistic red teams and seeing what the result is. None of the products will catch every single one, but if you go through all of the steps in a kill chain and it gets caught at some point, then, you can't trust that 100 percent, but at least it's going to be marginally useful.

Speaker 1:

And we also know now why purple is the color for SentinelOne, maybe. The new AI thing is called Purple AI.

Speaker 2:

It is called Purple AI. Maybe it's based on that, I don't know; I had nothing to do with that. I am sure we spent a gazillion dollars on somebody to tell us exactly which Pantone purple we should be using to have people trust us. Not my area. No, not mine either.

Speaker 1:

So, just to get near the end of the episode: what's the next thing that you see happening in terms of trust combined with AI? Where does this go? Because you mentioned we don't know what it's going to be like in 2028, in 2030, whatever. How can you work on this topic if you don't really know where it's going?

Speaker 2:

I think what's going to happen with AI is it's going to revolutionize different areas of security and in each case, the defenders will have an initial advantage that will be decreased by attacker ingenuity.

Speaker 2:

So right now, a lot of the AI focuses on operations. It's about making your SOC analysts faster, finding better alerts, finding malicious stuff, speeding up response. And defenders have a huge advantage in that we have products like SentinelOne that are using AI, while offensive actors are generally not using AI for actual attack coordination yet, but they will get there. So defenders are first, because we've spent billions of dollars in R&D collectively and thousands of people have been working on it. But the attackers, as the open source models come out, as the foundational models that they can then modify come out, are going to start to catch up, and we're going to start to see, I expect, cyber extortionists, financially motivated actors, using AI to automate parts of the kill chain. Next, I think, is application security. A number of companies are doing smart things around using AI to look over your shoulder as a developer, kind of co-pilot models: it looks like you're doing something crazy. Now, we've had SAST, we've had static analysis, dynamic analysis, fuzzing.

Speaker 2:

But often those systems lack the bigger-picture context, and I think LLMs give us an interesting option of training them on: these are all the other things happening at a company, these are all the other systems you're interacting with.

Speaker 2:

Then it can make an intelligent decision of: I think you're doing something stupid. So that's a defense advantage, because there's companies spending hundreds of millions of dollars of venture capital on that. But then the attackers will come up behind and will be able to drop binaries into AI systems that will find vulnerabilities and write shellcode automatically.

Speaker 1:

Yeah, but you will only know what to do next when the attackers have caught up with the defenders.

Speaker 2:

Yeah, I think so. That's when you know what you have to do. And then I think the real final attack mode is the fully automated malware, no command and control: a piece of malware that's smart enough that it runs inference locally. This will be interesting to see. They just announced a new iPad today with more AI capability.

Speaker 2:

You don't get a lot of malware on it, but it'll be interesting to see, as chips have neural engines built in, as there's more GPU power in servers and such, whether or not people build malware that actually does inference using the local AI hardware. I think it's quite possible. What we've seen in a lot of good research is that, with quantization and such, you can run this stuff on CPUs. What I'm really afraid of is a totally smart Stuxnet, right? Stuxnet cost probably $50 million, it is rumored, of the American and Israeli taxpayers' money, and it was based upon all this human intelligence about how the Iranian nuclear plant worked.

Speaker 2:

[inaudible] That could be an extremely dangerous weapon.

Speaker 2:

That, I think, would be first from the financially motivated actors, because it would be so dangerous that states would not use it unless you had a real shooting war.

Speaker 1:

But that would also imply that we're moving towards a fully automated defense.

Speaker 2:

That's the only way you can protect against it, because otherwise you're not going to be able to protect yourself.

Speaker 1:

You just would be too slow.

Speaker 2:

Yeah, so I think you'll need fully automated defense that can see: okay, there's this piece of software that's moving its way through my network. I don't have it in any of my signatures, but I recognize the behavior is malicious. I will intentionally isolate every machine it's touched and do a bunch of work: change a bunch of accounts, rotate a bunch of passwords, microsegment a bunch of networks, and then tell my humans: hey, I fixed this for you.

Speaker 1:

But then you get into the early cloud kind of mindset again, right? Probably a lot of companies, especially in the financial industries, don't want to give operational control to AI.

Speaker 2:

Oh, it's going to be scary, but it's going to have to happen. And I think that's going to be a fascinating part of the next couple of years: how do you build guardrails around how the AI operates, in a mechanism that, I feel, is just like with a tier one SOC analyst? Tier one SOC analysts are not allowed to pull the plug on an entire data center. When I was at Facebook, we had very specific authorities for tier one, tier two, and then for the executives who were running an incident. If an incident was a certain level, you'd have an executive incident manager. They had the power up to basically shutting down the data center.

Speaker 1:

But not all of Facebook.

Speaker 2:

You'd have to go to Zuck for that. So figuring out what the limits of each tier of AI are, and what it can do without human intervention, is going to be this terrifying direction.
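
The tiered-authority model Stamos draws from SOC practice can be sketched in a few lines. This is an illustration of the concept only; the tiers, actions and mapping are invented, not Facebook's or SentinelOne's actual policy.

    # Each tier of automation gets an allowlist of actions; anything
    # beyond its tier requires escalation to a higher (human) authority.
    from enum import IntEnum

    class Tier(IntEnum):
        SOC_TIER1 = 1          # automated agent / junior analyst
        SOC_TIER2 = 2
        EXEC_INCIDENT_MGR = 3  # executive running a declared incident

    # Minimum tier allowed to take each action autonomously.
    REQUIRED_TIER = {
        "isolate_host": Tier.SOC_TIER1,
        "rotate_service_credentials": Tier.SOC_TIER2,
        "microsegment_network": Tier.SOC_TIER2,
        "shut_down_datacenter": Tier.EXEC_INCIDENT_MGR,
    }

    def authorize(actor_tier: Tier, action: str) -> bool:
        """Return True if this tier may take the action without escalation."""
        required = REQUIRED_TIER.get(action)
        if required is None:
            return False  # unknown actions always escalate
        return actor_tier >= required

    # A tier-1 agent may isolate a single host on its own...
    assert authorize(Tier.SOC_TIER1, "isolate_host")
    # ...but shutting down a data center escalates past the AI entirely.
    assert not authorize(Tier.SOC_TIER1, "shut_down_datacenter")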

Speaker 1:

It's going to be very interesting to watch in the next couple of years.

Speaker 2:

I think what will happen is, because the downside risk is so high, we're going to have to have some really smart malware, some really disruptive attacks, first.

Speaker 1:

It needs to hurt.

Speaker 2:

First it needs to hurt, and then companies will say: okay, well, the risk of this AI shutting things down and losing us money is better than the possibility of another piece of smart malware.

Speaker 1:

So in that sense it's not going to change much in how security works in general. It's always been like this.

Speaker 2:

I think we'll have all this AI, but you're going to have a human in the loop, is my guess. I mean, who knows, maybe I'll be wrong, but I think we'll have humans in the loop, in a lot of loops, until companies are afraid enough that they're willing to give AI the ability to shut down thousands of machines.

Speaker 1:

It's going to be interesting to watch. Thanks for joining, nice conversation. Thank you.

Chapter Markers

The Importance of Trust in Cybersecurity
SaaS Security and Data Lakes
Platformization and Security in AI Age
The Future of AI in Security
Future of AI Security Threats