Techzine Talks on Tour

AI agents have an identity too: how do we secure them?

Coen or Sander Season 2 Episode 12

The security landscape is transforming rapidly as AI agents join our workforce, creating an urgent need to rethink how we approach identity protection. When 80% of breaches already stem from compromised identities, how do we secure virtual employees who can't use traditional multi-factor authentication?

We sat down with David Bradbury, Chief Security Officer at Okta, to discuss this topic. This is a very important discussion to have, because traditional security approaches fall short when protecting non-human identities. 

We ask Bradbury what organizations need to do to protect and secure their modern environments better from an identity perspective. He outlines the essential building blocks organizations need to implement. These have to do with token authentication, fine-grained authorization, and asynchronous workflows. We also talk about the role of the human in this new framework.

Looking toward solutions, Bradbury highlights promising developments like Google's Device-Bound Session Credentials, which cryptographically bind sessions to specific devices. He also emphasizes the need for broader adoption of security standards across the SaaS ecosystem and calls on CISOs to demand better security features from vendors. 

All in all, a lot of work needs to be done when it comes to protecting and securing identity moving forward in an agentic world. The first order of business is to understand what the challenges are that AI agents pose. Listening to this podcast episode is a good start, as we discuss all the fundamentals.

Speaker 1:

Welcome to this new episode of Techzine Talks on Tour. I'm here with David Bradbury. He's the CSO, Chief Security Officer, at Okta. That's correct, right?

Speaker 2:

Great to be here.

Speaker 1:

So, yeah, thanks for joining us. I want to talk a little bit about identity. Not in the broadest sense of the word, because that would be a long conversation, I think, but in terms of security. We've been talking about identity as one of the most important developments in cybersecurity for quite some time now. What's the current status of identity in the cybersecurity landscape, in your view?

Speaker 2:

So, sadly, we continue to see that more than 80% of breaches are the result of a compromised identity. When you think about the importance of identity these days, it's never been more important to ensure the security of your identities and make sure the right people have the right access at the right time. It's been interesting to be here at RSA this week and really experience the advances around identity, but it would be very difficult to be here and not talk about AI and how that comes into the mix as well.

Speaker 1:

Do you think there's enough focus on identity at the moment, when you walk around this huge venue with, I don't know how many exhibitors there are, but quite a few?

Speaker 2:

What pleases me greatly is that nobody is debating the importance of identity now. Now it's about how do we get it right, how do we get it right today, and how do we make sure we continue to maintain that and stay secure with our identities over time. What I've really appreciated in the work many companies have done over the past few years is ensuring that identity is central to every zero trust security architecture, and everybody understands that. So we've been pushing hard over the past 12 months to ensure our products can really help accelerate people's journeys toward that zero trust outcome.

Speaker 1:

And it's a good goal to make sure it remains at the forefront and remains very secure. But especially with the rise of AI and agents that you alluded to earlier, is it even possible to architect something now that you're sure is also going to be able to protect customers, and the industry or the market as a whole, tomorrow? Because, I mean, I'm not an AI expert, but I read a lot about it, and I don't really have a clue where we're going, right?

Speaker 2:

It is a concerning development that I've been seeing personally, talking to my peers in the industry and to some of our customers, who tell me about the huge weight of pressure they're under from their boards and executive teams to adopt AI as quickly as possible, with the promise of AI agents solving all sorts of use cases, for customers and internally. What really concerns me, and Okta as a whole, is that people are pushing so fast that they're forgetting the fundamentals of security and identity as they build out these applications. I recall a time around 20 years ago when developers were really focused on building in the right security, and encryption was one of the key things they needed to build in, and so we saw people building their own crypto libraries. What a terrible idea: each developer can't be an expert in every area.

Speaker 1:

We've learned to rely upon SDKs and libraries written by others to accelerate. But there is something to be said for trying to make the building blocks that everybody uses more secure, right?

Speaker 2:

100%, and that's actually where we've spent a lot of time.

Speaker 2:

About nine months ago, we saw a huge influx of questions around how to be more secure when building out AI agents, so we sat down and worked through a number of areas. The big three for me, which we've now built SDKs around because we want these secure building blocks in place, so that security doesn't get bolted on at some later stage but is baked in from the start, are the three key areas we focus upon. The first, given that we're an identity company and traditionally people see Okta as tightly associated with MFA, was to get the authentication of these platforms right. So how is that different in the non-human world? Well, maybe news to nobody: a robot can't pick up a phone and hit a push prompt.

Speaker 1:

You can't wrap MFA around the tokens they get issued after authentication. You can't send a robot a message on its phone. Oh, well, you could, if you automate that as well, right? There have been software bots for a long time, so you could potentially automate this.

Speaker 2:

Indeed. So I think about the change in the way the SaaS world has been architected over the past 20 years. The traditional world of on-premise applications and data centers, we're not in that world anymore. We now have a very diverse, disconnected world, and the interconnect between all of these services really consists of these little strings of numbers and letters, these tokens.

Speaker 2:

When you're a human and you've gone through your multi-factor and you get issued a token, that's a really high-trust moment: you've gone through all your checks, and now we've given you a token. On the non-human front, we need to make sure people actually think through how they are going to get to that high-trust moment where all the needed checks have been performed before you issue the token. You also need to know where those tokens are, ensure their lifetime is as short as possible, and, in the event of something going wrong, have the ability to hit a button, revoke all of those tokens immediately, and reissue.
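The properties Bradbury lists, short token lifetimes plus a single "hit a button" revocation, can be sketched in a few lines. This is a minimal illustration, not Okta's implementation: the HMAC signing, the TTL, and the generation counter used for mass revocation are all assumptions chosen for the sketch.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"server-side-signing-key"  # hypothetical key, never leaves the server

class TokenIssuer:
    """Issues short-lived, HMAC-signed tokens for non-human agents and
    supports revoking every outstanding token in one step."""

    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds   # keep token lifetime as short as possible
        self.generation = 0      # bumping this invalidates all prior tokens

    def issue(self, agent_id):
        payload = {"sub": agent_id,
                   "exp": time.time() + self.ttl,
                   "gen": self.generation}
        body = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()
        sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
        return body + "." + sig

    def verify(self, token):
        body, sig = token.rsplit(".", 1)
        expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(sig, expected):
            return False   # forged or tampered token
        payload = json.loads(base64.urlsafe_b64decode(body))
        if payload["exp"] < time.time():
            return False   # expired: the short TTL limits the blast radius
        # A token from before a mass revocation carries a stale generation.
        return payload["gen"] == self.generation

    def revoke_all(self):
        """The 'hit a button' moment: invalidate every issued token at once."""
        self.generation += 1
```

With this sketch, `issuer.issue("invoice-agent")` yields a token that verifies until either its TTL passes or `issuer.revoke_all()` is called, after which the agent must re-authenticate and be reissued.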

Speaker 1:

Do you see a future, not a present but a future, where a non-human employee, which is what we're moving towards if you believe the companies that have a vested interest in achieving it, has the same status in the identity realm as a human employee?

Speaker 2:

100% and we need to make sure that, as we are building out those virtual employees, we are building out the appropriate safeguards around them so that they have limited access, not unfettered access, to all the data.

Speaker 1:

But the same thing holds for human employees as well, right?

Speaker 2:

Absolutely, yeah. And so, when you think about the way these agents will be interacting, largely they will interact through APIs. That's a big difference from humans: we're not typically sitting in front of a machine interacting with services through APIs. So how do we make sure we're building the right authentication around those? But also, to your point, and this is really the second piece we focused upon as a company, from a fine-grained authorization perspective: how do we make sure there is an easy-to-implement model that can be applied across both the actors themselves, the agents, and their targets?

Speaker 2:

So imagine the entirety of your Google suite with all its documents. We want, to your point, to have agents limited in the same way an employee may be limited, but we also want them to be able to operate with perhaps the same access an employee has when they act on their behalf. We can implement these models using our fine-grained authorization platform, and we've been building out a custom version of that specifically for GenAI applications. Again, you've got to get authentication right with tokens, and so a token vault is what we've created here. But you've also got to make sure there's fine-grained authorization wrapped around that, so that once you've been authenticated, you're constrained in what you can see, and that needs to be able to change and adapt based on the use case and who you're acting on behalf of.
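The fine-grained authorization model described here, where an agent acting on a user's behalf never exceeds that user's own access, is commonly expressed as relationship tuples (the approach popularized by Google's Zanzibar paper and used by products such as Okta FGA). A minimal sketch, with every object, relation, and identity name invented for illustration:

```python
class RelationStore:
    """Tiny relationship-based access control built from
    (subject, relation, object) tuples."""

    def __init__(self):
        self.tuples = set()

    def grant(self, subject, relation, obj):
        self.tuples.add((subject, relation, obj))

    def check(self, subject, relation, obj):
        return (subject, relation, obj) in self.tuples

    def check_on_behalf(self, agent, principal, relation, obj):
        # The agent needs an explicit delegation AND the human principal
        # must hold the access themselves; the intersection keeps the agent
        # constrained to exactly what the user it acts for can see.
        return (self.check(agent, "acts_for", principal)
                and self.check(principal, relation, obj))

store = RelationStore()
store.grant("alice", "viewer", "doc:q3-report")   # the human's own access
store.grant("travel-agent", "acts_for", "alice")  # delegation to the agent
```

Under this model, `store.check_on_behalf("travel-agent", "alice", "viewer", "doc:q3-report")` succeeds, while the same check against a document Alice cannot see, or by an agent without a delegation tuple, fails.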

Speaker 1:

Yeah, well, that actually sounds interesting, at least. But then you're obviously at the employee level, the identity level. The entire new layer on top, or underneath it, whatever your perspective is, is also very interesting as an attack surface, especially if you look at agent-to-agent and MCP at the moment. I've said this before: it's the opposite of zero trust, right? Because everybody can connect with everybody.

Speaker 2:

Do we need to do something about that layer, on top or beneath, as well, in terms of identity? I think there's a great debate being played out right now around the level of autonomy you can grant to these virtual employees or agents.

Speaker 2:

At one extreme, we're seeing a very strong push from a number of companies to empower agents to undertake some very high-risk use cases. At the very opposite end, we're seeing that people want these agents to check in routinely: not just a human in the loop, but constant human oversight. What we're trying to do at Okta is support the full spectrum of tolerance here. The key element, and this is really the third pillar of what we've brought to market, is the idea of asynchronous workflows. Imagine you're sitting in front of a prompt and you've asked the AI agent to go away and do something for you. You don't want to sit there and wait for it to come back.

Speaker 1:

And you can also ask an agent: do this for me every Friday at three o'clock.

Speaker 2:

Exactly. And so, under certain circumstances, you'll want it to return to you and have the human in the loop processing information, either granting the next step or declining. A couple of ways this could play out: say I have a travel agent that is helping me book my flights and needs to reach out to the HR system to pull in some of my personal details, my passport number and other pieces. I don't want this agent to permanently have access to all of my details in the HR system, so it can come through and prompt me at some point and say: I've got your itinerary, I'm about to book the flights, can you authorize this? It will then take the right inputs and go book this on my behalf. The flip side of that, if I think about a security use case: there are many companies here this week pushing the idea that you can extend the value of your security operations center with AI agents. There aren't many CISOs who are fully comfortable allowing an AI agent to act autonomously.
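The asynchronous, human-in-the-loop pattern described above, an agent working in the background that suspends itself at a sensitive step until a person grants or declines, can be sketched with asyncio. The travel-booking names are hypothetical, and a real system would deliver the approval prompt over a push channel rather than an in-process queue.

```python
import asyncio

async def book_trip(approval_queue, log):
    """A background agent that pauses for human sign-off before the risky step."""
    log.append("itinerary drafted")  # low-risk work happens autonomously
    fut = asyncio.get_running_loop().create_future()
    # Hand the human a prompt plus a future to resolve with their decision.
    await approval_queue.put(("Book flights for the drafted itinerary?", fut))
    approved = await fut             # suspend until the human decides
    log.append("flights booked" if approved else "booking declined")

async def human(approval_queue, answer):
    """Stand-in for the human in the loop: reads the prompt, sets the decision."""
    prompt, fut = await approval_queue.get()  # prompt would be shown in a UI
    fut.set_result(answer)

async def run(answer):
    log = []
    queue = asyncio.Queue()
    await asyncio.gather(book_trip(queue, log), human(queue, answer))
    return log
```

Running `asyncio.run(run(True))` lets the booking proceed, while `asyncio.run(run(False))` shows the agent stopping cleanly when the human declines; the agent holds no standing access while it waits.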

Speaker 1:

Yeah, but I think you need to think about it at least, and it is moving toward the point where it's maybe inescapable to have some automation in your security, right?

Speaker 2:

Couldn't agree more. And so how do we land on the use cases where I'm fully trusting the AI agent? Investigating phishing emails is an example; I think that's a great commodity use case, a simple use case where AI agents can be very successful in an unsupervised mode. But in other instances, if it's a high-profile executive and we've seen malware or something of a more high-risk nature, I'm going to want a human in the loop to take a look before we take an aggressive action, like freezing access to that asset and preventing that executive from being able to do their job. So there's a balancing act here: making sure we have a mechanism by which we can make a calculation as to whether we want to send this out to have somebody take a look.

Speaker 1:

And you can also give them subtle nudges and hints, right? Just saying: look, you've input the wrong password three times now, maybe think about it for a couple of minutes and come back. I think that really helps as well. Give them friendly nudges; don't lock them out for 24 hours, just for a couple of minutes.

Speaker 2:

That's exactly right. When I think about the building blocks needed to safely build an AI agent, those are the three big ones we've come to market with, and I think they address the key use cases: before the agent takes any action, can we make sure we know who they are and that they're legitimately connected, limit their access, and make it very easy to involve a human where necessary? If you can get those three pieces from us and not have to think about building them yourself, we make sure these agents are actually being built securely from the start. And you know what worries me greatly? Gartner's prediction in October of last year that by 2028, a quarter of all compromises will be the result of a breached agent.

Speaker 1:

Well, on the one hand, usually Gartner's predictions are wrong, so maybe this one will be wrong as well. Let's put a positive spin on it. But it also makes sense, right? Because if we get more AI agents, there are going to be more breaches of AI agents, just like there are going to be more breaches of human employees. So it's also sort of kicking in an open door, I'm not sure if that translates well, but for me at least, that makes sense, right? Even though we don't want it, I think it's inescapable.

Speaker 2:

What you're touching upon there, though, is a fascinating aspect for me, staring at where we are today in the climate of AI. We've transitioned: last year, AI was very much the buzzword; this year, AI has really moved into the battlefield. We're very much seeing the potential for greatness, and as we've been working with customers, we want to make sure they're securing those agents as they build them. But it's also been interesting to see the advances in how threat actors are using GenAI over the past 12 months. I was sitting here a year ago and we just had a few examples of deepfake voicemails from CEOs and CFOs. It was interesting to see that, whilst we were experimenting, threat actors were also experimenting with GenAI. But look at where we've landed today, really in two areas.

Speaker 2:

A key aspect of how we've been looking at this at Okta is that we've invested really heavily over the past 12 months in threat research capabilities. At RSA, we've announced the introduction of the Okta Threat Intelligence group, and we published two pieces of research: one specifically around a phishing campaign from Poison Seed, and another related to the fake IT contractors campaign that has persisted over the past 12 months, targeting US-based companies. What we've been able to see from our vantage point, looking at both of these, is that, one, phishing campaigns still work. They're highly effective, and they're only getting more effective. With the phishing lures being used, GenAI is improving the language to ensure the right tone and style. So the old Nigerian prince with the badly worded letter?

Speaker 2:

That's gone. The phishing awareness training you would use, be mindful of poor spelling and incorrect grammar, you can't use those tells anymore. It even goes to the extent that GenAI is able to read your LinkedIn profile, see who you're connected to, and then create lures that appear to come from people you know and are close to. So we're starting to see more of that.

Speaker 1:

Not only that, right? A lot of the software you're using, all the SaaS platforms and all the copilots and what have you, they do all that integration for you as well. So if attackers get into one piece of that, they'll have access to everything connected to it as well.

Speaker 2:

And they're making it much more simple for that to happen.

Speaker 1:

So on the one hand, AI and GenAI and agentic AI and all that stuff is much more complex, but on the other hand, it's maybe easier for attackers to get to more places in your organization, because everything is probably connected after a while.

Speaker 2:

Those connection points do present new areas of risk.

Speaker 1:

So, looking toward the future: you started this chat with the 80%, I think it was, of breaches caused by compromised identity. Do you see that going down? Because if we can get the human part to zero, which is probably not going to happen anytime soon, and we're left with a quarter for agents, then that's actually a positive outcome, right?

Speaker 2:

It is. I think about the very brittle nature of internet technologies today, the way everything is integrated with these tokens, the ease with which these tokens can fall into the hands of threat actors, who can then use them to access applications and data. So the controls around identity really need to ensure we have clear visibility over where these tokens exist, and that starts with security posture management: making sure you have an identity-focused security posture management product that can tell you where these accounts are, where these tokens live.

Speaker 1:

And especially also where the APIs are and how many APIs you have. When you talk tokens, you usually talk about APIs as well. Insight into how many APIs you have and where they are is still a very big black hole for many companies.

Speaker 2:

So where I see the industry moving is that we're thinking more deeply about this construct, this weakness. Look at the great work Google is doing around its DBSC protocol, Device Bound Session Credentials: the tokens they issue to humans are cryptographically bound to the device, so if a threat actor manages to get hold of a token, it is worthless; they can't use it for any purpose. We're starting to see the introduction of more and more technologies focused on securing this foundational element, given the very brittle nature of the internet, where these tokens can be misused so easily. That's where I think the advances that are coming through are really going to change the way we think about this.
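The core idea behind DBSC, that a session token stolen off the device is worthless because every use must be backed by a proof of possession, can be sketched as a challenge-response check. Real DBSC uses an asymmetric key pair held in hardware (e.g. a TPM); this simplified stdlib sketch substitutes a symmetric binding key that the "device" keeps to itself after registration.

```python
import hashlib
import hmac
import os

class Server:
    """Tracks which binding key each session was issued against and
    challenges the token holder to prove possession of that key."""

    def __init__(self):
        self.bindings = {}  # session_id -> device binding key

    def start_session(self, session_id, binding_key):
        # In real DBSC the server would store a public key; here, a shared
        # secret registered once at session start stands in for it.
        self.bindings[session_id] = binding_key

    def challenge(self):
        return os.urandom(16)  # fresh nonce per check, so replays fail

    def verify(self, session_id, challenge, response):
        key = self.bindings.get(session_id)
        if key is None:
            return False
        expected = hmac.new(key, challenge, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, response)

# Device side: the binding key never leaves the device.
device_key = os.urandom(32)
server = Server()
server.start_session("sess-1", device_key)

# The legitimate device answers the challenge with its key...
c = server.challenge()
ok = server.verify("sess-1", c,
                   hmac.new(device_key, c, hashlib.sha256).hexdigest())

# ...but a thief who exfiltrated only the session token has no binding key.
stolen_attempt = server.verify("sess-1", c,
                               hmac.new(os.urandom(32), c,
                                        hashlib.sha256).hexdigest())
```

The sketch shows why token theft alone stops working: possession of `"sess-1"` means nothing without the key that stayed on the device.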

Speaker 1:

So, in terms of predicting the future, which is quite hard, especially in this era and this area as well: you are optimistic, in that we're starting to be able to get a grip on this?

Speaker 2:

I'm extremely optimistic, but I'm also focused on all of our customers, who are very fatigued from having to continually re-engineer their security internally on the basis of these types of attacks.

Speaker 1:

Well, and also, and this is sort of a criticism I have of the security industry, there's the overselling of stuff. I mean, MFA once had the name of being, well, the last thing you'd ever have to do, because MFA is so secure. That was the spin a lot of companies put on it, and then about six months later it appears that MFA is not 100% secure either. And I think that's also part of the problem with the security industry in general: customers get confused and fed up with re-architecting their security stack every now and again, and then they may get sort of defeatist.

Speaker 2:

I would broaden what you just described: it applies not just to the security industry, but to the SaaS ecosystem in its entirety. If I think about it as a CISO, when I go to buy a product, I expect certain fundamental security components. I expect support for single sign-on, support for MFA, some sort of RESTful API, and decent logs. Those are the fundamental four things we've been expecting now for many years, and the world has changed since then. I expect more than that.

Speaker 2:

I expect that you support SCIM, so that I can automatically create and remove accounts. I expect that you implement some of the newer protocols, like the Shared Signals Framework, so that Okta can connect directly to your SaaS application and log someone out, without relying on the customer having to do that themselves. If I see malware on the device, or some other signal tells me we can no longer trust that endpoint, I need you to implement that. I also need you to implement the ability for us to gather logs on a real-time basis, and not just rely upon customers having to do that on our behalf.
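The Shared Signals Framework mentioned here carries signals like "log this person out" as Security Event Tokens (RFC 8417) with CAEP event types. A sketch of assembling a session-revoked event payload; in production the JSON would be signed as a JWT and exchanged over SSF transmitter/receiver endpoints, and the issuer, audience, and session id values below are invented for illustration.

```python
import time
import uuid

# CAEP event type URI for a revoked session, as defined by the OpenID
# Shared Signals / CAEP specifications.
SESSION_REVOKED = (
    "https://schemas.openid.net/secevent/caep/event-type/session-revoked"
)

def session_revoked_event(issuer, audience, session_subject):
    """Builds the claims of a CAEP 'session revoked' Security Event Token."""
    now = int(time.time())
    return {
        "iss": issuer,              # who is transmitting the signal
        "aud": audience,            # the SaaS app that should act on it
        "iat": now,
        "jti": uuid.uuid4().hex,    # unique id so receivers can deduplicate
        "events": {
            SESSION_REVOKED: {
                "subject": session_subject,
                "event_timestamp": now,
            }
        },
    }

evt = session_revoked_event(
    issuer="https://idp.example.com",    # hypothetical identity provider
    audience="https://app.example.com",  # hypothetical SaaS receiver
    session_subject={"format": "opaque", "id": "sess-42"},
)
```

A receiving application that supports SSF would look up session `sess-42`, terminate it, and thereby honor the "endpoint no longer trusted" signal without any customer-built glue.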

Speaker 2:

When I think about the future of SaaS applications, we know what that future needs to look like, and that's part of what we've been trying to drive with the OpenID standard around an interoperability profile for secure identity, really trying to push the industry harder.

Speaker 1:

That's one of the hardest things to do, right, getting everybody in the industry to sign up for a standard. You know the old meme: oh, I'm going to build a new standard; we already have 50, now we have 51. So this, for me, is the opportunity.

Speaker 2:

I totally agree; it's sadly very much the case in so many examples. But when I'm at RSA, I get the opportunity to talk to many of my peers, and that's where I actually think, from a grassroots perspective, that the CISOs buying these products need to be more consistent in their push for better security from their SaaS application vendors. Because when I talk to product managers at other companies and ask them: are people asking for this, are they asking for SSF, are they asking for SCIM? The answer is no, no one's asking for it. And that, I think, is the really big challenge: they're not going to build it unless someone asks them to.

Speaker 1:

I see some hopeful developments. This week, what springs to mind is the partnership between Cisco and ServiceNow, for example: they've made their product teams work closer together. Whatever that means in practice, we'll have to see. But if the product teams of SaaS organizations become more mindful of what it means to incorporate security protocols and standards, and the other side also understands the impact that has on a SaaS company, I think there are some hopeful signs. We're not there yet, and I think it's still a long way off, but that's my perspective. I'm not sure if you share it.

Speaker 2:

I'm optimistic for the future.

Speaker 1:

Well, that's always good. All right, I think we're out of time; we're way over already, and I'm going to be late for my next meeting. I'm always late for meetings now, it's part of the game.

Speaker 2:

Well, thank you.

Speaker 1:

Thanks for joining us, and I look forward to catching up with you soon. Sounds great, thanks for your time. All right, thank you.