Techzine Talks on Tour

Rise of AI transforms the CISO's role: from technical to strategic input

Coen or Sander Season 2 Episode 13

AI has an interesting side-effect for the role of the CISO, says Zscaler CISO Sam Curry when we sit down with him during the RSAC 2025 Conference in San Francisco earlier this month. CISOs' input is becoming more important. That's good news for CISOs. Or is it?

Security leaders have long been judged and consulted along the technical axis. The rapid rise of AI means that dynamic had to change, Curry says. AI has had, and still has, an effect on cybersecurity in general that we haven't seen before, according to him. CISOs contribute to strategic discussions more and more. In fact, they're now among the first to be consulted.

Besides discussing the role of the CISO in 2025 and beyond, we draw parallels with the challenges and pitfalls that beset organizations during the cloud boom, and discuss what those lessons imply for the AI era.

There are still so many things to look at from a security perspective when it comes to AI. That became clear again from Zscaler's survey, which showed that AI was simultaneously 2024's most adopted technology (growing over 600%) and most blocked technology. This paradox highlights the complex risk-reward calculations security leaders must navigate.

The last big topic we discuss is more forward-looking: what are the implications of the quantum era for how organizations need to look at cybersecurity? Post-quantum cryptography should be on organizations' radar. Is that the case, though? What can organizations do now to prepare? And do they even have the resources to tackle this while still dealing with the current big wave of challenges?

Listen to this new episode of Techzine Talks and let us know what you think.

Speaker 1:

Welcome to this episode of Techzine Talks on Tour. With me today is Sam Curry. You're the CISO for Zscaler? Yep. Thanks for joining us. Thanks for having me. Interesting stuff. Good to be here, hopefully.

Speaker 2:

We're not there yet. We'll see. Put me in the house.

Speaker 1:

So I take it you've been at the event for a couple of days already? Oh yeah. Did you get a chance to look around and see what's happening?

Speaker 2:

I try to avoid the show floor as much as possible, but yes, I have.

Speaker 1:

I completely understand why you would want to do that. I do that too, by the way. I've been coming to all but four of these in the history of the show. Yeah, because you had something to do with them? Yeah, so I was on the program committee, I was also the head of RSA Labs at MIT, and I was the CTO for RSA for seven years.

Speaker 2:

But RSA, the conference now, is completely separate.

Speaker 1:

Yeah, and they're very adamant that we don't call them RSA anymore. We need to call them RSAC; that's the name now.

Speaker 2:

RSA Conference? No, no, no. RSAC.

Speaker 1:

It's the RSAC Conference now. Oh really? Because last year it was the RSA Conference; you had to call them that?

Speaker 2:

Why does that feel like it'll be RSAC squared or something at some point?

Speaker 1:

I can see that happening.

Speaker 2:

We do weird things.

Speaker 1:

Because this is very confusing: RSAC Conference. So now you actually have to say the RSAC 2025 Conference. But let's not talk about nomenclature and all that stuff.

Speaker 2:

We just did, yes. We'll avoid it now.

Speaker 1:

But did you get any impressions of what's happening at the conference?

Speaker 2:

There was more energy than I expected; there was a vibrancy to the conference. In some ways, this is the reunion show for our industry. I haven't seen the final headcount tally of how many people are here, but it's good to see the industry in person again. So much of it is remote and online, and it's good to see the startups. It's good to see the innovation happening, and I feel like we are more relevant than ever as an industry. Whether that's reflected in the conference or not remains to be seen. It is extremely expensive to present here and to be present here, but the real show happens around the show, in a sense.

Speaker 1:

Well, I have most of my meetings outside of the show. We're not at the show here either, right, where we're having this conversation. So that's how these things go. But let's talk about things in more depth, because we could talk about the show for 15 minutes, and nobody wants to hear that. Let's get to the real stuff: what keeps you up at night as a CISO?

Speaker 2:

For the most part, what keeps me up at night is just, you know, the normal pace of business, and my family, right? After a certain while...

Speaker 2:

I mean, being a CISO is about acceptable risk for acceptable return, and the word acceptable is key there, right? So you get it to a manageable level, and then we tend to live with risk as CISOs. That's what we're supposed to do for our companies, like our chief legal counsels and our CFOs do, right? That's a normal business thing. And I think what I'm seeing now is that CISOs are reporting higher in the organization and they're being seen as a business function. That was not true five years ago; it was certainly not true 10 years ago. So this is an encouraging sign. That's a good development.

Speaker 1:

Right, yeah. They get more clout, they get more weight in the discussion.

Speaker 2:

It's not that we deserved it; we had to earn it. The biggest problem in cyber has always been, in my opinion, the gap between our function and the business. The fact that the board now cares, that it is meaningful to the core business, that we're seen as a business and risk function and not just an IT function: that's a breakthrough.

Speaker 1:

Is it possible to translate that risk into business terms?

Speaker 2:

It absolutely is. It doesn't mean we're doing it perfectly yet, it doesn't mean we've solved it, but I'm seeing the motions, I'm seeing the integration.

Speaker 1:

That's a good sign, because sometimes it's quite hard to make the business case from a CISO perspective, especially when you're asking for a number, and certainly when you talk about things like quantum and AI, which we'll come to.

Speaker 2:

I think the AI topic was the first time we were asked for our opinion when a new disruptor came onto the scene. In fact, many of us were asked first. That typically didn't happen.

Speaker 1:

You didn't get that when cloud came up, for example.

Speaker 2:

No. I mean, I remember when cloud came up and whatnot. Those decisions were being made by many C-levels, but it wasn't the CISO that was asked first. We were fighting to be at the table. By the time I was at EMC and RSA, security was a topic, but the CISO wasn't the first person that people went to. With AI, you can go to a room full of CISOs and ask how many of them were asked first, and the hands shoot up. Why do you think that is? I think it has to do with the fact that folks immediately knew that there was something that wasn't understood, and it had to do with the sharing of data and information; that we had reached a point, societally, where they understood that we need to decide if we should be sharing information like this.

Speaker 1:

That's also the complexity of the entire thing.

Speaker 2:

Oh, very much so, yeah. We did a survey here at Zscaler and found that in 2024, ChatGPT, for instance, was the most adopted technology; I think it grew over 600%. It was also the most blocked, concurrently. Both were true, and that says a lot, right? And when you surveyed CISOs, 97% said that LLMs and gen AI were the most important technology, but over 50% said they were also the most risky technology, concurrently.

Speaker 1:

Do you see that businesses have learned anything from the cloud days? Because I see, to a certain extent, they're making the same mistakes: going in headfirst way too quickly and then realizing after a year or two that maybe it wasn't the smartest thing to do.

Speaker 2:

There's a science fiction author called Cory Doctorow who said that there's always a bubble effect; the question is if it leaves any value behind. In fact, it's only interesting if there's a bubble effect, so you expect to see this rush in. Enron was a bubble that left no value behind. The dot-com era was a bubble that left value behind. And this is perhaps the biggest bubble, and it is actually creating value. So you should expect there to be people rushing in.

Speaker 1:

Yeah, and we shouldn't expect them to actually learn from past mistakes?

Speaker 2:

Not in that way. And in fact, if you don't rush in, you will probably miss out to some degree, is my guess.

Speaker 1:

But there is the other way around it: ideally you would rush in with something that's already secure. Yeah, but that never happens, does it? No. You see the same happening now with MCP, for example. I would argue that's the opposite of zero trust, but everybody starts using it before we're actually thinking about securing it.

Speaker 2:

The fact is that anything that's really interesting is going to hit us in a way that we didn't expect. If we had time to slowly accommodate it and to think about how to fit it in, it probably wouldn't be that interesting. It's the fact that it's interesting that drives innovation, and to some degree we are in business to take risks. You rarely get a nice, acceptable pace of risk, right? And the Harvard Business Review ran an article that claimed that even a month or two of being slower or faster in adopting AI had a material impact, in all industries, on whether a company would be more or less successful than competitors. So the question wasn't: do you accept it? It was: how do you accept it?

Speaker 2:

And so it's a matter of: okay, yes, you can do AI, but now we've got to find the means to do it. You've got to say: you can do it in this way, you can use it in these ways, and you have to establish the right governance for it. And then, by the way, I think you should be establishing two things. The first is governance for all disruptors. You mentioned quantum computing. Well, you should be getting ahead of that now. As you and I were discussing, it's a bit like the sword of Damocles: it hangs over you and you're not sure when it's actually going to fall. You need to be preparing for it now, and for all disruptors. There are more, by the way: there's nanotechnology, there's synthetic manufacturing, synthetic biology. There's a long list of potential disruptors that are really going to change society, and you have to get ready for them. The other thing is that you should generally have a zero-trust approach, by which I mean you should be minimizing your attack surface and the trust in your environment, so that the impact of these disruptors is smaller.

Speaker 1:

Sometimes technology is moving at such a fast clip, such a fast pace that it's just….

Speaker 2:

So why are we making these huge... You can support complexity without it being complicated, so we should be building the simplest infrastructure that gives us flexibility and supports complexity. That means, by default, you should have least privilege, of course, but you should have least functionality as well, so that you can interpret what the impact of these disruptors is when they come in. That's something, by and large, people aren't thinking of. Instead, what do we do? We build this massive infrastructure, and then we put another layer on it, and another layer, and another layer.

Speaker 1:

That's the issue.

Speaker 2:

Right, right. So zero trust isn't another layer added to what you have.

Speaker 1:

No. It is in fact rethinking the infrastructure and transforming it, but also accepting what's there as well, to a certain extent.

Speaker 2:

Yeah, but I believe in taking advancement in baby steps, incremental improvement. And you can: as long as you set the goals right and you set it up correctly, you can get almost anywhere.

Speaker 1:

Well, yeah, but that's a big condition. I mean, you need everything to be set up right. And I think that's the basic issue that lots of organizations have, right? Maybe we trust too much in technology.

Speaker 2:

Yeah, and yet the biggest problems are cultural.

Speaker 1:

Yeah, and how do you get there? I mean, it's very easy to say: look, before you do anything, you need to establish governance and do all the business and organizational stuff. But I get the impression lots of organizations have no clue how to do this.

Speaker 2:

None, no. So I usually advise people to think about things on four levels, and I'm coming at it from a cyber perspective. All these levels are related. The first is the top level of an organization: you should be having an ethical discussion, right? The board should be thinking: do we do these things? Should we do these things? Below that, the C-level should be thinking: what's the business impact? How do we approach this from a logistical perspective?

Speaker 2:

Let's take AI. At the top level: what are we worried about from an ethics perspective? Sustainability, diversity, whatever your core values are. And the business says: okay, so how does this impact our core business? The next level, then, is R&D: how does R&D actually engage in this stuff, and how does that work? And then the fourth level is IT and cybersecurity: how do we make those policies such that we can say which departments engage in what kinds of, say, AI? And that's not that hard to do, by the way. We're talking about a few committees that meet occasionally, and when you do it right, it's a bit like the review boards in the medical space. You can then speed up your adoption of new things. Because AI is changing so quickly, you should be meeting about once a quarter to discuss what your advances are.
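To make that fourth level concrete: once the committees have decided, the policy itself can stay small. Below is a minimal policy-as-code sketch in Python; the department names, AI-use categories and the default-deny rule are illustrative assumptions, not a description of Zscaler's actual setup.

```python
# Minimal sketch: committee decisions about "which departments engage in
# what kinds of AI" expressed as a default-deny policy table.
# All department names and AI-use categories here are hypothetical.

AI_POLICY: dict[str, set[str]] = {
    "engineering": {"code-assist", "internal-llm"},
    "marketing": {"content-generation", "internal-llm"},
    "finance": {"internal-llm"},  # e.g. no external sharing of financial data
}

def is_allowed(department: str, ai_use: str) -> bool:
    """Check a requested AI use against the policy; unknown means denied."""
    return ai_use in AI_POLICY.get(department, set())

# Example: engineering may use code assistants, finance may not.
assert is_allowed("engineering", "code-assist")
assert not is_allowed("finance", "code-assist")
```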

Speaker 1:

But it's also about establishing more and deeper connections between teams, right? Teamwork, as it was for a long time, meant the work of a team; now it should become the work of multiple teams together, on the same issues and on the same projects. And I think that's fundamental. If you look at the tech companies that offer platforms to do that, they're not as big as you would expect them to be, right? And they're still integrating stuff into each other themselves.

Speaker 2:

It's still not... Well, we're not yet at the point, like we were with the cloud, where we got to a few providers of public cloud. Because, if you remember, there was an interstitial phase of private cloud.

Speaker 1:

Yeah, yeah, and hybrid cloud. And you had all the big companies also offering cloud, Vblocks and things. And you had the HPs and the Ciscos that wanted to do public clouds as well, and then they thought: let's not do that, and they stopped doing it again. And I think, in terms of organizing your company in a way that you can actually deal with all these big, fast-changing things...

Speaker 2:

So let's not assume, by the way, that the evolution of any of these things is going to go similarly to how that went. That's not the lesson. The lesson should have been a post-facto analysis that said: when something like cloud comes along, how do we analyze it from those four perspectives? And that's not what most people took away. Now they're looking back and going: I guess we need a private AI and a hybrid AI, and let's wait until three players emerge. That's actually not the answer.

Speaker 1:

No, but that's the learning from past mistakes thing again.

Speaker 2:

Right. I mean, you can make some fairly trivial observations about past trends and then just sound like a guru when you predict them for the future. Or you can go deeper and say: well, how do we take advantage and surf the next wave instead of getting run over by it?

Speaker 1:

Coming back to the bubble effect briefly: it's also about the size of the bubble, right? Always, yeah. I think the AI bubble is substantial. It's a biggie.

Speaker 2:

And it's not over.

Speaker 1:

No, and it's not over. I'm kind of guessing that the quantum one is going to be even bigger. Yeah, I think so. But is there such a thing as too big a bubble? Well, yeah. I mean, we had something in my country, the Netherlands. This may sound a bit weird. Yeah, the tulip bubble. I knew you were going to go there. In the 17th century, right?

Speaker 2:

Tulip bulbs were extremely... Did it leave value behind, is the question.

Speaker 1:

I don't know, I wasn't there in the 17th century. No, no, I mean, that was our...

Speaker 2:

There was also the South Sea bubble, and there were others like it. Sometimes it takes time, because not every bubble is the same, right? Some of them are useful, some of them aren't, and it's very hard to tell.

Speaker 1:

I don't think the tulip one was long-lasting, no. It really affected the Dutch economy, yeah. On the other hand, we still deliver tulips to the Vatican. And to my native Canada; they still arrive in Ottawa.

Speaker 1:

Maybe that's the takeaway. There's the value: there's a very long tail to the bubble, right? All kidding aside, I want to touch on the quantum bubble here, because that is something the security industry has been preparing for, for at least a decade already, if not longer.

Speaker 2:

Well, preparing? I don't know.

Speaker 1:

The preparations are finally happening.

Speaker 2:

Yeah. A good colleague of mine, Jaroslav Rosomacho, he's here at Zscaler, is part of the IETF group. They're starting to think about how to do this properly with TLS, they're starting to work through the RFCs, and you see standards coming out that are now NIST-endorsed, for instance: new ways of doing what we call quantum-resilient crypto. However, most companies aren't ready for it.

Speaker 1:

I actually had a chat with somebody this week about this, and he said: look, one of my former clients, or current client, I don't know, a big bank in the US, has 30 million endpoints, divided over all sorts of things. Making them PQC-ready is going to take 10 years, was his estimation. To me that sounds like a long time. If he does it alone, absolutely, yeah.

Speaker 2:

But if TLS advances, and operating systems advance, and libraries advance... Please, keep going.

Speaker 1:

Yeah.

Speaker 1:

So if they estimate it takes ten years, and we're in 2025 now, right? And there's the sword of Damocles you mentioned: quantum is not going to be around properly until 2030, 2040, but it could be sooner too. It's very tempting to say: well, we'll just not do it yet. But if you look at this example, where he thinks it will take 10 years, then it might be time to... yeah.

Speaker 2:

So this is one of those things where it's a very large project, which means I don't actually trust the estimates, because it won't take that long when it actually comes to it. If you were to start it now at a relaxed pace, yes. However... Forget the quantum part, take that out. If you had to update your crypto right now, how long would it take you?

Speaker 1:

A very long time, yeah. And then, just to be clear, by crypto you mean cryptography, right? Okay.

Speaker 2:

So if you had to update your cryptographic libraries, yeah, it would take a long time. So what people should be doing is breaking that into tractable projects right now and embedding that in other work. Do an inventory of where your crypto modules are. Understand the size of the problem; I bet he hasn't done that yet. And then say: in T-shirt sizing, how do I take it from an XXXL project down to an XL, and how do I bring it down to an L? How do I make it so it's a swappable module, so that I know where all my cryptography is and I know how to swap it? How do I monitor the readiness of cryptographic standards?

Speaker 2:

We can already... It's not just Shor's algorithm. We can already understand what the attack vectors will be when the technology arrives. So: what am I going to do first? All of that can be planned now, and you can get PQC-ready now, and it doesn't have to be hugely expensive. You can take it from a 10-year project down to a one-year project.
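As an illustration of that first inventory step, here is a minimal sketch in Python that scans a codebase for cryptographic usage. The patterns and categories are illustrative assumptions; a real inventory would also have to cover binaries, certificates, protocols and third-party dependencies.

```python
# Minimal sketch of a crypto inventory scan, a possible starting point for
# the T-shirt-sizing exercise described above. The watchlist below is
# illustrative, not exhaustive.
import re
from pathlib import Path

# Hypothetical watchlist: primitives and libraries worth flagging for PQC review.
CRYPTO_PATTERNS = {
    "rsa": re.compile(r"\bRSA\b", re.IGNORECASE),
    "ecdsa/ecdh": re.compile(r"\bECDSA\b|\bECDH\b"),
    "tls": re.compile(r"\bssl\b|\bTLS\b"),
    "crypto-libs": re.compile(r"from cryptography|import OpenSSL|import Crypto"),
}

def inventory(repo_root: str) -> dict[str, list[str]]:
    """Map each crypto category to the source files that mention it."""
    hits: dict[str, list[str]] = {name: [] for name in CRYPTO_PATTERNS}
    for path in Path(repo_root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        for name, pattern in CRYPTO_PATTERNS.items():
            if pattern.search(text):
                hits[name].append(str(path))
    return hits

if __name__ == "__main__":
    for category, files in inventory(".").items():
        print(f"{category}: {len(files)} file(s)")
```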

Speaker 1:

But one of the things that I think also plays a role here is the high degree of uncertainty for a lot of people. On the one hand, you can take a positive spin on that, right? Maybe in a couple of years' time we will have technology that will enable us to do this much faster than we think.

Speaker 2:

You can even use AI to help, by the way. You can use AI to help with the modularization of code. You can use it to help do inventories.

Speaker 1:

But if you put a negative spin on this, you could also say: look, this is a reason to overestimate the time it will take and to get down in the doldrums a little bit. "I don't feel we're going to do this very well," or whatever. That's not the right approach.

Speaker 2:

Part of it is that when we talk about future disasters for which we don't yet really have a time frame, it can either be extremely depressing or seem unreal. But what we can do instead is really get specific about it. This is what you do if you build those committees that deal with disruptors. And let's be clear about cryptography and post-quantum cryptography: there are three dimensions to this that I would care about. I care about cryptanalysis: what's it going to do to the confidentiality, integrity, availability and privacy of what we're doing? I care about it as a long-term boost to capacity: how is it going to potentially improve things like cryptographic capability, and particularly cryptologic capability? And then I also care about how it might help my business, in the same way that I would care about AI.

Speaker 2:

But right now, the most immediate risk is the first, and so it behooves us to go through and say: let's do an inventory of where this stuff is. Let's make sure we understand the magnitude of the project. You probably have to update things like random number generators and cryptographic algorithms long before PQC comes. Things like RSA key lengths probably have to be longer soon, because just plain old vanilla computing can break them soon. So that kind of conversation is worth having now.
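As a small, concrete instance of that conversation, here is a sketch that checks the RSA key length of a server's certificate. The 3072-bit threshold is an assumption based on common guidance (NIST recommends moving beyond 2048-bit RSA for protection past 2030), the endpoints are placeholders, and the third-party cryptography package is required.

```python
# Minimal sketch of an RSA key-length audit for public endpoints.
# MIN_RSA_BITS is an assumed policy threshold, not an official cut-off.
import ssl

from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import rsa

MIN_RSA_BITS = 3072  # adjust to your own risk appetite

def audit_endpoint(host: str, port: int = 443) -> str:
    """Fetch an endpoint's leaf certificate and flag short RSA keys."""
    pem = ssl.get_server_certificate((host, port))
    cert = x509.load_pem_x509_certificate(pem.encode())
    key = cert.public_key()
    if isinstance(key, rsa.RSAPublicKey):
        status = "OK" if key.key_size >= MIN_RSA_BITS else "TOO SHORT"
        return f"{host}: RSA-{key.key_size} [{status}]"
    return f"{host}: non-RSA key ({type(key).__name__}), out of scope here"

if __name__ == "__main__":
    for host in ["example.com", "example.org"]:  # hypothetical endpoints
        print(audit_endpoint(host))
```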

Speaker 1:

Do you think it's possible to make the quantum business case already as well?

Speaker 2:

If you think of it as one huge quantum business case? No. If you think of it as breaking it into smaller parts, many of those cases can be made now, just as the case for PQC readiness can be made now.

Speaker 1:

So if you're a CISO now and you have to get budget from the already tight budget... All budgets are always tight; nobody ever says: oh, I have such a huge budget, I can do whatever I want.

Speaker 2:

Look, it's going to depend on who you are, how big you are, what industry you're in. Certainly in some industries, absolutely. But this comes back to what we said at the start about being close to the business. We shouldn't be doing this in isolation. In your business, depending on what you do, there should be a risk registry, and if this falls in your top 10 risks, you should be looking at it and doing projects. In fact, just like any R&D department, you should probably be doing some long-term research into this stuff and carve out a few resources for it, and this is probably one of those projects.

Speaker 2:

Yeah, and then the big advice is to cut it up into small parts. Manageable parts, tractable parts. And one of the projects should be: how would we tackle it, and how do we make it a smaller project? Yeah. And, by the way, track the public work going on. There's great public standards work happening in the RFCs. You can be fully part of the dialogue.

Speaker 1:

Yeah. Well, that's good. Are you optimistic in general about where everything is going? Because especially when you're just looking at it... It's a huge question.

Speaker 2:

Yeah, I know. The answer is yes.

Speaker 1:

It's a huge answer. I'm just going to leave it there.

Speaker 2:

You sound like you're optimistic. I am, so the default answer is always yes. We have to be more specific. Am I optimistic about the industry? I was getting to that.

Speaker 1:

Your career, or... Well, no, because if you look at AI, for example, we are also creating lots of AI technical debt for the future at the moment. And because everything is so complicated, and it has to do with the bubble, by the way, you have to develop so quickly. We're not even fully mastering the first gen AI wave, and now we're getting into the agentic stuff, and agent-to-agent, and MCP, and all that. There's a huge potential for creating so much technical debt that you're going to be in so much trouble in five years. That's more or less... Are you positive about the future?

Speaker 2:

Job security! There's also security debt that you're incurring, and privacy debt. But no, I am actually very optimistic. Look, every single wave of technology that we've had, every single bubble that's come along, has felt overwhelming, like it was too much. Have you ever heard the Shepard tone? It's a sound illusion that makes it seem like the tone is always going up, always going up. We have this everywhere in high tech: threats always getting worse, always 100% worse or more, right?

Speaker 1:

I always find that interesting. I go to many events, right? Once I was at three consecutive security events in three weeks, and they all had a massive-numbers slide during the keynote: "we detect 35% of attacks that nobody else in this industry detects." And it was consistent every week. Not the 35%, but everybody said: we detect so much more than the others. After the third week I said: look, the key takeaway for me is that you're all detecting way too little.

Speaker 2:

Yeah, that's right; it's all a little slice. Usually I see those numbers and I go: well, the absolute numbers aren't right, but the relative numbers are probably right. But my optimism is that I think security thinking is now pervading more widely. I think we're potentially on the verge of a SecOps revolution, like we had a DevOps revolution, where security is not owned by just one set of people. Do you think it's going to happen?

Speaker 1:

I think so. We've been talking about bridging that gap and building the bridges for 10 years already.

Speaker 2:

I think AI gives us the ability to start thinking about yellow teaming, which is the ability to build tools for red and blue teams. It's the ability to not have to think about the tools as much, and to effectively have security people think like developers and developers be security people in situ, so we can get better foundational security. I think we really are at that point, so I'm optimistic about that. That doesn't mean that we're going to live in a safer world.

Speaker 2:

I think the bad guys are also going to have access to the tools. But it's like the Shepard tone: it makes us see the negative as we hear the pitch go up, but really it's not changing in a way that is continuously more negative. If anything, I think more people want to get into it. We're being recognized as one of the fundamental qualities not just of IT; IT is tied into human life in so many ways, and I think our relevance is clearly seen and understood.

Speaker 2:

And so I'm very optimistic about it. I'm also very encouraged when I see younger people not only wanting to get into it, but becoming the next generation. Every generation thinks the younger generation doesn't get it, but I actually mentor and teach a lot of younger folks, and not only do they get it, they're very serious about things like privacy and freedom of expression, and they're starting to realize the importance of things like democracy and politics. So I'm very encouraged to see them starting to care deeply about digital fluency and literacy, human rights and privacy rights. So, yeah, I'm optimistic.

Speaker 1:

By the way, it's good to have a mission. It's good to be needed. I've seen my kids adopt technology in a way that makes me think... Oh, I wouldn't say they don't get it.

Speaker 2:

No, they get it, and now they're starting to go wait a minute.

Speaker 1:

Do I want to keep doing this? Well, maybe, yeah. There's hope, hope, hope. I always learned that hope is sort of a disillusionment waiting to happen, right?

Speaker 2:

That's right, but that's okay. Once it pops, there are lessons, and you keep going. I think that's how it's been every generation. And yeah, we're still here, right?

Speaker 1:

So we're ending on a positive note. That's good. Now people can flame away at that. Not a Shepard tone, but a hopeful one.

Speaker 2:

That's right, not a Shepard tone.

Speaker 1:

A positive tone. On a positive tone, all right: thank you for joining us.

Speaker 2:

Thanks for doing this, it's been fun, thank you.