Techzine Talks on Tour

Cisco wants to use AI to defend AI at machine scale

Coen or Sander Season 2 Episode 10

Cisco made a swath of announcements at the RSAC 2025 Conference. Unsurprisingly, several of them had something to do with AI. We sat down with Cisco's Chief Product Officer Jeetu Patel to have a conversation about where he and Cisco think things are going.

When we spoke to Patel last year, he was quite optimistic. The use of AI in security solutions was going to mean that the defenders would, at least temporarily, get the upper hand. 

Between last year and now, a lot has happened, especially in the field of AI. The big question now is how to secure the use of AI in organizations. That's what many of the companies at RSAC Conference focused on this year. With the agentic wave of AI upon us, and new things like the Agent-to-Agent and MCP protocols that come with it, this is a very important challenge to tackle.

To us, two things stood out in Cisco's announcements this week. One was the deepening of the partnership with ServiceNow, with an integration between Cisco AI Defense and ServiceNow SecOps. The other was the launch of Foundation AI and an open-source AI security model. Patel frames the latter innovation as follows: "You don't go to your dentist for heart surgery. Domain knowledge matters in AI and models."

Apart from the topics above, we also discuss the agentic future of security itself. This is important, because it enables defenders to elevate security from human scale to machine scale. That is especially important when facing machine-scale attacks. As Patel notes, "The models are going to get smarter... and as they get smarter, you'll find a tremendous amount of autonomy injected into workflows."

This and much more in this latest episode of Techzine Talks on Tour. 

Speaker 1:

Welcome to this new episode of Techzine Talks on Tour. I'm at RSA in San Francisco, the RSAC Conference to be completely correct. I'm here with Jeetu Patel. You're the CPO at Cisco, almost a year now, right? Or...

Speaker 2:

Is it eight months? Eight, nine months, I think.

Speaker 1:

Yeah.

Speaker 2:

August.

Speaker 1:

A lot of stuff has happened since you got the role.

Speaker 2:

Thank you for having me on the show, by the way.

Speaker 1:

Yeah, and thank you for joining us. I know you have a very busy schedule, but you announced some pretty interesting stuff this week, I think, around AI and around how to secure AI. I remember last year we were here, and the gist of your message was that the tables are turning towards the defenders. But that was last year, right? And in a year, a lot has happened: the agentic wave, Agent-to-Agent, MCP and all that stuff. Do you still think that the tables have turned? Do the defenders still have an advantage over the attackers?

Speaker 2:

They still have an advantage, and I think there's a higher likelihood of a sustained advantage for the defenders now than ever before.

Speaker 1:

What do you mean by that?

Speaker 2:

I mentioned this a little bit in my keynote yesterday.

Speaker 1:

I didn't attend that one, sorry.

Speaker 2:

Oh, you didn't attend that one? Okay.

Speaker 2:

So, you know, when you start thinking about a machine-scale defense, and that machine-scale defense has the benefit and advantage of having bespoke, customized reasoning models that are trained on defense data, you inherently can have a very, very strong advantage in the way in which you tackle those attacks. So I'm very optimistic. Now, by the way, you have to balance that against the fact that we are the farthest behind in the use of AI of any industry, which is kind of interesting. Look at healthcare, manufacturing, financial services, retail: the use of AI and the transformation potential over there.

Speaker 1:

That's striking, right. Why do you think that is? Is that a trust thing?

Speaker 2:

No, I think it's two reasons: the efficacy of the AI models on security specifically is low, and the costs tend to be prohibitively high. And you don't have the skills. And in the security industry, the efficacy needs to be higher than in other sectors.

Speaker 2:

And, by the way, on the models, the way I think about this is: you don't go to your dentist for heart surgery. I wouldn't. Domain knowledge matters in AI and in models too, so you should have a model that is bespoke for security, and that model should be trained on security data.

Speaker 1:

I mean, the first question would be: why did it take so long? I know some other companies are doing it as well. I think Trend has a model out that they also open-sourced, though I believe it's slightly different. But it took quite some time to actually get there. Was it so hard to do?

Speaker 2:

It's not a matter of it being hard to do. It was not obvious that that was the thing that needed to be done. But let me take a step back, because I think it's important to understand what's going on in the industry at large. First, you and I both talk to a lot of customers, and every time I talk to a security practitioner, there are three things they highlight as challenges, almost without fail. Number one is that they are experiencing a pretty massive skill shortage.

Speaker 2:

I've been coming to RSA for four years, I think. The year before I started keynoting, five years ago, Chuck had done the keynote, and that was the core point he made in it. So for five years that's been the narrative, and it's not changing: there's a massive skill shortage. Unlike other professions, where people are worried that AI is going to take their job, security practitioners are less worried about AI taking their job; they're more worried about getting the job done without AI. So the first challenge is a skill shortage. The second big challenge they see is the inundation of alerts; alert fatigue is a huge problem, and it's very hard to decipher the signal from the noise. And the third area is sheer complexity, which you and I have been talking about for a while now: moving to a security platform rather than just a bunch of point solutions has consumed a lot of mental bandwidth and cycles from companies. So those problems have to get solved.

Speaker 2:

And then, on the other hand, you have the pace of change in the industry at large. These constraints get exacerbated by how quickly the industry is moving. So if you think about what needs to happen right now, there are two distinct dimensions where focus needs to be placed to solve these problems, or at least to offset and mitigate the risk from them. Number one, you have to secure AI itself. And number two, you have to use AI.

Speaker 1:

But that's a moving target, right? You launched AI Defense in January, I think.

Speaker 2:

Mid-January, yes. It shipped a few weeks ago.

Speaker 1:

But what we thought of as AI in January is already rather outdated.

Speaker 2:

No, the core thesis hasn't changed.

Speaker 2:

So the core thesis was that you needed to solve for three things with AI Defense to secure AI. First, you want to make sure you have visibility, because you can't protect something you can't see. Number two, these models are by definition unpredictable, and you need to create predictable outcomes out of something that's unpredictable, so you need to validate the models. Right now, a financial services company I was talking to recently said it takes them nine months to validate a model. When you talk to a model provider, it takes them seven to eight weeks. With AI Defense and its validation routines, we can do it in one to three minutes.

Speaker 1:

Yeah, that really helps.

Speaker 2:

It's a massive compression of time. And then, once you've done the validation, the third thing you have to focus on is the runtime guardrails that need to get enforced dynamically, in an automated fashion. If you do these things right and you have the infrastructure working well, and you believe that agentic workflows will take off in '25 in a pretty big way, I think there's a lot of promise for security. What we announced at the event was three things tied to this. Number one, we announced a full multi-agent agentic workflow for XDR. Number two, we announced a deep partnership with ServiceNow around AI Defense. And number three, we announced the launch of a security model that is open-weight.

Speaker 1:

Why is that so important that it's open source and open-weight? Why is that so important for Cisco?

Speaker 2:

Because you want the community to contribute. You don't want anything that's proprietary right now; you just want the community to contribute. And the cost of these things can tend to be pretty high, so what we've done is increase the efficacy dramatically while actually lowering the cost.

Speaker 1:

And it might be worthwhile for us to spend a minute on that. Lower the cost for you to run it, or lower the cost for the industry?

Speaker 2:

For the industry to run it, right. You can run this on one A100. You don't need a cluster of 32 H100 GPUs; you can run it on one A100. That means the cost curve goes down precipitously, and when the cost curve goes down, you can download the model and, theoretically, run it on a very high-powered laptop.

Speaker 1:

Do you see any challenges in keeping this model fresh? Because, moving forward, obviously things are going to happen. You may need to update your tokens, or whatever you want to do with it. Do you see any challenges in that?

Speaker 2:

So, the reason we don't is that we're thinking about this in a very methodical way. The reason we didn't launch this six months ago is that I didn't believe we could have built an AI research team of the caliber and quality that we've actually built now.

Speaker 1:

And that's in large part due to Robust Intelligence?

Speaker 2:

That's right, the Robust Intelligence team acted as a core nucleus. We've built a team called Foundation AI that is the AI research team, and this was the first project they launched. Our goal here is to keep enhancing the model and make sure we can keep up. By the way, this will not be the only model in cybersecurity, and it will not be the only model we use in our applications. I think there will be multiple models, and you might have a router that dynamically decides which model an agent uses for every prompt, every request and every query, because that's how this world is going to evolve.

Speaker 1:

You're going to go the MCP route as well, right?

Speaker 2:

Absolutely. And MCP is a very important introduction of a standard.

Speaker 1:

But MCP itself is inherently not secure; it's not part of the protocol, right? So that must make securing it a challenge as well, I would imagine.

Speaker 2:

So that's why model security is so important. And, by the way, if you think about where we are today, and I'm kind of jumping all over the place for a second: number one, we've talked about the three challenges. Number two, we need to secure AI and use AI for cybersecurity defenses in a way that lets you defend at machine scale, not just human scale, because the attacks are at machine scale. Number three, as part of doing the defenses the right way, you have to make sure you've got the right underlying infrastructure, with a model that everyone can use. And number four, the world is going to go agentic, so you're going to need to drive very, very specific automation of workflows that are agentic in nature.

Speaker 2:

Those workflows can go multi-agent, which means that when you farm out a job, there might be multiple agents that each take a portion of the job. They'll go out and conduct those pieces. They will coordinate with each other. They might disagree with each other, and when they do, there'll be an orchestrator agent that says: resolve your differences and come back to me with a recommendation. They come back with a recommendation, and then the orchestrator might take that recommendation and provide it to a human to ask: do you agree? So there's a human in the loop. That is superintelligent security, in my mind.

Speaker 1:

What do organizations need to do to prepare for using this? Because if the fundamentals aren't in order, if, for example, they don't know how many APIs they have, how much of the heavy lifting are you doing?

Speaker 2:

Our goal is that we would do a lot of the heavy lifting.

Speaker 1:

I would imagine, right. And how far along are you at the moment? What do companies have to do to prepare for actually deploying your solution?

Speaker 2:

For example, if you buy XDR: in the next couple of months, what we announced will be GA, and you will literally have a multi-agent agentic workflow baked into XDR. It will look at the alerts coming in, ingest those alerts as input, reason over them, figure out with confidence whether an alert has a severity level high enough to pay attention to, make sure an investigation gets started, conclude the investigation, pull data from other sources during the investigation if needed, generate a report, and then recommend a set of actions. That, to me, is now fully capable of being done autonomously.

Speaker 1:

And because of your observability angle, you can also see all the assets and all the APIs that are running, and all that stuff.

Speaker 2:

And XDR now integrates with Splunk. I think this whole category of observing AI itself is going to be a very important category to think about as well, and you will hear more about that from us at Cisco Live.

Speaker 1:

I'll be there too. Spoiler alert.

Speaker 1:

But there are limits to what you want to do, or what you can do, probably. Because I want to talk about the ServiceNow partnership. I think that's a very interesting and very necessary partnership; I wrote a story on it yesterday. I always like to say that larger organizations have a short list of vendors for the types of services and products they need, and companies like Cisco and ServiceNow are always in the top five. So having two of those join forces and do stuff together is potentially extremely powerful.

Speaker 2:

You know, if you just think about the role of the partnerships we've announced: we announced two very distinct partnerships with NVIDIA. Our silicon is part of their reference architecture, our switching is part of their reference architecture, and we announced Secure AI Factory, which has our entire stack in their reference architecture. So we announced NVIDIA, and we announced ServiceNow. I think you will continue to see an ecosystem play from us because, frankly, I feel it's a very myopic mentality to think that it's a zero-sum game. I just don't care about competition. I care about making sure that we are doing what's right by the customer and what's right by the industry, and I think the competition typically resolves itself, because the market is big enough for all of us.

Speaker 1:

And there's potentially a huge potential in the partnership with ServiceNow as well, right? Because the first thing you announce now is AI Defense and ServiceNow SecOps, but you can think of a hundred more things, in terms of AIOps or whatever you want to do with all the telemetry.

Speaker 2:

There was never a shortage of ideas. The shortage was: can we get the right level of commitment from both engineering teams, who get excited about something and say, you know what, we're going to commit to it. Amit and I are good friends; he's their Chief Product Officer, so we've talked about this many times. But when we shared this idea with the teams, the interesting thing was that they jumped on it and then, within a very short time period, came back to us and said: hey, here's what we think we should do.

Speaker 1:

Yeah, because the official press release said that the teams are working closer together than they used to. What does that mean?

Speaker 2:

That means that two engineering teams are collaborating to make sure that one plus one equals eleven. I think that doesn't always happen in these partnerships, and what we are trying to do is get a very, very close engineering-to-engineering interlock. With NVIDIA, the same thing. With ServiceNow, the same thing. With Microsoft, when we did the partnership with them, the same thing.

Speaker 1:

Will you be selective in these partnerships? Because I can imagine that, even though I said companies like Cisco and ServiceNow are in the top five on the short list of things to consider, not everybody will have both.

Speaker 2:

I think it takes two to tango. We're going to be very open to a partnership where the use case makes sense and it's customer-driven. So you start from the customer and work backwards, and if the value is there for the customer, for the use case, we should not refrain from the partnership because we're worried it might affect our competitive position. Because, the way I think about it, if the customer is invested in vendor A and invested in vendor B, it is the responsibility of both those vendors to work well together, so they can enhance the investment that the customer has made.

Speaker 2:

Yeah, but that's not a mindset that typically exists. This is a change, but it's a change that I think the SaaS ecosystem actually drove quite a bit, so it's not a new phenomenon at this point. With cybersecurity and what's happening there, this is going to get even more important, because who's the true enemy? It's the adversary. All these CEOs of all these companies, I have a lot of respect for them. They're good people, trying to do the right thing. I like Nikesh at Palo Alto, I like George Kurtz, I like Jay Chaudhry. Congratulations to them on building great businesses. We should make sure that we partner if we can.

Speaker 1:

I mean, what you said earlier: the market is large enough anyway, unfortunately.

Speaker 2:

Look, we're going to be very competitive. We're not going to not be competitive. But what I do think you have to keep in mind is that it's not a zero-sum game; we can expand the market. And here's the thing about cybersecurity that's really important: up until now, security was always the impediment to every single technology adoption. You would say: well, do I want productivity or do I want security? You can't have both. Now you look at AI, and security is an accelerator and a prerequisite for adoption. Most people that don't trust AI are saying: I want to make sure it's safe before I do something.

Speaker 1:

Yeah, you know, I've been saying for years that we're not going to solve the security issue if the secure route is not also the fastest route. It has to be the fastest. And that's a great way to position it. I don't think we're completely there yet, but we are heading in the right direction.

Speaker 2:

We're heading in the right direction, but we're a long way from there. I think we have to make sure that the cybersecurity industry is accelerating adoption. AI Defense was a huge step forward from our side, and we've got to make sure that we keep compounding it.

Speaker 1:

And don't underestimate the different layers of getting the fastest route right, because you have the developers, you have network security, you have the security admins, you have all the admins in your company. They all need to be able to work in the fastest, most secure way.

Speaker 2:

I also feel like most people think: hey, this technology is going to make me irrelevant, because I have technical depth. That's actually not the problem in cybersecurity right now. The complexity is too high, the shortage is too stark and the volume of incoming attacks is too dangerous; we have to have automated augmentation for the SOC.

Speaker 1:

The only thing I am a bit worried about with AI is that we are creating complex new AI technical debt for the future. I think that's always hard to avoid, because it's part of who we are. But I am curious what it will look like ten years from now, looking back and asking: how many extra complexities did we actually create? How much technical debt?

Speaker 2:

What I will tell you, and this is a predictable trend that you can take to the bank, is the autonomy as the models get smarter and smarter. Sam Altman says this really well: this is the dumbest you'll ever see the models be. And he's right. The models are going to get smarter and smarter, and as they do, you will find a tremendous amount of autonomy getting injected into workflows and these agentic systems.

Speaker 2:

Up until now, we've been able to do single-threaded tasks: I ask a question, I get back an answer; I don't like that answer, I ask another question, I get back an answer. Now what you're doing is giving full chunks of tasks to someone else and saying: go do this. Initially it was just a question and an answer. Then it became: go automate tasks. And I think it's going to become: automate jobs. Now, with the automation of jobs, to be clear, in cybersecurity you will always need a human in the loop. I just think you'll have a lot of autonomy, and the human can go focus on the higher-order bits.

Speaker 1:

And I think that's a very interesting dynamic: keeping the human in the loop, making sure that the machine and the human understand each other to a certain extent, making sure that the machine is trained on the best knowledge of the human, and actually incorporating everything into one thing.

Speaker 2:

You know, what they typically say about the human in the loop has historically been pretty conventional wisdom. But here's one area where I think it's going to be different. The common question that gets asked is: what's the job that doesn't exist today that will exist tomorrow? I think the job that's going to exist is AI oversight, and it's going to be a massive job for humans. You will be able to operate at a very different frequency and a very different throughput level if you have an agentic workflow doing a lot of things, because you've just up-leveled yourself. Most humans right now are not seeing the ability to up-level themselves as a result of AI, and I think that's a huge opportunity.

Speaker 1:

I think the only thing we need to be careful of as an industry is that we don't create too many layers of complexity on top of it. For example, if you look at databases, you can do lots of AI stuff in databases already, but all companies still have DBAs, and they will probably have DBAs for the foreseeable future. As long as you keep building layers and layers on top of an organizational stack and a tech stack, you get to this technical debt thing we talked about earlier. That's something we need to be very careful of, I think, as Cisco, but also as an industry.

Speaker 2:

Totally agree. You have to keep it simple. And I also feel that in 2025, a very large percentage of the code will start getting written by AI agents.

Speaker 1:

Yeah, I think it's already happening, for a lot of simple forms. You're not going to write the same form 15 times, why would you? It's always the same anyway. But I think we're almost out of time, because time flies.

Speaker 2:

Oh yeah, we're way over time.

Speaker 1:

Apologies for that, but thanks again for joining. I look forward to meeting you at Cisco Live and chatting more about it.

Speaker 2:

Thank you for having me.

Speaker 2:

Yes, looking forward to it.