Techzine Talks on Tour

Agentic AI is about much more than "sprinkling LLM fairy dust"

Coen or Sander

During PegaWorld, the annual conference where Pegasystems outlines its vision and has customers share their stories, we sat down with Pega CTO Don Schuerman and talked about what agentic AI means in the fields they're active in.

The Pega CTO sees a tendency in the industry to label anything with "a sprinkle of large language model fairy dust" as agentic AI. That's not how he and Pega look at it. He talks about a more comprehensive vision: true agentic AI designs, automates, executes, and optimizes the workflows and decisions that businesses depend on. This isn't about deploying thousands of agents indiscriminately—it's about implementing a few strategic, pragmatic agents that drive substantive business outcomes.

In this episode of Techzine Talks, we discuss the various GenAI and agentic components of Pega's platform, and how they interact to make agentic AI appealing to its customers. Pega Blueprint, a workflow designer, is an important part of it, but Pega will also add agentic capabilities to its app dev environment, and will introduce an Agentic Process Fabric to tie all of it together in an orderly and, most importantly, well-governed fashion.

According to Schuerman, it's a mistake to assume that underlying architecture is irrelevant in an AI-driven world. Listen to the podcast episode to hear why he thinks that is, and to hear much more about Pega's agentic vision, but more broadly also about agentic AI in general and how this relates to deployments in more challenging environments.

Speaker 1:

Welcome to Techzine Talks. I'm with Pega CTO Don Schuerman, here at PegaWorld. It's not called PegaWorld Inspire anymore, but it still tries to be inspiring. Welcome to the show. Just like last year, I think, when we talked a lot about preparing for a future you can't entirely predict.

Speaker 2:

Yes, I remember that quite vividly. It was timely, because we're now in a future that we didn't entirely predict.

Speaker 1:

No, and I think that really shows now, because last year, when we sat down for the podcast, you couldn't have guessed where we'd be at this moment, right? I want to talk a little bit about the stuff that you announced this week, about the importance of the architecture underneath it, and also about the Process Fabric. All the good stuff. But let's first take a step back and talk about what we mean by agentic, because that's a lot of what we're going to talk about today. How do you define agentic? I think that's something a lot of listeners might have an interest in hearing.

Speaker 2:

So, you know, I think there are a couple of ways to define it, because right now a lot of the way the industry defines agentic is: anything that has a little bit of large language model attached to it is agentic now. Sort of an extension of what we were talking about.

Speaker 1:

Yeah, it's an extension.

Speaker 2:

Whatever we've got, we sprinkle a little large language model fairy dust on it and ta-da, agentic. The way I look at agentic is: agentic is an AI that is capable of designing, automating and then actually executing, and maybe even optimizing, the workflows and decisions that a business needs to run. So an agent has the reasoning capability to design a workflow. It can then actually go do things, take some actions. It can then figure out what could potentially be done better, so that it can optimize the workflow, feed that back in and actually build a bit of a loop. That's how I'd define it.

Speaker 1:

But when we talk about agentic, a lot of what I've been hearing is actually talk about a lot of agents working together, in perfect harmony hopefully, or not, and then you're going to have another discussion. But that is the case, right? The concept entails many agents. But not too many?

Speaker 2:

I think there will be agents all across our business. But as I talk to clients, many are still working through how they get GPT use past their compliance officers, or getting a couple of agents spun up. I don't think the goal should be how do we get to 1,000 agents, or 10,000 agents. I think the goal should be how do we get to a couple of really pragmatic agents that drive meaningful things for our business, meaningful impacts on the processes we run and the experiences we deliver.

Speaker 1:

Yeah. So have you defined the number of agents that you currently have in your...

Speaker 2:

In our arsenal? I think we've defined the taxonomy of agents that we have in our arsenal. The first, and perhaps most valuable, or maybe most distinctive, set of agents in our arsenal is what we're calling design agents. One of the places where I think we're underestimating the impact that agents can have is in helping businesses redesign the very way that they work, because we're entering a period where technological and business transformation are going to have to go hand in hand for businesses to actually reap the value. No business is running around looking to implement AI just so it can check off a box and go present at a conference to talk about how it implemented AI.

Speaker 1:

Even though, from the top down, sometimes there's this "we need to do something with AI" thing.

Speaker 2:

And I would argue, when I talk to CEOs and boards, that that's the wrong way to approach the problem. The right way to approach it is: we need to deliver better customer experiences, we need to be massively more efficient and automated in the work that we do, and AI is a mechanism, frankly one tool among many, that allows us to get there. But we have to be able to drive this sort of business and technology transformation hand in hand, and that means rethinking a lot of the established processes we have today, and a lot of the established strategies we have for marketing to and engaging our customers.

Speaker 1:

Yeah, but that's not necessarily easy to do, right?

Speaker 2:

It's not. In the past, that's been: we'll go hire McKinsey or Bain, and we'll spend months with consultants and analysts. Where I think there's real power in agents is to actually act as sort of the design thinkers in our business. The way we're looking at Blueprint is as a collection of design agents that are there to help a business, both the IT and the business professionals, rethink what a process needs to look like. So inside those design agents, we've got agents that can interpret legacy documentation. That's one of the things that we announced and that Karim demonstrated: can I import documentation, videos and specifications from my legacy systems and extract what's in there?

Speaker 1:

How much do you need to understand about the capabilities of those agents as a user? I actually had a chat with Karim about how much influence you have on the output of Blueprint, your workflow designer, based on what you feed it. If you feed it more, does it create a different output, and does it also help you build the app based on that quicker? I think that is something that you need to be aware of as a user, right?

Speaker 2:

I think it's something you need to be aware of as a user. But I think one of the reasons why I like, and why our clients have really been drawn to, this idea of the design agent is that I'm channeling the creativity, and in some cases the randomness and unpredictability, of agentic and large language models, and that's actually fine. If I ask a design agent to design a business process for me and it gives me one answer, and I turn around and ask it the same question again and it gives me a different answer, well, if the whole purpose of me interacting with that design agent is to brainstorm new ways of doing this business process, that's actually a great thing.

Speaker 1:

But down the line, you want to have predictable AI.

Speaker 2:

I do, and what I then want to do is: once that agent has suggested a couple of approaches for a workflow, as a human I might go in and do some tweaks myself. I might say, yeah, that's great, but, agent, you suggested this step and I know I need to call a regulator here, so I'm actually going to put a check in. And by the way, you suggested this happens before that, and I happen to know in my business it happens the other way around, so I'm going to flip them. Great.

Speaker 2:

Now, having collaborated with my workflow design agents, eventually I'm going to get to a version of that workflow that, hopefully, is leaps and bounds better than, and different from, what I've always traditionally done. And hopefully I got there massively faster than if I did traditional process reengineering. Vodafone talks about going from Blueprint to app, not in 40 days but in 40 hours, so hopefully I'm moving pretty fast. Once I lock that down, I have a workflow that I can take into production and start running in a predictable way. So now we start talking about the other sets of agents in the taxonomy. Once that workflow exists, I still want to have conversational interfaces; I want my employees and my customers to be able to chat with it and find it. As an enterprise, I may have hundreds of workflows.

Speaker 1:

How do I help my employees and customers find the right one? And I think you also mentioned a couple of times that you want to integrate with the existing stuff that companies already have. But that also implies that you could include non-Pega agents in this equation at the end of the day, right? How does that impact your promises or your SLAs towards your customers? You're giving them assurances about the quality and the outcomes and all that, but as soon as you start incorporating foreign agents, interesting term these days, you might not be in full control anymore.

Speaker 2:

So that's one of the reasons why we think being able to call what we're calling the taxonomy of automation agents matters. These are agents that I might want to call once I've got my workflow designed; there might be individual steps that I want to have another agent do. We've got a Pega partner here, a company called Fisent, that's really, really good at content automation. So I may want to call Fisent's agents at a step of the process to say: hey, I just got this insurance claim, it came with a court document, I've got pictures of the car crash, I've got an accident report from the police. I want to call Fisent to, one, go extract all the information you can from this to fill out the rest of my claim information, and two, validate that that's an actual police report from the actual place. There are things that I can ask that agent to go do.

Speaker 2:

But I'm asking that agent to do a very specific piece of work, the same way I would ask a human to do a very specific piece of work. And over the years we've built very good workflow controls that allow us to assign work to a human, make sure the human has the right skills, make sure they get it done in the right time frame, and put quality checks around it to validate that the work they did was correct. And I think we can benefit, especially as companies are still getting comfortable with how these agents work, from managing them the same way we manage our humans. I'm going to assign you a piece of work. I'm going to hold you accountable to a time frame to get it done. I'm going to only give it to you if you cross the skill and security thresholds that I have. And after you finish it, depending on your experience level, I'm going to capture 20%, 30%, 40% of your output to do a quality check.

Speaker 1:

So are there minimal system requirements for agents, so to speak? Will you ever say: look, that's not our agent, and we don't want to include it in our platform because it cannot answer this question correctly?

Speaker 2:

Well, keep in mind, I'm not really including the agent in the platform. I'm orchestrating the agent, which may be living on somebody else's platform. And ultimately it's not my decision as a software vendor to call the agent. It's the decision of the customer, who designed the workflow and said: I want to call the agent at that step. I want to give our clients really good tools to manage that and do that effectively. I can give them guidance: here are some good agents that I've worked with. But there are also, I think, a lot of principles that, as new and exciting as this is, still come into play.

Speaker 2:

Calling an agent isn't that different from calling a third-party service. If I'm processing a credit card limit increase, I call a service from a third party to get somebody's credit score. It's a third-party service; I trust them, but I also probably put some checks and verifications around them, and I have a contractual relationship with them. I think we're going to do the same thing for some of the agents that we call, and we're going to treat them the same way we treat some of the humans. If the agent doesn't do a good job, if I'm doing quality checks on the agent and I'm finding that 60% of the time its work needs to go back for rework, I'm probably going to stop assigning work to that agent, and that agent may get fired. But I can only detect and do that if I have good workflow and management wrapped around the agent to begin with.
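The controls Schuerman describes can be pictured as a small "workforce management" loop applied to agents. The sketch below is purely illustrative: the class names, thresholds and sampling rates are assumptions made for this example, not Pega APIs.

```python
from dataclasses import dataclass

# Illustrative sketch of managing agents like human workers:
# skill/security gating on assignment, experience-based quality
# sampling, and deactivation when the rework rate climbs too high.
# All names and numbers are assumptions, not any vendor's actual API.

@dataclass
class Agent:
    name: str
    skills: set
    security_level: int
    experience: int = 0   # completed, quality-checked assignments
    reworked: int = 0     # assignments that failed a quality check
    active: bool = True

    @property
    def rework_rate(self) -> float:
        return self.reworked / self.experience if self.experience else 0.0

def qc_sample_rate(agent: Agent) -> float:
    """Capture a larger share of output for quality checks while the
    agent is inexperienced (the 20/30/40% idea from the conversation)."""
    if agent.experience < 10:
        return 0.4
    if agent.experience < 100:
        return 0.3
    return 0.2

def can_assign(agent: Agent, required_skill: str, min_security: int) -> bool:
    """Only hand work to an active agent that crosses the skill and
    security thresholds for this workflow step."""
    return (agent.active
            and required_skill in agent.skills
            and agent.security_level >= min_security)

def record_quality_check(agent: Agent, passed: bool, max_rework: float = 0.6) -> None:
    """Update the agent's track record; stop assigning it work ('fire'
    it) once its rework rate exceeds the threshold."""
    agent.experience += 1
    if not passed:
        agent.reworked += 1
    if agent.experience >= 5 and agent.rework_rate > max_rework:
        agent.active = False
```

In this framing, swapping a human worker for an automation agent changes the implementation of a step, not the governance wrapped around it.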

Speaker 1:

What makes Pega ideally positioned to actually be that orchestrator of agents? Because a lot of companies are saying that at the moment.

Speaker 2:

Yeah, I think everybody has realized that agents are going to need to be orchestrated, and everybody is kind of pushing to do that.

Speaker 2:

I think what gives Pega the opportunity to do that is, one:

Speaker 2:

We've been orchestrating humans for a pretty darn long time, and doing it in a pretty sophisticated way. Rob Walker was talking today about the fact that, as much as we talk about LLM and agent hallucination rates, humans also make mistakes, sometimes at an equivalent rate. So we've gotten pretty good at wrapping rules, controls, best practices, quality checks, escalations and service levels around the work we orchestrate for humans, and a lot of those tools, capabilities and know-how already inherent in our platform are really, really useful when you call an agent. So I think that gives us a strong set of capabilities. I also come back to the fact that the way in which we orchestrate these agents is going to mean rethinking so many of our core business processes, and that's not just a technical challenge; that is a business and strategy decision, and it's going to require a level of integrated business and IT thinking, creativity, comfort, confidence and, frankly, bravery to make that happen.

Speaker 2:

And with the Blueprint tool that we have, I think we're uniquely positioned to help our customers think through what that transformation looks like. And yourself as well, maybe? Because you're using it too, I would imagine.

Speaker 2:

We're using it, and we're also continually injecting our best practices into it. And now we've opened up that ability to our partners, who have massive amounts of domain expertise and best practices across lots of industries and use case areas. So we're going to be able to provide our clients a pretty massive library of best-practice understanding about how to rethink their business processes, and then make sure that they're not just orchestrating agents, but actually orchestrating them in the right process, in the right way.

Speaker 1:

Yeah, and the partner story, I think that's interesting as well, because that could also potentially increase Pega's reach in the market, right?

Speaker 2:

I think that's a large portion of the intent, yes.

Speaker 1:

Because you've always been very clear on who your target audience is, which types of companies you want to go for. But this actually makes it more accessible, maybe, to more companies.

Speaker 2:

I think so. Pega has always had very powerful technology. With some of the advances in Blueprint and some other areas, and with the ability to run it on Pega Cloud, we've dramatically lowered the barrier; there are lots of things that make it much easier for a client to consume Pega. And I think that has allowed us to think about how we open the aperture. The way we're going to do that, and succeed in it at scale, is by working through our partners and by having a go-to-market motion that leverages what our partners do and know.

Speaker 1:

A completely different question: how do you deal with the potential of the agentic capabilities in Blueprint going out of step with those in Infinity? Blueprint's agentic capabilities have been available since PegaWorld last year, and the new stuff that you announced this week is immediately available too, because that's what Blueprint is: it's continuously updated. But Infinity isn't; that only updates once a year. How do you handle the two potentially going out of step?

Speaker 2:

Well, I think that's actually been one of the powers of having Blueprint as an ideation and design engine and Infinity as an implementation and execution engine. Ideation and design generally need to move faster anyway. We can throw more ideas at that and, by the way, if we don't like them, we can toss them back out again, whereas anything we put in our deployment and execution engine has got to work, because it's running mission-critical processes.

Speaker 1:

But it needs to understand the input coming from the other side.

Speaker 2:

It does, but I think we are comfortable with the idea that the interface point between the two, the input structure, is pretty established. That's something we've been refining for upwards of 30 years: what are the components of a workflow, what does a data model need to look like?

Speaker 2:

We're pretty comfortable that, once we get the Blueprint done, we know the structure we need to put it in to get it into Infinity. And because we feel so confident in that, and this is why I think Pega's got an advantage here, because that structure is so mature, it gives us a lot of flexibility to keep playing with Blueprint and say: great, what are other ways we can build that structure? We can import documentation. We can take a video from your legacy system and import it. We can connect IP from a partner and use it to inform that structure. So I think we're going to continue to iterate pretty fast on how we allow Blueprint to do that more effectively. And to us it's been really refreshing to have Blueprint, which again is an ideation, design and creation machine, completely unbound from the traditional software release cycle that our clients actually need to consume.
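As a rough illustration of this separation: a design engine can iterate as fast as it likes, so long as it emits an agreed structure that the execution engine validates. The schema below is entirely hypothetical, invented for this sketch; it is not Pega's actual Blueprint format.

```python
# Hypothetical design-to-execution handoff: the design side (Blueprint,
# in Pega's case) may be produced by humans, agents, or imported legacy
# documentation, but the execution side only checks the contract.
# All field names here are invented for illustration.

blueprint = {
    "workflow": "auto-claim-intake",
    "stages": [
        {"name": "Intake",     "steps": ["Collect claim details"]},
        {"name": "Validation", "steps": ["Verify police report",
                                         "Extract document data"]},
        {"name": "Decision",   "steps": ["Assess damage", "Approve or route"]},
    ],
    "data_model": {
        "Claim": {"claim_id": "str", "incident_date": "date", "amount": "decimal"},
    },
}

def validate_blueprint(bp: dict) -> bool:
    """A stable, boring contract: a named workflow, stages with at least
    one step each, and a data model. How the design was produced is
    irrelevant to the execution engine."""
    return (isinstance(bp.get("workflow"), str)
            and bool(bp.get("stages"))
            and all(s.get("name") and s.get("steps") for s in bp["stages"])
            and isinstance(bp.get("data_model"), dict))
```

Because only the contract is fixed, the design side can add new input channels (documents, video, partner IP) without the execution side changing at all.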

Speaker 1:

Yeah, and the fact that it's unbound also made it possible to white-label it, for want of a better word, for partners. Otherwise that wouldn't have been possible.

Speaker 2:

Right and, frankly, to make it broadly accessible to clients for free to use as much as they want.

Speaker 1:

Basically it all boils down to the underlying architecture of everything that you do, I would imagine. You made a point during your keynote, I think, that architecture still matters, because nobody really seems to think about architecture anymore, especially with all the lofty AI and agentic stuff going around.

Speaker 2:

I think there's an extreme position that I've heard articulated sometimes, which is well, all software goes away and there's basically just an AI layer and some core data storage.

Speaker 1:

Wasn't it Satya Nadella of Microsoft who said that?

Speaker 2:

Satya Nadella said that, but Satya Nadella is still happily selling you Dynamics and Power Automate.

Speaker 1:

And all the other software. He's not going to stop selling you stuff.

Speaker 2:

That's the stuff that I think is almost the flying car vision: well, maybe, but I don't know if we need or want that. What I actually think is that there's going to be agentic infused throughout so much of every app we build. And in order for that to work, you've got to have the architecture right. You've got to know how to call different agents. You've got to know when it's okay to call a model that's entirely non-deterministic, and when you have to call something that is deterministic.

Speaker 2:

Rob made an interesting point this morning about one of the things that I think everybody is kind of dancing past, which is that we're starting to use large language models to do, in some cases, the exact same calculations that we could have done without a large language model, with a classical AI model.

Speaker 2:

What that means is that we're using significantly more CPU cycles, incurring significantly more cost, and also making a significantly larger impact on energy use. And it feels to me like we, as computer professionals, owe it to ourselves, our clients and, frankly, future generations not to immediately rotate into the most resource-intensive way of doing everything. And frankly, there's a business reason too, which is why Rob made the point. I could ask a large language model to figure out the next best action for a client, but, one, it's going to take about two seconds, which isn't fast enough, two, it's going to take a lot of CPU, and three, it's going to do so in a way that's not traceable or predictable. Whereas what I could do is ask that large language model to design the statistical model for figuring out the next best action for my client.

Speaker 1:

The biggest issue I have with everything that has to do with large language models is that they continuously start from scratch. And that's not necessary, because you already have a lot of stuff.

Speaker 2:

They continually start from scratch, and for use cases where I want to create things, where I want to be generative, where I want to do language operations, they're really good; frankly, they can do things that no other model we had could do. But, as Alan demonstrated yesterday, if I have structured data, like the structured data of how to play chess, and if a large portion of my decision-making is based either on historical structured data or on business rules, well, there are only so many places a knight can move on the board, and I don't need a large language model to scan through billions of pieces of content to figure that out. There's one document, it's very clear what the rules are, and you have to follow them. I can build a model that I can run on a laptop that can beat any chess player in the world.

Speaker 1:

Which also uses existing technology, right?

Speaker 2:

It uses classical AI and business rules, and it's really, really good and fast. So the reason why I think architecture becomes so important is that the organizations that succeed with AI are not going to have just a giant LLM hammer they swing at everything. They're going to have a sophisticated tool set of classical AI models, business rules, orchestration patterns, large language models, the new reasoning models that are starting to emerge, and MCP-based tools that they can give to agents. And if I'm going to connect all that together and then align it with stuff that actually delivers better outcomes for my customers, I'd better have a good architecture that allows me to make that stuff work.
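A minimal sketch of that "more than an LLM hammer" idea is a router that tries the cheapest, most deterministic tool first and only falls back to a large language model for open-ended work. The handlers and task shapes below are stand-ins invented for illustration, not any real product's API.

```python
# Illustrative tool routing: deterministic business rules first, then a
# cheap classical model, with an LLM only as a last resort. The
# handlers are stubs; a real system would call actual engines.

def business_rules(task: dict):
    """Deterministic: fires only when an explicit rule covers the task."""
    rules = {"credit_check": lambda t: "approve" if t["score"] >= 650 else "decline"}
    rule = rules.get(task["kind"])
    return rule(task) if rule else None

def classical_model(task: dict):
    """Fast, traceable statistical scoring (stub for e.g. a propensity model)."""
    if task["kind"] == "next_best_action":
        return max(task["offers"], key=lambda o: o["propensity"])["name"]
    return None

def llm(task: dict):
    """Expensive, non-deterministic fallback for open-ended work (stubbed)."""
    return f"<LLM handles open-ended task: {task['kind']}>"

def route(task: dict) -> str:
    """Try handlers from cheapest/most deterministic to most expensive."""
    for handler in (business_rules, classical_model, llm):
        result = handler(task)
        if result is not None:
            return result
    raise RuntimeError("no handler accepted the task")
```

The design choice here mirrors the chess example: when a rule or a small statistical model can decide, it is faster, cheaper, traceable, and the LLM never gets invoked.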

Speaker 1:

Yeah, and I especially like the fact that you're using a lot of existing stuff, because you've invested in that and it works, so why...?

Speaker 2:

Yeah, you've invested in it. And I also think sometimes in technology we love to just assume everything changes. A metaphor one CIO used that I kind of liked: when Airbnb showed up, it was like, well, that's it, that's the end of hotels. No, it's not.

Speaker 2:

Sometimes I actually just want to stay in a hotel. I want to show up, get to the desk, get my key; I want there to be a restaurant on the first floor with a bar, and I want to be able to accumulate my loyalty points. Sometimes I want to travel with my family and stay in a place that can house us all. Great, Airbnb introduces a new aspect of traveling, but that doesn't mean that the problems I've already solved need to be re-solved for an Airbnb world.

Speaker 2:

You just need to integrate it into the existing.

Speaker 1:

You need to integrate it into the existing ecosystem. So agentic AI is more or less the Airbnb of the... yeah, but we're still going to have hotels. Yeah, I think so too. All right, I think we're out of time.

Speaker 2:

Thanks again. It's always fun, always a pleasure.

Speaker 1:

And we'll look forward to doing this again next year. All right, we'll see you then.