Techzine Talks on Tour

How do you innovate for a future you can't entirely predict?

Coen or Sander Season 1 Episode 16

The title of this episode of Techzine Talks on Tour is a (perhaps somewhat rhetorical) question that Don Schuerman, the CTO of #Pegasystems, asks the audience during #PegaWorld Inspire, Pega's annual flagship event. Sander sat down with him at the show to discuss how statistical AI and GenAI can help find an answer.

During this conversation, Don and Sander discuss many facets of what drives #innovation. For example, the outcome of some research suggests that the hype around #GenAI not only drives GenAI adoption, but also causes an increase in adoption of 'good old-fashioned' statistical AI. In other words, innovation in one area can have unintended but positive side effects in others.

Another topic they discuss is how companies like Pega, but also others in the industry, can and should help organizations build innovation around use cases that haven't even been properly defined yet. That's one of the key issues that needs to be addressed today. We're at a stage where a lot of it is about pragmatic and practical use of AI, but that's only possible if you have a good idea of what is pragmatic and practical.

Apart from the general AI discussion, Don and Sander also get into some of the new developments at Pega. They talk about things like Knowledge Buddy, an example of #RAG (Retrieval-Augmented Generation), but also Pega GenAI #Blueprint. The latter of the two has the potential to upend the development of specific apps in organizations. You just tell it to build you an application for, say, loan applications, and it does so in a couple of steps.

Give this new episode a listen right now, and get valuable insights into innovations in software development in general, and in Pega's platform in particular. 




Speaker 1:

Welcome to this new episode of Techzine Talks on Tour. I'm Sander, I'm at PegaWorld Inspire this week, and I'm with Don Schuerman, the CTO of Pegasystems. Well, welcome to the show. First of all, great to be here. Well, there's a lot of GenAI going on at the moment, right? I heard somebody on stage saying it's GenAI, GenAI, GenAI. That's the future for development, at least, right? What's your impression of where we are at the moment when it comes to AI, or GenAI more specifically?

Speaker 2:

Yeah, I think we're at a really interesting kind of inflection point, right? I think one of the things that we are constantly reminding our clients of is that, in some ways, AI is not new.

Speaker 1:

No, it's been around since the fifties or something.

Speaker 2:

There have been predictive models and statistical data models, what we often call AI decisioning, that in Pega's world we've been working on with clients for over a decade. And I think sometimes a lot of the hype around GenAI, the 'we're going to use GenAI to do this and that', ultimately comes down to: well, actually, no, you probably want a really good predictive model that you can build on your own data set and construct relatively quickly.

Speaker 2:

You don't need a whole bunch of NVIDIA chips in order to build this thing. It's transparent, it can explain what it's doing and it actually can add real value to your business. So what I'm seeing is clients really starting to step back and think about how they're going to apply Gen AI, what are the use cases and challenges it can address, and then realizing that it's not a one-size-fits-all solution. You need to apply different types of AI and different approaches depending on the problem you're trying to solve and the regulations and data world you're running in.
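
To make the point about 'a really good predictive model on your own data' concrete, here is a minimal sketch in Python using scikit-learn on synthetic stand-in data. Nothing here is Pega's stack; it only illustrates how quickly such a model can be built, and how its coefficients make it explainable:

```python
# A sketch of "good old-fashioned" statistical AI: a predictive model trained
# on your own tabular data. The dataset is a synthetic placeholder for
# enterprise records such as churn or credit-risk history.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Stand-in for an organization's own data set.
X, y = make_classification(n_samples=1000, n_features=8, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# No GPU farm needed: this trains in milliseconds on a laptop.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"holdout accuracy: {model.score(X_test, y_test):.2f}")

# Transparent: the learned coefficients show which features drive the outcome,
# which is the explainability point made above.
for i, weight in enumerate(model.coef_[0]):
    print(f"feature_{i}: {weight:+.3f}")
```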

Speaker 1:

Would a covering term maybe be pragmatism? To look at it from a more pragmatic point of view than last year, when we all did all the big stuff?

Speaker 2:

Yeah, I mean, I think that AI is a transformational technology in much the same way that the internet was a transformational technology. But we get to that transformation by taking pragmatic steps, right. And I think what I'm starting to see organizations do is begin to look for and identify those pragmatic steps, so that we can actually prove value, we can prove trust, and we can begin to build the groundswell needed to drive the transformation we want to see.

Speaker 1:

It's one of those things that people always say about this: the impact of new technology is overstated in the short run, but understated in the long run.

Speaker 2:

Yeah, and I think we're exactly in that phase. And I don't know if generative AI is going to directly follow the Gartner hype cycle; it feels like the hype cycle is moving so fast right now, like a bunch of loop-de-loops as opposed to a single trough of disillusionment. But I think we will move through it, and we will see use cases and usages that pop out into that plateau of productivity.

Speaker 1:

Yeah, and are there lessons or learning points that you can take with you from the old-fashioned AI, as maybe we should call it now, into the GenAI world, so that we don't make the same mistakes we made in the past?

Speaker 2:

Yeah. Again, I think part of the lessons comes back to identifying the use case. At the end of the day, the goal of an organization shouldn't be to use GenAI, no, in the same way that the goal of an organization shouldn't be to use Microsoft Word or Microsoft Excel. The goal of the organization should be to drive its business. And if GenAI can be a tool that adds productivity for its employees, that helps it tackle problems it couldn't tackle before, that helps it deliver better experiences to customers: great. But you have to put it in the context of: what's the actual problem I'm trying to solve? What's the actual outcome I'm trying to drive?

Speaker 1:

That actually sounds very logical, right. And I think you also made a point on stage this week that you build it around the use case. But especially with this kind of technology, the use case is also a bit of a fluid concept, right? So how do you do that? How can you build around the use case if the use case isn't yet very well defined, if that makes any sense?

Speaker 2:

Yeah, I think so. I think we've been trying to look at GenAI through two lenses. One is there's a set of productivity use cases that I think we're all going to want to adopt, right? A simple example of that is summarization. We know GenAI is pretty good at summarization.

Speaker 2:

Of text, yes. Of text, not necessarily of voice. Well, if you can get the voice to text, then it can summarize it, right? We have technology that can turn voice into text, and it's pretty good at summarization. And it's especially good at summarization if the end consumer of that summarization is an employee who you're still going to hold accountable

Speaker 2:

for making sure the summarization is correct and valid. And so I think we're going to see summarization in lots of places. I see it when I do a WebEx meeting, I get it in my email. We've already put summarization into the contact center app that we have, which allows you to summarize customer interactions, and into case management at Pega, so that you can summarize what's happening in a case. That's a nice productivity gain, right?
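
As a rough illustration of the voice-to-text-then-summarize pipeline described here, with the employee kept accountable for the result: a sketch in which `transcribe` and `call_llm` are hypothetical stubs standing in for whatever speech-to-text and LLM services an organization actually uses, not Pega's contact center API:

```python
# Voice -> text -> summary, with the draft handed to an accountable employee.

def transcribe(audio_path: str) -> str:
    """Placeholder for a speech-to-text service."""
    return "Customer called about a delayed loan application and asked for a callback."

def call_llm(prompt: str) -> str:
    """Placeholder for an LLM completion endpoint."""
    return "Customer reports a delayed loan application; a callback was promised."

def draft_summary(audio_path: str) -> str:
    transcript = transcribe(audio_path)
    prompt = f"Summarize this customer interaction in one sentence:\n{transcript}"
    # The output is only a draft: the employee who owns the case reviews it
    # before it is stored, which is the accountability point made above.
    return call_llm(prompt)

print(draft_summary("call_0421.wav"))
```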

Speaker 1:

That's quite generic.

Speaker 2:

I mean, I think everybody would like to have that, right? And I think, over time, it's going to be something that we just expect, like spell checking: we just expect it to be part of the various apps and tools that we use. And I think it's important for us to do that because, frankly, it's going to become table stakes. But I also think enterprises need to understand that they're not going to differentiate their bank from another bank because they have summarization tools available to their employees.

Speaker 1:

But do you think it will remain at that point of just the relatively low-level, very tightly defined use cases? Productivity tools, yeah.

Speaker 2:

So I think those represent a great place to start, because they keep a human in the loop, they're safe, and there's some measurable business value attached to them. So by all means, deploy them, test them, validate them. But I think it's the enterprises that are able to push up into more transformational uses of it that will stand out. And again, I think part of what we need to have in this time period is a little bit of openness to experimentation: we see what works and we see what doesn't.

Speaker 2:

But one of the places where we see real opportunity is helping organizations think differently about some of the core aspects of their business. You know, at Pega we are a workflow and decisioning company, so we tend to think about workflows a lot, and we tend to think about decisions a lot. And we know from our experience, and probably the experience of anybody who's ever been through a process re-engineering effort, that getting people to align on what a workflow should actually be can be pretty painstaking. If you're not careful, you can end up sort of repaving the cow path; there are suboptimal outcomes that can happen. Well, if I can use GenAI, in partnership with some best-practice documentation, to recommend a starting point when I want to implement a new workflow, that can accelerate the requirements and design process, which is good, but it can also push the business to think in new and different ways.

Speaker 1:

It's funny you mention 'push', because that was one of my follow-up questions. Do you experience pushback there? Because this is also a human kind of issue that you need to solve to actually start working with GenAI, right? There needs to be trust, there need to be all the things that you need as an employee, or as an organization as a whole, to say: well, okay, this sounds good, but is it good for us?

Speaker 2:

And that's why, for example, the tool that I'm talking about here is called Pega GenAI Blueprint, and we just made it available for free on our website. Part of the reason for that is we want people to be able to touch it, see it, play with it, respond to it, and start to understand that this isn't a magic trick. We're here in Las Vegas, but this isn't a magic show where I'm trying to astound you and amaze you. I'm just trying to give you a tool, and I'm trying to help you see that, hey, in this instance, I can think of GenAI as a really smart sounding board that I can bounce an idea off, and it might come back with a different suggestion.

Speaker 1:

So in that respect, it's good to realize that this is just another tool that you can use. I mean, we shouldn't make too much of it, right.

Speaker 2:

I mean, I think in general, as a software industry, we like to hype everything, right? Everything that happens is going to transform everything.

Speaker 1:

Right, that's what I always try to de-hype.

Speaker 2:

Right. And I think, to your point earlier, I really do think in the long term we're going to look back on this sort of cusp that we've had with GenAI and say, wow, that changed a lot of things. Yeah, but I think the way you get to that is almost, as you say, by de-hyping it. By saying it's technology, it's a tool, there's always going to be new technology. How do I apply this to the really meaningful problems that I have? How do I test it, validate it, learn from it, get experience with it? And if I do that enough, in a continuous cycle, I'm going to look back and realize: oh my gosh, we completely shifted the way we've been working.

Speaker 1:

But that's the way of the world anyway, right? It's the same with your own knowledge. It builds up over time, and after a while you realize: I know quite a bit about this now, and I didn't know anything 10 years ago.

Speaker 2:

That's more or less the same. It's like, look, if you want to train for and run a half marathon, you go out and you run a mile, and then you run a mile and a half, and then you run two, and the next thing you know it's like, wow, I can run 10 miles, I didn't know I could do that. It's the same kind of idea.

Speaker 1:

I mean, just a yes or no question: is GenAI something for every type of company?

Speaker 2:

Yes, I think so. Again, I keep coming back to the analogy of Excel. I really do think in many ways this is going to become a prevalent tool. I don't think any company exists without using some sort of spreadsheet or office program.

Speaker 1:

If you use that analogy, then the lack of adoption, the under-use of Excel, is also a problem, because Excel can do so many things that most people don't even know about. You don't want to repeat that mistake, but you also don't want to overuse Excel, right?

Speaker 2:

I'm sure we've all opened up Excel workbooks that have so many macros going on inside them that the thing can't even open on your computer, right? So, as with any tech, it's: how do we apply it? How do we use it for what it's really good for? How do we attach it to a business use case? And how do we not shove so much into it that we're overwhelming our capacity to appreciate

Speaker 1:

it? So that's sort of a process-oriented thing, right? You say, well, how do we want to use it? So you need to think about how you want to use it. But are there also technical challenges for companies thinking about starting to use GenAI in their daily operations? For example, data access, or, you know?

Speaker 2:

Well, I think one of the important things to understand, at least in my mind, is the distinction between this decisioning or statistical AI, which companies can and will train on their own data. We heard from National Australia Bank this morning, which uses Customer Decision Hub to improve customer decisioning based on their own data about customers. Generative AI is very different. Enterprises, even really, really big enterprises, do not have enough data to train their own GenAI models. They don't have the data, they don't have the chips, they don't have the talent. So GenAI is going to be more like a power source that we plug into. The models are already built; we just use them. So the question for the enterprise, I think, becomes: how do you use those models in a way that interacts with your data,

Speaker 2:

but in a way that is safe?

Speaker 1:

Yeah, especially when it comes to RAG, which is, I think, the most popular use case of GenAI ever since it was introduced somewhere last year.

Speaker 2:

Yeah, so that RAG pattern, Retrieval-Augmented Generation, where you basically feed your own content into GenAI.

Speaker 1:

Yeah, but then you need to decide what data this RAG setup is going to use, right? I mean, you need to think about that as a company as well.

Speaker 2:

You do. And again, that to me comes back to the use case, right? That RAG content may be a set of knowledge articles that I take and use to drive prompts into the GenAI model, to summarize for me, and that might be really good if I'm trying to solve my customer service agents' access to content and information. Or that RAG content may be large amounts of customer history, because I want to be able to ask the GenAI questions about what patterns are happening in that history. So, pulling that data into the RAG setup, companies are going to need to decide: do they build some skills around doing that themselves, using technology like embeddings and vector databases

Speaker 2:

and all the pieces that are needed to make that work? Or do they look at cloud-based services, like what we have with what we call Pega Knowledge Buddy, which packages that all up, so that you can focus instead on: what's the content I want to feed into it, and how do I make sure that I can govern that content in an ongoing way?
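
A bare-bones sketch of the RAG pattern discussed here: embed your own curated content, retrieve the pieces closest to a question, and feed them into the prompt. Real systems use learned embeddings and a vector database; the toy word-count vectors below are stand-ins, not Knowledge Buddy's implementation:

```python
# Minimal RAG loop: index your own content, retrieve what is closest to the
# question, and build a grounded prompt.
from collections import Counter
import math

documents = [
    "Refunds are processed within 14 days of approval.",
    "Loan applications require proof of income and a credit check.",
    "Support is available on weekdays from 9 to 5.",
]

def embed(text: str) -> Counter:
    # Real systems would call an embeddings model here.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) \
         * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question: str, k: int = 1) -> list[str]:
    q = embed(question)
    return sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

question = "What do I need for a loan application?"
context = "\n".join(retrieve(question))
# The retrieved source travels with the prompt, so an employee can follow the
# link back to the content and validate the eventual answer.
prompt = f"Answer using only this content:\n{context}\n\nQuestion: {question}"
print(prompt)
```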

Speaker 1:

But then you still need to be able to trust what it gives back, right? Obviously, you have this sense of: this is my own data, so the data points are correct, because you probably curated them before saying 'look there for the answer'. But it can still make wrong connections between those data points, can't it?

Speaker 2:

We've seen that. I think there have been a couple of publicized incidents where even RAG models have recommended wrong things, and that's why I do think, even with stuff like that, there's going to be an employee in the loop for a period of time. I think most enterprises are fine with giving an employee a RAG model that gives some answers, as long as you train the employee: hey, by the way, you're going to get this answer, and you should validate it. You're going to get, as part of the RAG output, a link back to the source content, so you have the tools you need to validate it. But you need to feel accountable for validating it, which is very, very different from just saying, oh, we're going to put a RAG-based chatbot on the website and let customers ask it questions.

Speaker 1:

So those are some things where we have to build up a maturity curve to get there. And maybe you also need to define best practices and all these things around it, on how to do it properly.

Speaker 2:

Absolutely. And what are the guardrails? What are the audits that need to be put in place? That's why we've spent a lot of time thinking about okay, if you want to do RAG at an enterprise scale, what governance do you need to wrap around it? What auditing do you need to wrap around it? What security protocols do you need to wrap around it? Because to me, that's where the challenge of making this stuff work is.

Speaker 1:

Yeah, I totally agree. And that's obviously the other part of AI in general, and I think it also plays a role in GenAI. I remember five years ago, I was here as well, and back then

Speaker 1:

it was a lot about statistical AI, yeah. And it was all about explainability and predictability, and that was very important, it was all the craze. It's still important, but GenAI is very hard to explain and to predict, right?

Speaker 2:

And I think one of the things that's really important again, and this is why it's almost fingernails on a blackboard sometimes when I hear organizations talk about their AI strategy, is that what you really need is an AIs strategy, plural, because there are multiple AIs that you apply in different ways. The statistical AI that we were just talking about can be very explainable. The other thing that's also really, really important about it is that it's deterministic. So in other words, even though it's making probabilistic models,

Speaker 1:

And if you do it three times?

Speaker 2:

If I feed the same data into the model, the model will give me the same answer three times. Generative AI is not deterministic, so you have to be comfortable leveraging it in use cases where the fact that it's not going to give you the same answer maybe even is a good thing. That's why I think using generative AI, for example, to suggest workflow designs is great, because you can throw an idea at it, and if you don't like the result, ask it again and it'll come back with a different variation. Again, it's like a brainstorming partner that you can work with.
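
The determinism contrast drawn here can be shown in a few lines: a fitted statistical model is a fixed function (same inputs, same score, every run), while generative decoding samples from a distribution, so repeated calls can differ. A toy sketch, not either product's actual logic:

```python
# Deterministic vs. non-deterministic in miniature.
import random

# Statistical side: a fixed scoring rule learned from data. Feed it the same
# inputs three times and it returns the same answer three times.
def risk_score(income: float, debt: float) -> float:
    return 0.7 * debt / income

print(risk_score(50_000, 10_000))  # 0.14
print(risk_score(50_000, 10_000))  # 0.14 again: deterministic

# Generative side: decoding samples from a distribution, so repeated calls can
# come back with different variations, like the brainstorming partner above.
suggestions = ["route to underwriter", "request extra documents", "auto-approve"]
weights = [0.5, 0.3, 0.2]
for _ in range(3):
    print(random.choices(suggestions, weights)[0])
```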

Speaker 1:

What I really like about that, actually, is that it can also help prevent bias, right? Because it can give you stuff that you, looking at it from your own perspective, wouldn't have thought of.

Speaker 2:

Exactly.

Speaker 1:

Exactly, it can actually give you something that you didn't expect.

Speaker 2:

I was saying this morning that I think inclusivity and innovation go hand in hand, and the reason is that the more voices and perspectives you bring in, the more ideas become available to you that you might not have in your own perspective set. And GenAI can open that up a little bit.

Speaker 1:

But then it also muddies the water a little bit, in a sense, because what do you leave out? For example, if you use GenAI Blueprint, you ask it to come up with parts of an application that you may want to build, but then you need to decide which parts you want to keep and which parts you don't. That's still a trial-and-error kind of thing.

Speaker 2:

But that's stuff that you're going to have to do anyway if you want to build an enterprise application.

Speaker 1:

Yeah, that's true.

Speaker 2:

And I'd much rather you start with an idea, a starting point that feels optimal, which GenAI and some of our best practices can help provide, and then have the conversation about what to put in and what to leave out, rather than spending nine weeks just to get to the point where you're ready to have that conversation.

Speaker 1:

But doesn't the lack of predictability and explainability prevent GenAI from becoming part of the really core processes

Speaker 2:

of companies? Well, I think.

Speaker 1:

Because back in the day, explainability was sort of a must, especially in highly regulated areas.

Speaker 2:

Again, I think it really, really depends on the use case. Here we're talking about using GenAI at design time, to help you design your business process. Once you run that process, I actually want to use statistical AI to predict what might happen next, where you have explainability and you have determinism.

Speaker 1:

You get back to the AIs. Right, exactly.

Speaker 2:

There are places where I might use generative AI during a process, for example to draft an email that I want to send. But I want to keep a human in the loop to say, hey, we've drafted this email, you need to validate it and approve it, and the process is going to audit that you did it, so that nobody can say GenAI is responsible. No, it's audited, and you did it. So I think that's why it really comes down to the use case, the user, and the controls you wrap around it.
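
A small sketch of the audited human-in-the-loop step described here: the model only drafts, a named employee approves, and the approval is recorded so accountability stays with a person. The field names and workflow shape are illustrative assumptions, not Pega's case management API:

```python
# Drafted by GenAI, approved by a named human, and audited. Illustrative only.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditedDraft:
    draft: str
    audit_log: list[str] = field(default_factory=list)
    approved: bool = False

    def approve(self, employee: str) -> str:
        # The workflow blocks until a person signs off; the log records who
        # and when, so nobody can say "GenAI is responsible".
        self.approved = True
        stamp = datetime.now(timezone.utc).isoformat()
        self.audit_log.append(f"{stamp} approved by {employee}")
        return self.draft

email = AuditedDraft(draft="Dear customer, your application has been received...")
final_text = email.approve("j.smith")
print(email.audit_log)
```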

Speaker 1:

So in that sense, statistical AI may have its own identity to a certain extent, but GenAI won't, because there's always somebody who has to sign off. I think in the near term, right, GenAI is always going to have that sort of sign-off wrapped around it. All right, it's funny you mention it, because we're almost out of time. On stage this week you were talking about the future of AI, which is completely unpredictable.

Speaker 2:

Completely unpredictable. I think my whole point was that nobody really knows.

Speaker 1:

You already mentioned some of it: 10 years from now, we're going to look back at this point and say, look how far we've come. The reason I wanted to ask this question is that it's based on research you commissioned, I think. One of the key findings was that GenAI also made AI, so the old-fashioned AI, more popular, or more widely adopted. Exactly, right. So do you see this as sort of a step up for proper AI? Maybe that's the wrong word.

Speaker 2:

Statistical.

Speaker 2:

AI, or decisioning AI. So those of us who have been around AI for a while will know that we've been through a lot of AI cycles. AI was a hype in the 60s and 70s, when it was expert systems. Then it went away. Then it came back, and it was predictive models. Then we entered the AI winter, and it went away. And then AI was back again with sort of Google, and then Salesforce

Speaker 2:

Einstein, and then it disappeared again, and then it came back with GenAI, right? So I think we go through these hype cycles and these waves, as we do with, frankly, every technology.

Speaker 1:

Like 3D movies, right? Exactly, 3D movies.

Speaker 2:

3D movies. I was thinking about that looking back, right. So, we talk about this AI being almost like the internet. One of the great examples of the internet bubble bursting was Pets.com. You remember Pets.com? That's a symbol of the bubble bursting. Well, I order all of my dog food on Chewy.com. So we do end up back there, we end up at the transformational point eventually. It just takes a couple of false steps, false starts, to get to the end point.

Speaker 1:

So, no big predictions about where we're going in terms of AI? I mean, look, I think we're going to see it in the enterprise.

Speaker 2:

I think we're going to see AI injected into transformational thinking, helping us push transformational thinking. And I do think there's a huge opportunity, as we've been talking about a little bit at this conference, especially for enterprises that struggle with things like legacy modernization and legacy transformation, to plug AI in as a tool to help accelerate that.

Speaker 2:

So I wouldn't be surprised if, a year from now, we look back and say, boy, we accelerated a lot of our legacy transformation in ways that we didn't think were possible, because you can pull in a lot of your legacy stuff and help transform it into something different.

Speaker 1:

That's a good note to end this conversation on, I think, because it's a positive one.

Speaker 2:

I always like that.

Speaker 1:

So, thank you for being on the show. Thank you very much, pleasure to see you. I'm looking forward to next year, to see how big the impact has actually been of what we've been talking about this week. Yeah, it'll be exciting. I hope so.