Techzine Talks on Tour

The best architecture for the AI Economy uses private and public AI (Mike Beckley, Appian)

May 22, 2024 Coen or Sander
This brand new episode of Techzine Talks on Tour features a discussion with Mike Beckley, the CTO of Appian. Beckley's company has a very clear and generally common-sense approach to AI. Preparing your company for the AI Economy isn't necessarily hard, he argues. The key components are all available; you just need to implement them correctly.

During our conversation with Beckley, we talk about how real the AI Economy is already. More importantly, though, we discuss what is needed for organizations to be part of it. 

Appian plays an interesting role in this process. A champion of what it calls private AI, the company also recognizes that public AI has merit in several use cases. So companies like Appian work hard on tying the two 'types' of AI together. The collaboration agreement that Appian has with AWS (Amazon Bedrock, to be precise) is the kind of arrangement the AI industry as a whole is gravitating towards.

Another topic in the AI discussion is how you ensure that data is being made available to workloads that utilize it. In other words, how do companies get their data foundation in order? Some sort of data fabric (or something along those lines) is part of the solution, Beckley argues.

Finally, Appian itself also needs to change its internal architecture to a certain extent in order to keep up with AI advances in general. We conclude our conversation with Beckley by talking about EPEx, or Elastic Process Execution. That should give significant performance improvements to Appian's own processes.

Listen to this new episode of Techzine Talks on Tour now!  


Speaker 1:

I'm here with Mike Beckley. He's the CTO and co-founder of Appian. Mike, welcome. Great to see you again, Sander. There's quite a bit of talk about what we call the AI economy, or what you call the AI economy in a recent press release. I would like to talk a bit more about that, and especially about how organizations can adapt to it, or have to adapt to it; basically the underlying architecture that they need to adopt. So first things first: how real is the AI economy? What is it, right?

Speaker 2:

Well, the AI economy is building on the digital economy that has become very real. So you think about the Internet of Things; you think about concepts like behavior-based pricing, already changing the insurance market. All of that is based on the sensors that are starting to permeate our lives, as internet connectivity is now everywhere and our phones are everywhere. There are sensors in your car, there are sensors in your home and your thermostat, and all this is generating a whole avalanche of data that is being used to deliver better services and better experiences, and is also generating a lot of new workloads and opportunities that far exceed humans' ability to comprehend. And so this is where AI is really being injected.

Speaker 1:

And what does it mean, this AI economy, for us as a society or as enterprises?

Speaker 2:

Well, what the AI economy means is that there is a tremendous opportunity to improve our operations, improve our decision-making, improve the way we take care of our customers and improve the way we take care of our employees. And that's because a lot of information which was previously thrown away can now actually be used, because it can be used in real time. That data is oftentimes worthless if it can't be used quickly, and there's too much of it for humans to ever read. That's the real truth of the AI economy today: AI allows us to read vast amounts of information and get value from it right away. A simple example would be in universities and colleges.

Speaker 2:

You have a 50,000-student university system, a public university here in the United States, that's using Appian AI to help student advisors. We have a huge problem in this country, and maybe in Europe, with students dropping out of university at higher rates than we've seen in a long time. We haven't really recovered as a society from COVID. And so the small group of student advisors, these adults whose job it is to help students succeed, have to go through massive amounts of information in these student files and an ever-increasing number of student problems. Appian's data fabric can connect to more systems than ever to look at financial data and academic data and social data and behavior data, but now they can summarize all that information with AI, and so actually make an engaging outreach to the student that suggests they might have something useful to say, and that there is a grown-up they might actually trust to help solve a problem. That's an example of how the AI economy will work.
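The pattern Beckley describes, combining records about one student from several source systems without first migrating the data, can be sketched roughly as follows. This is an illustrative sketch, not Appian's actual data fabric; all system names and fields are hypothetical.

```python
def student_view(student_id, sources):
    """Merge the per-system records for one student into a single dict.
    Each source keeps its own storage; we only fetch on demand."""
    view = {"student_id": student_id}
    for name, fetch in sources.items():
        view[name] = fetch(student_id)
    return view

# Hypothetical source systems, each exposed as a fetch function.
sources = {
    "financial": lambda sid: {"missed_payments": 2},
    "academic": lambda sid: {"gpa_trend": [3.4, 2.8, 2.1]},
}

profile = student_view("s-123", sources)
# 'profile' would then be handed to an LLM to summarize for the advisor.
```

The point of the sketch is that the combined view is assembled virtually, at read time, rather than by moving everything into one new system first.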

Speaker 1:

I think you make an interesting point by referencing data fabric. Obviously you would, because you work for Appian, but that leads me into the next question. This all sounds very good and very plausible, but obviously, as an organization or an institution or whatever you are, you need to be able to prepare yourself for this, right? To be able to use it. So what are the key components?

Speaker 1:

I mean not necessarily only from your portfolio products Sure, sure but more in general, what are the key components for organizations to adopt this AI?

Speaker 2:

Well, let me say two things about that. The first is that I think there's less preparation necessary than people realize, because of some other technology advances. But there is some. And the central point is: AI is worthless without data, and it's not very useful without a process to operationalize how you use those insights. Because AI on its own can do some parlor tricks. It can do some fun things. It can even improve your life a little bit. It certainly will help students write term papers faster, whether that's a good thing or not.

Speaker 1:

I'm not sure if the contents of those term papers are very good, but okay.

Speaker 2:

But what it can't do without human intervention is really deliver a lot of value. That's why, for example, in my student advising example, which is real, it's not making the decisions; it's summarizing the information and finding the information so that a human can make the right decision and give the right advice. And that's the state of AI today. But to do that well, you need to have the data prepared. Some people argue that businesses aren't getting the value they need from AI, and can't get it, until they move all of their data into the cloud, until they modernize all their systems and adopt new ERP systems, for example, and then they think there'll be this massive value delivered from AI. Well, it would certainly help, but I would argue technologies like data fabric (and there are other companies claiming to have data fabrics too, so I assume they've decided this is a necessary approach as well) allow you to bring together data without waiting for it to be migrated into new systems. That's why I see AI able to deliver value today.

Speaker 1:

Obviously there's also some sort of a self-serving part in this, right? For companies to say you have to move your data to wherever.

Speaker 2:

Yes, of course. If your whole business is being paid to modernize ERP systems, you're going to say you've got to do that first before you get AI. And I'm saying, well, you know, it wouldn't hurt, except for the fact that it's…

Speaker 1:

When somebody says something to you that may not be entirely true… Maybe that's not the right word, but I mean…

Speaker 2:

Yeah, it's hard to sell in a very noisy marketplace with a lot of ideas, and the good news here is there is so much immediate value for some AI use cases that these early pilot programs can be very promising. The problem is really more about how do you go from pilot to production, how do you scale up and how do you see consistent results.

Speaker 1:

Well, that reminds me of what we saw with big data, right? Lots of interesting pilots, but very little production.

Speaker 2:

Right, and a lot of the early vendors in big data struggled, because it turns out that when you just pour lots of data into a data lake, you don't necessarily know how the data relates to each other, and you have a lot of data that doesn't work well together. So that will happen again to some degree.

Speaker 2:

But that's why we focus on telling a story here about how AI is built on a pillar of data and process. If you as an organization say: all right, let's figure out what data we need, because we have a certain hypothesis, we think we can use AI for predictive maintenance, well then you need some sensor data on the things that might break, and once you have that data, you can have a model trained. And then the question is: what do you do with that data? Okay, now you know your airplane is going to break, now you know your car is going to break, but what do you do with that insight? Because actually there are many different types of breakage; some are serious and some are not, and some can wait and some cannot. And so that's why you need process to operationalize how a human will take these inputs from the AI and decide what the right action is.

Speaker 1:

And then, in this example, you can take a sort of risk-based approach to this, right?

Speaker 2:

Precisely, yes, exactly.
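The risk-based routing step just discussed, where a model's prediction goes through a process rule before any human action is scheduled, can be sketched like this. The thresholds, labels and field names are hypothetical, purely for illustration.

```python
def triage(failure_probability: float, safety_critical: bool) -> str:
    """Map an AI model's failure prediction onto an operational action
    that a human then confirms; a process rule, not the model, decides urgency."""
    if safety_critical and failure_probability > 0.5:
        return "ground-immediately"      # serious breakage: act now
    if failure_probability > 0.8:
        return "schedule-this-week"      # likely failure, not safety-critical
    if failure_probability > 0.3:
        return "schedule-next-service"   # can wait for the next service window
    return "monitor"                     # no action yet, keep watching the sensor
```

The model supplies only the probability; how much risk each action tolerates is encoded in the process, which is the "operationalize" step Beckley refers to.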

Speaker 1:

I mean, listening to Appian in the past couple of years talking about AI, there's also this distinction between public and private AI that you make quite a lot. I don't think I'm exaggerating; I've heard it many times from you, and from Matt as well. Why is it so important to make that distinction?

Speaker 2:

Well, I think that AI itself is so exciting, especially this new generative AI, ChatGPT. It's the first time in a long time that people have suddenly, immediately been able to experience something valuable from AI, something that feels like it could be even more valuable with a little bit more work. So there's tremendous excitement, but that excitement can sometimes overcome the rational, reasonable and legal requirements to not spill data about your customers and not spill your secrets. When you ask a question, are you actually asking a question which reveals sensitive data about your organization, while your organization has contractual obligations to other organizations? So private AI is about enterprise AI. It's about saying: okay, how do we, as a society, structure ourselves such that we inject AI into our daily lives and our daily work in a way which protects us more, as opposed to less? How do we not just comply with the EU's new privacy regulations, but do a better job of ensuring that people's personal information is protected and not abused?

Speaker 1:

I think that's a key point there, right? Not just comply, but do the best you can.

Speaker 2:

Right, exactly.

Speaker 2:

And there's no reason why we can't take this moment of reflection and say: public AI is actually doing some real harm to creators, to artists, to musicians, to the people who make life interesting and magical. We need to be careful we don't let that run amok, and the enterprises, where we really profit from things, are the key place to draw that hard line and say: hey, for enterprise AI, it absolutely must be private.

Speaker 1:

But aren't you limiting the use cases of AI to a certain extent? Or, in other words, is it possible to extend private AI beyond your own four walls? Maybe referencing the, I think it was called, strategic collaboration agreement with AWS?

Speaker 2:

So AWS has partnered… sorry, the strategic collaboration agreement with Amazon is a great way to actually facilitate private AI, because what Amazon does for us with their AI services is take many different models that you can also find in a public way, like the Claude models from Anthropic, as well as Amazon's own Titan models.

Speaker 1:

They have 23 foundation models now, I think.

Speaker 2:

Right, and there's a growing library there, and that will change over time. We're able to deploy them privately for our clients, and so this becomes the client's own model that they can fine-tune, that they can work with and put within their own security boundaries, so that their data never leaves their control.
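To make the mechanics concrete: calling a Claude model through Amazon Bedrock keeps the request inside the customer's AWS account boundary (optionally via a VPC endpoint) rather than sending it to a public API. Below is a minimal sketch using boto3's `bedrock-runtime` client; the model ID, prompt and advisor scenario are assumptions for illustration, and it presumes the account has been granted access to that model.

```python
import json

# Hypothetical model choice; Claude 3 Sonnet was one of the Bedrock models in 2024.
MODEL_ID = "anthropic.claude-3-sonnet-20240229-v1:0"

def build_request(prompt: str, max_tokens: int = 256) -> dict:
    """Build the Anthropic-on-Bedrock Messages API request body."""
    return {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }

def summarize(client, text: str) -> str:
    """Invoke the model through Bedrock; traffic stays within the configured
    AWS security boundary instead of going to a public endpoint."""
    response = client.invoke_model(
        modelId=MODEL_ID,
        body=json.dumps(build_request(f"Summarize for an advisor:\n{text}")),
    )
    payload = json.loads(response["body"].read())
    return payload["content"][0]["text"]

# Usage (needs AWS credentials and model access, so not run here):
# import boto3
# client = boto3.client("bedrock-runtime", region_name="us-east-1")
# print(summarize(client, "GPA dropped from 3.4 to 2.1; two missed payments."))
```

The security boundary is determined by the account, region and networking configuration around the client, which is the "their data never leaves their control" point above.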

Speaker 1:

So in that respect you could argue that the boundaries between public and private can be made to disappear this way, to a certain extent.

Speaker 2:

What I can say is that you can argue over where the line of what is private lies. When a company creates a private cloud with AWS, I think we've come as a society to accept that that is a mostly secure, compliant, well-regarded barrier. And there are further levels you could go: you could say I don't trust cloud vendors at all and I'm going to run everything on my laptop and never connect to the internet. But in our modern world, what used to be considered on-premises is now considered self-managed.

Speaker 1:

Because I'm trying to make the connection with the AI economy that we talked about earlier. If you adopt that, and if you embrace that, you can't say: I'm going to keep everything on-premises. Maybe in some niche use cases, but in general you need to embrace…

Speaker 2:

Well, you need to embrace access to the information. You don't need to embrace sharing your information with the rest of the world. And that's the difference. And the AWS Strategic Collaboration Agreement makes it possible for us to bring private AI to a much bigger audience.

Speaker 1:

Yeah, I think that's good, because when I listened to all the talk about private AI last year, it didn't really seem feasible yet back then, because it was so 'public bad, private good'. Obviously that's part of a dialectic, right?

Speaker 2:

Well, what's happened is that it's been quite feasible, and it's forced many of the previously public AI vendors to invest heavily in security and privacy controls, and everyone's had to come to terms with the fact that public AI is really limited. Even OpenAI has an enterprise offering where they're making many similar-sounding private AI promises now. So I think that this advocacy of private AI has been nothing but good. Now, it's still on the consumer, buyer beware: how much privacy am I getting? Where is my data going? Validating the security boundaries and the security controls, certainly.

Speaker 1:

So it's not necessarily a technical discussion. Because if, for example, I were to ask you: are we there yet? We're probably not, right? We're not there yet with extending private AI all the way to end-to-end processes. Maybe technically it's possible.

Speaker 2:

We're definitely extending private AI to end-to-end processes. Today that's in production and real. The question, I think you were getting at, is: what other use cases, ones that might require public AI, are we not able to address? And there I don't know. I would have to be presented with something that I haven't seen yet, and I'm sure there's something out there.

Speaker 2:

There's significant interest in training public AIs on the world's data, and those are issues where, if you get past copyright (which is a big issue to get past, and I don't know if you can, but if you could), there would be some value in combining public AI with private AI. But you'd want to go from the direction of less security to more security, not the other direction. Just because there's a bridge doesn't mean we can't bring insights from the public world into a private world, combine them with a private insight, and keep that new insight private.

Speaker 1:

Yeah, so, looking a little bit forward, we already talked about the architecture or the components that you need, and you mentioned that it's not necessarily very difficult.

Speaker 2:

No.

Speaker 1:

It's the human element that needs to address these things.

Speaker 2:

Well, this is what all of the low-code vendors like Appian are trying to do for you. We're trying to eliminate the need for you to know how to program a database. We're trying to eliminate the need for you to know how to program iOS and Android to build mobile apps. We're trying to eliminate the need for you to write HTML, CSS and JavaScript for web apps. And now we, and the industry, are going a long way in trying to eliminate the need for you to be a data scientist and an AI expert.

Speaker 2:

Each new large language model comes with a book-length set of instructions for how to prompt it, how to ask questions of it. What prompts does it accept? Does it understand XML tags? Which XML tags does it understand? What will it do with those tags? What output can you expect, and therefore what output can you program into your application? That changes all the time, and every new version, even from the same vendor, changes so quickly; this industry is moving so fast. Relying on Appian as an example (I can't speak for the other low-code vendors): now you can embed our private LLM and trust that we will keep enhancing it, we'll keep changing it, and your app will keep working. That's our job: to translate your question in natural language into prompts that work, so before we upgrade the model, we upgrade our prompts.
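The "prompt translation" idea Beckley describes, a stable application-facing interface backed by per-model prompt templates that the platform maintains, can be sketched as below. The model names and template shapes are invented for illustration; this is not Appian's actual implementation.

```python
# Per-model prompt templates, maintained by the platform, not the app builder.
TEMPLATES = {
    "model-a-v1": "Human: {question}\n\nAssistant:",   # older completion-style model
    "model-b-2024": "<task>{question}</task>",          # model that expects XML tags
}

def to_prompt(model: str, question: str) -> str:
    """Translate a natural-language question into the prompt shape
    a given model expects; the app only ever supplies the question."""
    try:
        return TEMPLATES[model].format(question=question)
    except KeyError:
        raise ValueError(f"no prompt template maintained for {model}")
```

When a model is upgraded, only `TEMPLATES` changes; applications calling `to_prompt` keep working, which is the backwards-compatibility promise described above.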

Speaker 1:

Maybe I haven't thought it through enough, but, for example, if you do the connection with Bedrock, right, with Amazon Bedrock. Obviously, if you use Claude Sonnet or the other one, there's a big one and a small one; I forgot the other name.

Speaker 2:

Yeah, it's Claude Instant.

Speaker 1:

That doesn't matter. How do you make sure that your customers aren't going overboard in terms of cost? Because you can say: well, if you use this model you can do lots of interesting stuff. But then at the end of the month they will get the bill, and that's not really what they signed up for.

Speaker 2:

Yes, yes, of course. So Appian has, for the moment, provided different pricing tiers which include significant consumables, so that we're really taking away the risk from the customer in many ways, and we'll see how that evolves. But we're also providing the tooling. Again, instead of just having your data scientists guess what the users will do, Appian's whole methodology is to make it low-code, so you can have the users as part of the design and you can have much better forecasting of what kind of demand you'll have. And we're giving you the tools to better monitor the usage, so that you can make more intelligent decisions more quickly about how much consumption of these services you are willing to bear, and at what value.

Speaker 2:

And, more importantly, with process mining, with our new Process HQ. Sorry for the commercial here, but again, there's a reason for this. We talk about the AI economy; there's a velocity here that's increasing, where people are going to use more of this AI service the more valuable it is, and we see that creating an exponential increase in consumption and an increase in processes that are flowing through your system.

Speaker 2:

So we're giving you the tools to automatically mine those processes and measure their value and their cost, so that you can make better predictions about what you want to invest in and prioritize: what automations you're willing to spend money on, and which ones you want to retire because they're not returning the ROI, the return on investment, you expect.
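The consumption guardrails described across the last few answers, metering each AI call against a cap and keeping per-process tallies so cost can later be weighed against value, might look like this in miniature. Names, units and the token-based cap are illustrative assumptions, not a real product API.

```python
from collections import defaultdict

class UsageMeter:
    """Meter AI-service consumption against a cap, per business process."""

    def __init__(self, monthly_token_cap: int):
        self.cap = monthly_token_cap
        self.used = 0
        self.by_process = defaultdict(int)  # tokens consumed per process

    def record(self, process: str, tokens: int) -> None:
        """Record consumption; refuse the call once the cap would be exceeded."""
        if self.used + tokens > self.cap:
            raise RuntimeError(f"monthly cap of {self.cap} tokens would be exceeded")
        self.used += tokens
        self.by_process[process] += tokens

    def report(self):
        """Tokens per process, highest consumers first, for ROI review."""
        return sorted(self.by_process.items(), key=lambda kv: -kv[1])
```

The `report` output is the raw material for the retire-or-invest decision mentioned above: processes that consume much but return little are candidates for retirement.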

Speaker 1:

To a certain extent, this sounds a lot like what's now very hip and happening in the open-source world: platform engineering, right?

Speaker 2:

Yes, it very much is platform engineering.

Speaker 1:

So this is basically what you do, and you've been doing it for a long time, apparently.

Speaker 2:

Yes, and it's low-code platform engineering. You know, there's the expensive way to do platform engineering, which is hire a bunch of brilliant developers, build a platform, maintain it, and then create services that your, you know, customers and dev teams and app teams can use. Or you can adopt a platform. And there are competitors to Appian, and they're all doing this platform engineering like we are.

Speaker 1:

Especially if you do it like this, I mean, from where I'm standing at least, the cost issue is also something that you can actually harness in the platform itself. You can say: if you use it like this, you'll never go overboard with your cost.

Speaker 2:

You can meter and cap things eventually and have true technology business management. That would be the direction I would see us heading.

Speaker 1:

Yeah. Well, and then finally: I've heard that you're quite fond of a new technology. That's the next step, and it also has a very important impact on AI, right? It's called Elastic Process Execution.

Speaker 2:

EPEx, yes, as in Elastic Process Execution. This is a new architecture, and I like to talk about it because it doesn't rely on AI; I want people to remember that there's still a lot of innovation happening in the economy that is not explicitly AI. But the demand for EPEx is, in no small part, being created by AI and the Internet of Things and digital twins and the massive increase in data that we're drowning in.

Speaker 1:

So, briefly, what does it do?

Speaker 2:

So the EPEx architecture allows Appian to go from supporting millions of processes a day to billions. It's a complete reimagining of how we manage tasks, task queues and execute processes, in an all-new design that is very lightweight and very efficient. So the processes you're running today could run with a lot less compute, and therefore save energy, save the earth. And as you have to scale up your compute to support all these new AI-generated queries and AI-generated responses and AI-generated workloads, you'll be able to keep up with and stay ahead of that demand without having to hire an army of people. And the goal is to build it on top of your existing architecture.

Speaker 1:

But on top of data, the data fabric combined with Process HQ?

Speaker 2:

Precisely, and then you can have generative AI services. But the important thing here is that customers who already use Appian don't need to have any problem with backwards compatibility. They use the same tools, they're familiar with the same modeling environment, and now EPEx is a switch: you throw it and your process can scale up and down elastically.

Speaker 1:

Okay, well, that should be interesting; I look forward to it. So when are you going to do this? Or has it already been done?

Speaker 2:

It's in beta now. We've been beta testing it with customers all year, and you can contact us to sign up for the beta program yourself. It is incredibly powerful. We were showing off some great feedback from customers who are already working with it and have already seen results of up to 15 times more scalability.

Speaker 1:

I look forward to seeing it in action someday. And you ended with a commercial pitch again, so maybe people will call you. You never know.

Speaker 2:

Sorry about that.

Speaker 1:

No, it's fine. So thank you for the nice conversation. I think the listener is much wiser now about the AI economy, hopefully, if he or she has listened well. So thanks again.

Speaker 2:

Yeah, thanks for coming to Appian World Center. Look forward to seeing you again soon.

Speaker 1:

Yeah, looking forward to next time.

Chapter Markers:
AI Economy and Data Fabric Implementation
Advancing Private AI With AWS Collaboration