Techzine Talks on Tour

How to balance cybersecurity and innovation at an acceptable risk

Coen or Sander Season 2 Episode 14

The security landscape is transforming rapidly as AI becomes embedded throughout enterprise technology stacks. Organizations (and security vendors) need to fundamentally rethink how they approach security governance and risk management. That's one of the pieces of advice Jonathan Trull, CISO at Qualys, shared during the conversation we had at RSAC Conference 2025. Listen to this new episode of Techzine Talks to learn more about what he had to say.

Trull draws from his experience overseeing both corporate security and product security engineering when he highlights the gap between AI implementation and security considerations. While organizations race to adopt generative AI tools, few have developed comprehensive frameworks for securing them properly.

According to Trull, many conversations tend to focus on one thing when it comes to securing AI. "Everyone tends to focus on how to prevent sensitive data going into SaaS, AI-enabled products," he says. "But what about when you're building your own LLM models? When do you do data masking? How do you incorporate security in the engineering lifecycle?" These architectural security questions deserve answers too.

At the end of the day, cybersecurity is about striking a balance: a balance between innovation and security, weighed against the risks an organization is willing to take. This conversation gives you some good insights into how to tackle this. Tune in now!

Speaker 1:

Welcome to this new episode of Techzine Talks on Tour. I'm at the RSAC Conference (you have to add the C nowadays) and I'm here with Jonathan Trull. You're the CISO at Qualys, if I'm not mistaken. Welcome to the show. (Thanks for having me.) So what is it? Well, I know what a CISO does, but for the people listening: what does a CISO at a security company do? What is your job description in general?

Speaker 2:

Yeah, sure. Every CISO role is different, right? I'm responsible for corporate security and product security, so obviously making sure that our product meets all of the highest standards of security, privacy, and compliance that our customers would expect. And then, something that's a little bit unique: I also own the engineering component of what you would consider the traditional security components of the product. So identity and access management, logging infrastructure, things that customers would expect natively built in: MFA, secure by design, the components that you would want to have in a product.

Speaker 1:

When you say you own that, do you also get your hands dirty and actually work on the actual products?

Speaker 2:

Yes, that's part of it. We have product managers whose focus is on designing, and then an engineering group that does the build components.

Speaker 1:

That makes sense, because then you actually know what you're talking about, right?

Speaker 2:

I hope so.

Speaker 1:

No, but you're not just an executive who can manage stuff without being in the trenches, basically.

Speaker 2:

Yeah, I tend to be more hands-on, for sure. I mean, everyone approaches their role a little bit differently. And one of the other unique parts is being in a security company: obviously we want to model using our own technology in the best, most mature way possible. So we do a lot of work to make sure that as we release new products, we onboard them, mature them, and operationalize them, so that we're running them ourselves.

Speaker 1:

Do you speak to customers a lot as well?

Speaker 2:

Yeah, I talk to customers quite a bit, especially peers, CISO peers, and I do that as part of running our strategic advisory board. That's an additional job that I get: groups of customer CISOs that we talk to. What are their problems? What are they struggling to solve? What does that mean from a Qualys perspective?

Speaker 1:

So that's another part of the job. Looks like you have enough on your hands to fill your days and weeks, right?

Speaker 2:

Yeah, keeps me busy.

Speaker 1:

So, just walking around here: I don't know if you've had the chance to actually walk around the expo floor, talk to people, and get the general gist of what we're all doing here. But what strikes you most at the conference?

Speaker 2:

Yeah, I think, obviously, AI plays a big role.

Speaker 1:

I think last year it was already pretty prominent, and it's even more prominent this year, but it's completely different, right? I was here last year as well, and then it was a lot about using AI in tooling to actually protect against attackers. Now it has shifted a lot to agentic AI and to the other side: using AI in the applications in your environments, right?

Speaker 2:

Yeah, very much so. Agentic AI embedded within all of your applications. I feel like there's a little bit more focus, too, on trying to understand how to help CISOs and security teams control the use of AI, use it safely within a business enterprise. Do you think there is enough?

Speaker 1:

I think there is, but do you think there's enough focus on running AI securely inside organizations? Because there's also this trend of adopting new technology as quickly as possible without really thinking about the ramifications on the security side. How do you see that in the market in general?

Speaker 2:

Yeah, I definitely don't think there's enough focus on that. Everyone tends to focus, I would say, more on how do I prevent sensitive data going into SaaS, AI-enabled SaaS products, or on a DLP component. But the reality is, and I don't think RSAC as a conference does a great job of this, in my opinion anyway: what about when you're actually building your own LLM models and considering how to use them? When do you do data masking to train the model? How do you incorporate all of that in the engineering lifecycle?
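To make the data-masking point concrete, here is a minimal sketch of redacting sensitive values before records are admitted to a training corpus. The regex patterns and the approach are illustrative assumptions, not Qualys tooling; a real pipeline would use a vetted PII-detection library and cover far more categories.

```python
import re

# Illustrative patterns only; production pipelines cover far more categories.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def mask_record(text: str) -> str:
    """Replace sensitive values with type tokens before training."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(mask_record("Contact jane.doe@example.com, card 4111 1111 1111 1111"))
# -> Contact [EMAIL], card [CARD]
```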

Speaker 2:

Unfortunately, when I looked at the sessions that are out there, and when I look at the vendors, that part is missing, because a lot of the work is around how you design it. If you think about a SaaS model, and this is part of my role in product security, since obviously we're starting to use AI: do you ship the data outside of a secure enclave into the model? Do you push the model into the secure enclave?

Speaker 1:

There are a lot of decisions that have to be made, and you're adding a lot of new fundamentals to the entire equation, right? And I always hear from the security industry, and from customers as well, that getting the fundamentals right has always been a bit of a struggle, to put it mildly. Basic hygiene is still not there, and now we're adding more basic hygiene that we have to get right on top of it. That adds a lot of complexity, and I think there's not enough focus on that, at least from my perspective.

Speaker 2:

I 100% agree. In fact, I was talking to some colleagues: do we almost need a new conference, to a certain extent, one that is really focused just on how we build and engineer security in, and on collaboration? Because we talk either at a really, really high level, okay, is AI safe or not, and I'm not interested in that conversation. What I'm interested in is: how do I make it safe in my product, and how do I make it safe for my company?

Speaker 1:

But then that's not only the responsibility of security companies, right? It's also the responsibility of the companies that come up with new technology. For example, if you look at MCP, which is all the rage now: everybody has MCP servers, everybody does all that. But MCP as a protocol isn't inherently secure, right? Isn't it possible to make a protocol that's secure from the get-go? Why do we have to continuously add security to things that have only recently been built? I just don't understand that.

Speaker 2:

Yeah, I mean, there have been attempts to discourage companies from releasing protocols, or whatever it may be, that are insecure. But at the end of the day, the incentive to move fast and be feature-rich, versus the disincentive of maybe designing it the right way, just isn't there. And listen, that's just my opinion at the macro level; I don't want to get into politics. In the past there were some executive orders where the previous administration at least thought: hey, what are we going to do, how do we incentivize building secure by design? But ideally, should the market do that? Does a free-market economy create that incentive?

Speaker 1:

That's sort of the open-source approach, right? If the community wants to make it secure, it will be secure. But at the end of the day, if you look at something like MCP, which is an Anthropic kind of thing: they think, look, we just build the basics. The people that actually ship it in their solutions need to make sure it's secure. They're responsible for that.

Speaker 1:

And I get that. But why not just go the extra mile and make it work securely from the get-go, right?

Speaker 2:

Oh yeah, I mean, that's been the consistent story since the founding of the internet, right? The first protocols were: well, okay, this is really cool, we can communicate across the world. Hey, what about security? Oh, then you layer a new layer onto the network stack and add the security component. Unfortunately, that just seems to be the trend: we tend to release these things and then say, hey, either security companies or the people using the protocol to build on it have to add the layered security.
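As a concrete picture of that bolt-on layering: protocols like MCP leave much of the authentication story to the implementer, so teams end up wrapping their own checks around tool endpoints. A minimal sketch; the token store and the dispatch stub are hypothetical.

```python
import hmac

# Hypothetical token store; in practice this lives in a secrets manager.
VALID_TOKENS = {"build-agent": "s3cr3t-token-1", "ci-bot": "s3cr3t-token-2"}

def authorized(headers: dict) -> bool:
    """Accept only requests carrying a known bearer token."""
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return False
    presented = auth[len("Bearer "):]
    # Constant-time comparison avoids timing side channels.
    return any(hmac.compare_digest(presented, t) for t in VALID_TOKENS.values())

def handle_tool_call(headers: dict, payload: dict) -> dict:
    """Security layer wrapped around an otherwise-open tool endpoint."""
    if not authorized(headers):
        return {"error": "unauthorized"}  # fail closed
    return {"result": f"ran tool on {payload}"}  # stand-in for real dispatch
```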

Speaker 2:

And that is tough on a CISO, right? Because if you think about all the complexity in that, then you have to ask: is that protocol being used? Who built on top of it? Did they secure it? If they didn't, now what do I do? And it gets into third-party risk management too. Again, all this AI that's embedded in whatever, Gong or Salesforce or whatever ERP: it's great, but that's a third party that built it, and you're trying to validate: did they secure it? Did they build it properly?

Speaker 1:

And the lack of visibility into all of this is also quite astounding.

Speaker 2:

It gets much more difficult, yeah. And even on that topic, there's been work. We've had SOC 2 reports for a long time, but they don't necessarily get to the technical depth. Then we looked at software bills of materials that you could take from someone and really get deeper into the tech stack. But there are definitely some flaws there, and it didn't take off, right? I mean, it was either last RSAC or the RSAC before that where the software bill of materials was the big thing. It was like: oh man, it's going to revolutionize how we do third-party risk management.

Speaker 1:

That's how these things go, right? SBOMs were the bomb for a couple of years, and now... And it's also a bit of a US kind of thing, right?

Speaker 2:

It was US-led initially, I think, but it was huge in the open-source world as well.

Speaker 1:

I went to a couple of KubeCons and it was huge there as well. Everybody was talking about SBOMs.

Speaker 2:

It was the next big thing, it was going to revolutionize third-party risk management, and now you don't hear about it a whole lot, even though it is very important.
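For readers who have never handled one: an SBOM is just a machine-readable inventory of everything a piece of software ships with, which makes third-party checks like the ones discussed here scriptable. A minimal sketch of consuming a CycloneDX-style JSON SBOM; the advisory list and file name are made up.

```python
import json

# Hypothetical advisory feed: package name -> versions known to be vulnerable.
KNOWN_BAD = {"log4j-core": {"2.14.1", "2.15.0"}}

def risky_components(sbom_path: str) -> list[str]:
    """Scan a CycloneDX-style JSON SBOM for components on the bad list."""
    with open(sbom_path) as f:
        sbom = json.load(f)
    return [
        f"{c.get('name')}@{c.get('version')}"
        for c in sbom.get("components", [])
        if c.get("version") in KNOWN_BAD.get(c.get("name"), set())
    ]

print(risky_components("vendor-sbom.json"))  # e.g. ['log4j-core@2.14.1']
```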

Speaker 1:

I mean, in Europe, where I'm from, you have all the new regulations: NIS2, DORA, all that stuff. Especially DORA has lots of third-party implications as well. You need to make sure the third parties you use are also compliant with DORA, so I would imagine that something like this would be very important.

Speaker 2:

Yeah, I mean, I as a practitioner think it's very important.

Speaker 2:

And we as Qualys still think it's important, especially when you look at where we're going as a technology company, which aligns very much with my philosophy. We were vulnerability-management focused, and vulnerability management is super important, but it's one risk component, right? There are a lot of other risks: misconfigurations, users that do stupid things.

Speaker 2:

That's a risk, right? There are all these other components, and it doesn't make sense to look at them in isolation, because attackers don't look at them in isolation. They think: if I can get to this identity, get the password, they didn't enforce MFA here, I can get in. They think in a very network-connected way. So both my view and ours as a company is: let's expand how we think about a vulnerability, an exposure, and start really looking at these pathways. Whether it's flaws in AI, flaws in a traditional API, or flaws in a cloud instance, it doesn't matter; it's still a risk that has to be addressed.

Speaker 1:

But don't you think, to a certain extent, that looking at vulnerabilities alone, or attack paths or whatever you want to call them, is at the end of the day sort of a fool's game? There are so many that you lose sight of which ones are important and which ones aren't.

Speaker 2:

Yeah, it's a great point. One of the things we're really focused on is that we as an industry haven't done a good job prioritizing what we focus on. And listen, I've done the same thing. It used to be just an SLA game, right? We would say: if it's this level of vulnerability, they all have to be patched at this cadence. At a certain point that's impossible. So we're providing a lot more mechanisms to say: well, that's not true. It's not being exploited, there isn't even a known exploit kit for that vulnerability, and there's no activity that we see from a threat perspective. So why are you worried about it? Why are we trying to deal with that right now?

Speaker 1:

I actually had a chat with somebody this week, I don't know whose customer it was, and he said: look, if you work in a company and you see a vulnerability that has been ranked, with whatever CVSS score or whatever ranking you use, you cannot unsee it, right? As soon as you see it, you want to do something about it. And the second point: you don't know. It may not be critical today, but it may very well become critical in your company, so you can't just forget it. And as a security practitioner, you want to solve it; that's your job. So I think it's inherently very hard for security teams, and for the people who decide which ones to go after and which ones not, to say: look, I'm just going to ignore this.

Speaker 2:

Yeah. And the way we do it, we continuously track it and we continuously update the score. That's something else we've recently done, because you're right. Listen, it's been proven that it can be exploited, there's a kit out, we see threat indicators that tell us actors are using it: the score goes up. Listen, your priority has changed.
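A toy version of that idea: the static CVSS base is only the starting point, and the priority moves as exploit evidence arrives. The fields and weights below are invented for illustration; real scoring models are far richer.

```python
from dataclasses import dataclass

@dataclass
class ThreatContext:
    cvss_base: float              # static severity, 0-10
    exploit_kit_available: bool   # a public exploit kit exists
    actively_exploited: bool      # threat intel sees real attacks
    internet_exposed: bool        # asset reachable from outside

def priority_score(ctx: ThreatContext) -> float:
    """Re-rank a vulnerability as threat evidence changes."""
    score = ctx.cvss_base
    if ctx.exploit_kit_available:
        score += 2.0
    if ctx.actively_exploited:
        score += 3.0
    if ctx.internet_exposed:
        score *= 1.2
    return min(score, 10.0)

quiet = ThreatContext(7.5, False, False, False)
loud = ThreatContext(7.5, True, True, True)
print(priority_score(quiet), priority_score(loud))  # 7.5 10.0
```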

Speaker 2:

I think the thing we've seen that's different: on the reactive side, we've always worked like that. In a SOC or an IR team, we've always said: okay, we have an incident, let's get on it, fix it. How important is it? Okay, escalate it, or de-escalate, based on what we know. I think we haven't always done that well on the proactive side. There it's always just been: let's treat it all the same, let's try to get to it, or not. And security people do have, and I totally get why, that mindset sometimes of: I'm going to eliminate all risk, my job is to eliminate all risk. And that is impossible. I used to have that mindset myself.

Speaker 2:

It's a good mindset to a certain extent, but it would drive me crazy; I'd be so stressed out, because I'd think: my job as CISO is to eliminate all risk. But when I looked at my other C-level peers, even though they have other risks they're dealing with, like financial misreporting or lack of control in procurement, they had a much more rational way of viewing it. They'd say: well, listen, there's some risk, we'll just tolerate it because it's not material to us. Whereas in the security world, we've always been: oh my gosh, I'm holding the fort, I'm the person on the castle wall.

Speaker 2:

If one stone gets through, then I'm not doing my job. And I think that mindset contributes to a lot of the burnout in the industry.

Speaker 2:

You're fighting a losing battle anyway; you'll never win if you're trying to remove all risk. And that gets us back to AI too, right? Maybe I have too much of a rational perspective at times, but I have heard some security people say: I'm going to block all use of AI in my environment. And I'm like: have you talked to your CIO? Have you talked to your CEO? Because they're finding some pretty significant business value; there's a reason it's so popular. Even for us, when I look at engineering capacity: is it bad to have AI write code for you, yes or no? I think it depends on the quality of the code.

Speaker 2:

It depends on the quality of the code, and there are bugs in it, right? But you're doing functional testing, and we still see tremendous productivity gains. Tremendous.

Speaker 1:

You still have to have security controls, though, right?

Speaker 2:

Yes. We still have to test the code, we still have to look for bugs in it. It's not fully autonomous; we're not going to take that code and just push it.

Speaker 1:

No, but I think it's the same with everything, right? The truth is somewhere in the middle. We maintain and build our own websites and all that stuff for our publication, and for something like a simple form, you're not going to type out or code the entire form by hand every time. Why would you do that?

Speaker 2:

Yeah, yeah, exactly.

Speaker 1:

It's the same form all over the place, so why would you?

Speaker 2:

There's nothing sensitive about it.

Speaker 1:

Right. It makes no sense not to use AI for that. The trick is to know when to use it, where to use it, and how to use it securely, and that's the big issue at the moment.

Speaker 2:

And then you get into, listen, we have a lot of focus on what I would call governance of AI. What control do I have over it?

Speaker 1:

I think that's a very hard question to answer, especially with the moving target that AI is at the moment. It's very hard to definitively say: this is my governance, I'm going to do it this way, because you never know what it's going to look like in a couple of months' time. You need to be constantly moving along with that target as well, which makes it quite complicated.

Speaker 2:

Yeah, it does make it challenging, because it is evolving so fast. And with AI, you really have to know what the use case is; most companies, and even products, have to really know what they're trying to solve. What we're trying to solve with the product we've developed is very much: if you're developing internally with your own LLMs and AI, we need to help you develop it securely. But then you still have these other components: do I allow users to use AI if we find a lot of value in it? Should we adopt an enterprise generative AI solution that gives us more control, where we can block proprietary data that we don't want going in? Are they training their model on it? And what legal restrictions can you put in place?

Speaker 1:

You do need some central control over the adoption and the use of AI inside your organization, right? That's a no-brainer. But who does the controlling? Are you going to get AI guardians, or whatever you want to call them? Is it the security team, or somebody else? Whose responsibility is it?

Speaker 2:

Yeah, obviously it depends on the organization, but I would say the security governance and the privacy governance should fall on the security team. Listen, your job is to digitally protect data, and there's data being used in these AI systems. And what I found internally, too, is that if I didn't take that initiative, I'm not sure anyone else would. The CIO is like: well, listen, I'm figuring out how we can really use it and maximize productivity. That's their primary focus. I'm really the only role asking: okay, but how do we protect that data? How do we retain it? Can we delete it if we need to? I'm the one responding: how do I do DORA compliance if we start to use this? What are the third-party regulations we have to map into?

Speaker 1:

But then thinking about stuff like that centrally, in one coherent platform or whatever you want to call it, as one big thing, makes a lot of sense, rather than being too fragmented in how you secure or govern AI in your organization, right?

Speaker 2:

Yeah. And listen, young workers: I've got two girls in college, so I see what their friends do. AI is just going to be so expected in how they work. We need to start governing this, because as this workforce comes in, it's just natural to them. It's embedded in Instagram, it's embedded in the apps they're using. Actually, Instagram dates me, right?

Speaker 1:

They're not really on that now; it's Snapchat or whatever these days. Yeah, exactly.

Speaker 2:

But when I see them: if they want to send a note to their friends, they take a picture of it and say, hey, AI, make this look pretty, add this, and send it to so-and-so. They don't even type anymore, right? So they're expecting to work like that. And we're seeing it: college graduates coming into the workforce are asking, hey, where's my generative AI to help me do my job? So we have to govern it, and we have to do it in a way that lets them start to use it.

Speaker 1:

So, you mentioned the point: how do you do that inside Qualys? How do you enforce it, or maybe that's the wrong word, but how do you roll it out?

Speaker 2:

Yeah, it was actually quite the journey. I would say we first started just by asking: are people really using it? We looked at network traffic, who's using what, and you see: okay, people are using a lot of generative AI, so they're obviously finding uses for it. Then the next step: every SaaS product, and we're a heavy consumer of SaaS products, right, we don't want to build our own stuff, we want to consume, if you have an ERP, great, we'll use it, suddenly they're all saying: hey, flip this switch and you can turn on the AI agent that helps you do whatever. So suddenly it's exploding in SaaS. So we got our arms around where it is showing up, and then we crafted a policy, with legal counsel and the CIO, and we said: okay, here's...
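The first step Trull describes, spotting generative-AI use in network traffic, is easy to picture in code. A minimal sketch over a proxy log export; the domain list, file name, and column names are assumptions for illustration.

```python
import csv
from collections import Counter

# Illustrative domain list; a real deployment would use a maintained
# category feed from the proxy vendor.
GENAI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com"}

def genai_usage(log_path: str) -> Counter:
    """Count requests per user to known generative-AI services, reading a
    proxy log exported as CSV with 'user' and 'host' columns."""
    usage = Counter()
    with open(log_path) as f:
        for row in csv.DictReader(f):
            if row["host"] in GENAI_DOMAINS:
                usage[row["user"]] += 1
    return usage

for user, hits in genai_usage("proxy_export.csv").most_common(10):
    print(user, hits)
```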

Speaker 1:

Do you see fundamental differences between the types of AI that you use, for example AI embedded in SaaS, a big LLM that you address directly, or maybe something on-prem? Do you see fundamental differences in how you should treat these various types of AI use?

Speaker 2:

Not necessarily. I think with all of them, at least as a commercial enterprise, you want contractual control, usually. I want the ability at the enterprise level to say: I'm going to enforce my security terms. What I don't want is people going off and buying their own license for ChatGPT and using that, because again, I have no control over it. You bought the license, your relationship is with them, not with Qualys. So I need an enterprise-level agreement. And the good news is, with the SaaS companies you usually have that: some existing enterprise agreement, and as they add AI, you can start to embed it. Now, I will say, the level of control that vendors give you varies widely.

Speaker 2:

I'll give you examples. There are cases where we'll say: this is a great use case, but I want to be able to filter so that no customer data could ever wind up in your AI model. Do I have the ability to do the filtering? 'No, we haven't really thought about that; that hasn't been important to us.' But then we have others that have released and said: oh, absolutely, we've got complete filtering, you can write regex rules, it's all built in. So you talk about secure by design: it's showing up in this AI world now, right?
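The regex-rule filtering Trull mentions boils down to a deny-list gate in front of the AI feature. A minimal sketch; both patterns are illustrative, and the customer-ID format is made up.

```python
import re

# Deny-list of patterns that must never reach an AI-enabled SaaS feature.
DENY_PATTERNS = [
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # card-like numbers
    re.compile(r"\bCUST-\d{6}\b"),          # hypothetical internal customer IDs
]

def allowed_outbound(text: str) -> bool:
    """Gate a payload before it is forwarded to the vendor's AI feature."""
    return not any(p.search(text) for p in DENY_PATTERNS)

prompt = "Summarize the escalation history for CUST-104233"
print("send" if allowed_outbound(prompt) else "blocked by DLP rule")  # blocked
```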

Speaker 1:

And it's also about the mindset of the company that's trying to sell you the SaaS platform. If you look at, for example, the big CRM company, Salesforce, they're focusing so heavily on AI that they're calling themselves Agentforce at the moment. It makes sense for them to push as much AI towards the customer as possible, and maybe to say: look, you can't have control over this, because then this feature won't work anymore, or whatever. I'm not sure that's actually the case, but that can be the mindset of companies trying to sell you a SaaS solution, right?

Speaker 2:

Absolutely, yeah. I think earlier we were talking about what the incentive is to do things securely. And listen, I think there are some big things that we as a security community can always help influence. One of those is truly secure by design. And if you look at the JPMC CISO, he recently wrote an open letter about putting a lot more...

Speaker 1:

Yeah, the JPMorgan Chase CISO, the open letter he sent about SaaS.

Speaker 2:

Exactly. And while AI wasn't necessarily the topic of that, it is the same thing, right? The more of this gets delivered through SaaS, the more we as a community of security people have to say: here's our expectation of what would be in a feature we would use, what kind of control we need. If we don't do that, they're going to deliver things that are inadequate, and then we're lopping on products. Then I'm looking for something on the floor here that theoretically plugs into all of that and does some type of DLP thing. But do I want to do that? Of course not. I just want you to include that in your AI component; I want you to make that available to me.

Speaker 2:

And it is making third-party risk management a more important area too, I would say, because now we're having to do a lot more due diligence: okay, you're adding this new AI thing, let's talk about how you've architected it. Let's talk about the data-retention components around it. How do I know, if there's an audit, that I have sufficient logs to determine what the AI did versus what a human did? So now we're working through, again, what our risk is in this scenario.

Speaker 1:

And those are very interesting questions and good questions to ask as a customer.

Speaker 2:

Yeah.

Speaker 1:

From the vendor, from the supplier, right? Absolutely. And I think there's also a lack of knowing what to ask on the customer side. That's always one of my questions: what should customers ask, not only about AI but about other things as well? What should the questions be that they ask their third-party suppliers? It should go beyond 'are you secure?', because you're not going to get a good answer to that question. What are some of the questions customers could ask, focusing on AI, of their SaaS vendors or other vendors that put AI in their hands?

Speaker 2:

Yeah, I think some are truly table stakes: do you use my data to train your model, yes or no? That is super important, because at that point your data could potentially leak out of that model, and, listen, they're using your IP to enrich their IP, right? So there's even a legal, monetization angle: how are you doing that? And then you get into really looking at the architecture of the AI, and this gets into asking questions about their SaaS model. Because, listen: is all the customer data sitting in one database, and do you use code or encryption to segment tenants, or do you create VPCs?

Speaker 2:

And the reason that's important is: where does that model sit? Where does that LLM run? Are you sending data somewhere else? If you're an EU customer and you want your data to stay in the EU: well, what if that LLM is running in an AWS tenant in the US?

Speaker 2:

Well, now the data just left, right? So I think you really have to understand the pathway of the data, so that you're meeting your compliance requirements. And then it's important to understand what controls you have. Again, if there's certain sensitive data that you would never want to go into an LLM, especially a third-party one, can you actually control that? Whether it's credit card numbers or bank routing numbers. For us, there's certain customer data where we just say: this is too sensitive. We would never have it in unencrypted form, number one, and we would never ship it anywhere else. I've got to have a good sense that that data can never get into a model where I never want it to be ingested or consumed.
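The data-pathway question can also be enforced in code: route by residency and fail closed when no compliant endpoint exists. A minimal sketch; the endpoints, regions, and policy are all hypothetical.

```python
# Hypothetical routing policy: EU-resident records may only be processed
# by an in-region model endpoint.
MODEL_ENDPOINTS = {
    "eu-central-1": "https://llm.example.eu/v1/generate",
    "us-east-1": "https://llm.example.com/v1/generate",
}
EU_REGIONS = {"eu-central-1", "eu-west-1"}

def endpoint_for(residency: str) -> str:
    """Pick a model endpoint; refuse to send EU data out of region."""
    if residency == "EU":
        for region in EU_REGIONS:
            if region in MODEL_ENDPOINTS:
                return MODEL_ENDPOINTS[region]
        raise RuntimeError("no in-region endpoint; refusing to send EU data")
    return MODEL_ENDPOINTS["us-east-1"]

print(endpoint_for("EU"))  # https://llm.example.eu/v1/generate
```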

Speaker 1:

Maybe naively, I kind of hope that that's already the case: that you can actually indicate that some data should never leave.

Speaker 2:

Should be.

Speaker 1:

Maybe I'm a bit too optimistic about the world, but it is a very important thing to get right.

Speaker 2:

Well, data... The longer I've been in this business: there's so much wealth in data, and the more data knowledge you have in this world, in our day and age, the more powerful that is. There's a natural tendency for people to want to add more power, more processing, to that. And that makes perfect sense, because at the end of the day, AI is not about models.

Speaker 1:

It's about the data that you use, right?

Speaker 2:

That's right. Can it do better reasoning than I can? I used to be a skeptic, but I do actually believe we're getting to an inflection point. Maybe it's because I'm getting older, and mental capacity and jet lag play a role, but these things aren't tired.

Speaker 1:

They don't sleep. I think it depends on what you expect as an output, right? If you expect fresh new perspectives on things, I think you're going to be disappointed quite often. But if you expect nice, good summaries or nice insights into some data, I think we're quite a long way down the road to getting there.

Speaker 2:

Agreed. It's those use cases where you're saying: listen, here's a weighty, technically deep topic. Go look at all of these articles, consume it all, give me bullet points so I can understand it as an executive. It's really good at that, absolutely. Is it new content? Not necessarily.

Speaker 1:

One of the things I always tell people, if they want to know whether something is good at this type of thing: ask it to summarize a very long white paper or whatever. Our concept of summarizing is that you get the highlights, right? But for most models, summarizing is shortening. So sometimes you may even miss a key component of the entire report, but it looks good, right? You still don't know, so you still have to read the entire thing anyway.

Speaker 2:

Yeah, it definitely feels like it gets it about 80% right on some of that stuff. And you're right: if there's a nugget in that 20%, you can miss something. You can look like a fool if you're quoting it.

Speaker 2:

Yeah, there are some stories out there about improper citations. Well, like I tell my security team, or whoever, if we're using it in some capacity: at the end of the day, you're the human responsible for this activity, right?

Speaker 1:

I think we're already over 30 minutes in. Wow, time flies. Well, thank you, I think it was an interesting chat. I hope you got to say what you wanted to say, and if not, well, I'm sorry about that.

Speaker 2:

Yeah, it was a good conversation. Thanks for joining.

Speaker 1:

I hope the listeners found it as insightful as I found it.

Speaker 2:

Yeah.