Techzine Talks on Tour

Humans aren't the weakest link, they're a critical security layer

Coen or Sander Season 2 Episode 9

Security awareness is a thing of the past. Rather, the term and what it stands for are things of the past. It simply doesn't cover the actual challenge organizations need to address. We should be talking about Human Risk instead. At least, that's what Perry Carpenter, Chief Human Risk Management Strategist at KnowBe4, thinks. We were happy to oblige, learn more about it, and ask him all kinds of questions about it too.

When Carpenter says "awareness doesn't change behavior," he's not being controversial. He's sharing an insight that has reshaped how security professionals approach human risk. We meet him at the RSAC Conference in San Francisco, where we discuss why we must move beyond traditional security awareness toward a comprehensive approach to human behavior.

Drawing from over a decade of experience, Carpenter challenges the notion that humans are the "weakest link" in security. Instead, he positions people as "a critical layer within the security stack". It's a layer that deserves proper support rather than blame when things go wrong. When a phishing attack succeeds, it means multiple technical controls have already failed, from secure email gateways to endpoint protection. The problem isn't human weakness but insufficient defense-in-depth.

Impact of AI on Human Risk

This being 2025, we talk about AI too of course. We examine how AI is transforming the threat landscape. While deepfakes and AI-generated content grab headlines, Carpenter emphasizes that we should continue to focus on the underlying psychology: "Every deceptive attack is a narrative attack" designed to manipulate emotions and beliefs. The difference now is scale and personalization. Attackers can create "individualized cognitive malware" targeting specific psychological vulnerabilities of each victim.

Looking toward the future, we discuss with Carpenter how agentic AI will transform both offense and defense in cybersecurity. These AI systems, functioning like team members with specialized capabilities, promise to deliver just-in-time training and personalized security guidance. However, they also introduce new risks. That is, AI agents themselves can be manipulated, compromised, or turned into insider threats. Organizations will need thoughtful control mechanisms and safety valves to navigate this new frontier.

This discussion is not just a warning for security leaders, or a vision of a very bleak future. It also aims to provide them with a roadmap and some food for thought to avert such a bleak future. By combining technical controls with behavioral science, organizations can build more resilient security cultures that recognize human complexity without sacrificing protection. Listen now for insights into how your security program must evolve to address the full spectrum of human risk, including when it comes to AI.

Speaker 1:

Welcome to Techzine Talks On Tour. I'm at the RSAC Conference in San Francisco and I'm here with Perry Carpenter. Welcome to the show, Perry. You're Chief Human Risk Management Strategist. I am.

Speaker 2:

It's very important. You trust me to call you that, right? Yes, that's very important.

Speaker 1:

What does it actually mean?

Speaker 2:

Oh okay. So yeah, I mean it does sound very pie-in-the-sky, and some of it is, right? Because human risk management, I think, is a concept that the market KnowBe4 is in, which is the company that I work for, has been building toward for years. It's the natural evolution of the security awareness market, and it recognizes at its core that this is a human behavior type of discipline that we're thinking about. It's not about the number of eyeballs that can be put on screens or the number of butts that can be put in seats.

Speaker 2:

It's about actually driving behavioral change, which should be focused on reducing risk in an organization.

Speaker 1:

How is that going in general in the past, let's say, year? I spoke to your former CEO, Stu, last year at the RSA Conference. What's happened since?

Speaker 2:

Well, like any vendor, and anybody that's in this kind of evangelist and media relations role that I'm in, there's a lot of education that has to happen for potential buyers. Because a CISO will look at a report and say, oh my God, 92% of data breaches trace back to some kind of human error, and they've been conditioned over decades to go to Google, and the phrase they want to type in is security awareness.

Speaker 2:

But we've known for a long time, of course, that awareness doesn't change behavior. I'm very aware of speed limit signs; I may or may not decide to follow them. I can even be very aware and intend to do something. I know the kinds of things I should eat and not eat, and I may make a New Year's resolution to say I'm going to work out X amount of days a week, and I can write that on a list and really intend to do it. But in the moment I make all these little behavioral trade-off decisions. So awareness has never been the answer.

Speaker 1:

So we actually recorded a podcast a long time ago with a title along the lines of Security Awareness Training is Nonsense, or maybe with a slightly stronger word that we're not going to repeat here. That was more to trigger people, because we saw lots of it around, and awareness alone is not going to save us, right?

Speaker 2:

Well, and I even had a presentation, God, I think it's back in 2012-ish, so 13 years ago, called Move from Security Awareness to Security Culture Management, because I was realizing that there's behavior, there's social pressures, there's organizational dynamics. It's way more than just education. I also had a quote, back from 2015 I think, that said the future of security awareness is real-time, learner-aware, context-dependent, kind of moving the needle towards what we think about when we think about an agentic AI human risk management platform. And now we're finally seeing that being realized in the way that the technology is entering the market.

Speaker 1:

I'm going to come back to that later. But first: I understand why you call it human risk, right? But there are also voices in the industry that say, look, we should move away from seeing the human as the weakest link in the security chain. This kind of reinforces that image of human risk, doesn't it?

Speaker 2:

Yeah. So yes and no. I can see why somebody would make that argument. At the same time, we have to realize that, what I say is, humans are a critical layer within a security stack. I've never said humans are the weakest link; I know some people in this market have. Humans are a critical layer, and what that recognizes is that humans are always going to be part of the security stack. And I would say, if a human clicking on a link is the end of a company's security, then the company has not done a good job creating a security stack. Because by the time the human is in the position to click the link, the secure email gateway has failed. If they click the link and it causes damage, endpoint protection platforms have failed, application sandboxing platforms have failed, DLP has failed. There are tons of other things. So the human is not the weakest link unless all those other things are also the weakest link.

Speaker 2:

The other problem is, if a human clicking on something results in that human being penalized or seen as stupid or something else, that's absolutely wrong. Or, yeah, let me say one more thing to finish the thought: if a human clicking on a link means that somebody in the organization will say awareness failed, that's also wrong, because they're not questioning the value of their firewall, endpoint protection, secure email gateway or anything else. They're only questioning the value of that one email, which is stupid.

Speaker 1:

In a different way, it's the same thing as relying 100% on the security of your backups, or, as I always say, the security of your printers. If people can get there, then something else hasn't gone very well either.

Speaker 2:

Right, yeah. So human risk management recognizes the value of a human within the security stack, and it also recognizes the fact that every security program exists to reduce risk for an organization to an acceptable level, whatever the risk tolerance is. So this is understanding where the human sits in that, and then being able to vector around that using a set of technologies and practices.

Speaker 1:

Let's try and break down the different kinds of human risk, right? Because when you talk about humans being fooled by something, there are different kinds of things. You mentioned the phishing link, but there are deepfakes, and there are many other kinds of things that may trick a user into doing something he or she shouldn't have. So what are the main components of human risk?

Speaker 2:

Yeah, I mean, so what I would say is that the main component that I think about when it comes to human risk is just the behavior: if a human doing something or not doing something impacts the risk or security of an organization, that is what we have to talk about, figuring out the intersection between the human and the technology. And there are tons of ways that that could happen. It could be as simple as dealing with a phishing email or a deepfake, or it could be whether that person decides to put their document in the shredder or the trash can. It's a full spectrum, and we have to think about it the full way around as well.

Speaker 1:

But then you're not really looking at the technology that causes the risk. I would imagine, looking at it from a relative outsider's perspective, that something like deepfakes is much easier to fall for. I'm just making up examples here, right? Because you're focusing solely on the human part, you're not really looking at the reason why. Is that on purpose?

Speaker 2:

The way I think about it is, the technology behind the attack, or the technology behind the deception, is going to change and evolve constantly. Years ago, we were thinking about phishing as kind of the primary attack vector, and now we've got a lot of SMS-based phishing, we've got voice-based phishing, and now we have deepfakes that are taking the place of the humans in that. For me, the AI-ness is really, really interesting, and it does change the game in some ways, don't get me wrong. But I've gotten to the point where I don't care if something's real or fake. What I care about is: is it deceptive in some way? Is it trying to tell a story or get somebody to do or believe something? Because if I only focus on one, then I'm going to miss the other. If I only focus on deepfakes, I might miss the human-based type of attack.

Speaker 1:

I also wanted to point that out. So it's expanded into stuff like disinformation and other types of deception, which I don't think a lot of people would consider to be in the same realm as phishing links and deepfakes.

Speaker 2:

Yeah, I just wrote a whole book about that recently. It's called Fake. It's spelled F-A-I-K because it's fake and there's AI in the middle.

Speaker 1:

You know, trying to be clever. It's only so clever if you have to explain it. Well, you didn't give me a chance to understand it.

Speaker 2:

Well, I did explain it for you. But what I come back to in that, over and over and over, is that the AI-ness is really cool and there's lots of interesting things that it opens up. But at the core of it, the thing that it opens up most is just the ability for the attacker to work more at scale and more personally. And so, if I can much more easily become your CEO, then, yeah.

Speaker 2:

That's a piece of technology that allows a piece of synthetic media to be created and then lobbed over where the human has to deal with it. The human, at the end of the day, is still dealing with the same set of emotions and cognitive levers that they've always dealt with. And for them, what it comes back to is that the training piece of this, or the cognitive defense piece of this, looks a lot like traditional phishing training. It is realizing: if there's an emotional backlash to the thing that I'm looking at, you know, if it makes me really want to engage with it, like get angry with it or get afraid, or feel rushed or something like that, well then at that point I need to stop and take a breath and go: why is this? What story is this piece of media, or this thing that landed in front of me, trying to tell, and what does it want me to do or believe?

Speaker 1:

Don't you think that there are more subtle ways, with AI-ness or whatever? You can use AI to do this in a much more subtle way than the Nigerian prince trying to give you money. Don't you think that the thresholds inside a human's head are different for different kinds of deception, and that AI has lowered the threshold in a sense?

Speaker 2:

Absolutely. The way that I think about it is that every deceptive attack is a narrative attack.

Speaker 2:

So everybody has their own worldview, their own system of beliefs, their own things that will set their emotions off, and AI gives attackers, who exist across this entire spectrum of sophistication, a way to exploit that.

Speaker 2:

So a merely curious attacker that wants to try something out can now do stuff that a nation state would have salivated at five years ago. That's kind of where we are. At the same time, we do have to realize that, cognitively, people are kind of in the same space, but that the reason we respond to any kind of deception is because the piece of media that lands in front of us completes a narrative or is helping to tell a story. It's an artifact that fits in and goes: oh, because you've seen this, now you should click this thing, or you should transfer money, or, in the case of a piece of disinformation, you should vote for this certain person, or you should start a riot, or things like that. So the subtleness of it is that, at scale now, and with the right amount of research, and AI makes this possible too, you can create very weaponizable attacks that are very individual for each person and hit their cognitive levers in the right way. I mean, you're creating individualized cognitive malware with the right kind of AI-based attack.

Speaker 1:

On the one hand it doesn't change all that much fundamentally; you still need to do the same things. But on the other hand it does change a lot as well. You need to be more, well, maybe we shouldn't use the word aware anymore, but you need to be more aware of what's what and when. Because in the example you gave, if you have a certain reaction to something, you realize: why am I doing this? But when something is very subtle, you may not even realize that you have this reaction, right?

Speaker 2:

True, true, especially when it comes to things like disinformation, right? In the corporate context, we typically get hung up on thinking about the email that lands in the inbox, or the deepfake video that comes in asking you to make a wire transfer. That's sometimes a one-time, not-subtle ask of somebody, and it's using urgency, authority, fear or something like that. The other thing is the more persistent, smaller things that are meant to shift somebody's view in a different direction, and on social media, algorithms reinforce that quite a bit. You could weaponize that at the organizational level as well, with one part of your attack slipping in and understanding what somebody's email patterns are, and then starting to build subtle messaging, taking on the persona of somebody, and then subtly shifting their behavior patterns in a certain direction so that they might give away information.

Speaker 1:

If you use the traditional nomenclature in security, those things are more like APTs, advanced persistent threats, right? They move gradually, and maybe they take six months to actually weaponize.

Speaker 2:

Yeah, it's kind of the long con approach, right? It's the subtle building of trust, or understanding the behavior patterns or thought patterns of somebody. But that's extremely hard to...

Speaker 1:

If you look at it from a business perspective, from an organizational perspective, do you see stuff like that happening in there as well?

Speaker 2:

I've not seen it a lot. I mean, if I'm in the cybercrime industry, I'm building towards that right.

Speaker 2:

Because you know the two of us think about security all the time and we've talked about that already.

Speaker 2:

I'm sure crime syndicates are thinking about and doing that right now. The thing is, there are two pieces of this that we can do, or maybe three or more pieces that we can do, as an organization. One is, and as much as some parts of the world don't like it, if you're doing email monitoring, sentiment analysis and things like that, and you're doing behavioral profiling and understanding when a disruption happens, you have a good technical indicator. That could then also alert into something like a human risk management platform and say, based on this, we don't know for sure, but maybe this person needs a reminder about X. And then maybe you also send something to the HR team, and they can redo a background check or something, to see if there's some indicator in that person's profile that means they're more likely to do something. That's also one of the reasons why KnowBe4 went into the email space as well, right?

Speaker 2:

Then the other thing is, if you're pushing near-time or just-in-time awareness to somebody, you can give them that reminder of: oh well, the policy is this. You can kind of activate the cognitive dissonance a little bit more, to where they feel more friction when they do something. And then, lastly, because not everything is a single layer, you can activate either another piece of technology that might do some blocking and tackling, or you can advise the organization to put in a disruptive process, like a dual-key mechanism or a secondary factor that's needed: you know, before we do this transfer, we've got to clear this through this department or this person.
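To make that layered response a bit more concrete, here is a minimal sketch of how behavioral signals could feed a just-in-time nudge or a dual-key hold. The signal names, weights, and thresholds are hypothetical, and this is not KnowBe4's actual platform or API; it only illustrates the idea of escalating friction as risk goes up.

```python
# Illustrative sketch only: behavioral signals driving a just-in-time nudge
# or a step-up control. Signal names, weights, and thresholds are invented.
from dataclasses import dataclass

@dataclass
class Signal:
    name: str          # e.g. "unusual_recipient", "urgent_language"
    weight: float      # contribution to the overall risk score

def risk_score(signals: list[Signal]) -> float:
    """Naive additive score, capped at 1.0."""
    return min(sum(s.weight for s in signals), 1.0)

def respond(user: str, signals: list[Signal]) -> str:
    score = risk_score(signals)
    if score >= 0.8:
        # Highest risk: add real friction, e.g. a dual-key approval step.
        return f"Hold transaction for {user}: second approver required."
    if score >= 0.5:
        # Medium risk: push a just-in-time reminder of the relevant policy.
        return f"Nudge {user}: 'Reminder: wire transfers need verbal confirmation.'"
    # Low risk: stay out of the way.
    return "No action."

if __name__ == "__main__":
    observed = [Signal("unusual_recipient", 0.3), Signal("urgent_language", 0.25)]
    print(respond("alice", observed))   # medium risk -> just-in-time nudge
```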

Speaker 1:

So it comes back again to some pretty old school security concepts. But then you also get the old school friction for people working for companies; a lot of them are not particularly fond of stuff like this.

Speaker 2:

No, sometimes you've got to do it. And it's risk-based, right?

Speaker 1:

Yeah, but that's also part of the behavioral stuff right. So they need to understand that this is necessary.

Speaker 1:

And I think for some things it's very clear, right? If you want to do something with personal information in some industries, you have to go to the office. You cannot do it, and nothing can go via electronic means, whatever it is. Some things are very clear. But I think, looking at it from my perspective, you run the risk of overdoing this, and then a lot of people will not be very happy.

Speaker 2:

Yeah, and they could get frustrated and they could also build side channels to do things.

Speaker 1:

Yeah, you've got shadow innovation: shadow IT, shadow AI.

Speaker 2:

All of that starts to become an issue. But for the critical things, and I think every organization should think about and model the critical behaviors and the critical systems that are important, because if, through a side channel or a piece of shadow tech, you can do something that materially impacts the business, then you have a gap that you need to close. So you can threat model a lot of that stuff out as much as possible, and you have to decide where friction is appropriate and where it's not appropriate.

Speaker 1:

You could also subtly introduce friction based on behavioral indicators that you're seeing. You can also use the gradual approach and do one small step at a time. Not in a malicious or pernicious way, but to get people there.

Speaker 2:

Exactly. I mean, if you type your password wrong a few times, many systems will say: wait, we're going to give you a cooldown period so that you can start to think rationally again.

Speaker 1:

Well, that annoys me no end.

Speaker 2:

Oh, it annoys me to no end too, but I understand the behavioral insight.

Speaker 1:

Yeah, I understand why as well, but sometimes you're 100% sure you typed the correct password and it still doesn't work. And then, oh, your account is blocked for 24 hours. But I need it now.

Speaker 2:

Yeah, a 24-hour block is pretty extreme, but a two-minute block or something like that, something enough to say: huh, I wonder why that's happening. That's a good cognitive defense, and sometimes you need a piece of technology to get somebody to that point where they're not just reflexively hitting their head against the wall. And when you do something like that, you need to also support them in finding the solution.

Speaker 2:

Because if you're blocking somebody for 24 hours, you need to also give them an escape hatch. You need to say if you need help with this, here's a button, here's a phone number, here's something, and you can get that fixed right away. But sometimes you're just also trying to save somebody from themselves, and that's why those extreme measures exist. I don't think that those are necessary most of the time, but I think they're a good example of things that we've seen, that we can sort of model other things after.
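As an illustration of the cooldown-plus-escape-hatch idea Carpenter describes, here is a small sketch of a login check that pauses briefly after repeated failures but always points the user to help. The thresholds, timings, and contact details are made up for the example.

```python
# Illustrative sketch: a brief, escalating pause after repeated failures,
# always paired with a way to get help. All values here are hypothetical.
import time

FAILURE_THRESHOLD = 3          # failures before the first cooldown kicks in
BASE_COOLDOWN_SECONDS = 120    # a two-minute pause, not a 24-hour lockout
HELP_CONTACT = "helpdesk@example.com"   # hypothetical escape hatch

failed_attempts: dict[str, int] = {}
locked_until: dict[str, float] = {}

def check_login(user: str, password_ok: bool) -> str:
    now = time.time()
    if now < locked_until.get(user, 0.0):
        wait = int(locked_until[user] - now)
        # The escape hatch: never leave the user stuck without a way forward.
        return f"Paused for {wait}s. Need it now? Contact {HELP_CONTACT}."
    if password_ok:
        failed_attempts[user] = 0
        return "Login OK."
    failed_attempts[user] = failed_attempts.get(user, 0) + 1
    if failed_attempts[user] >= FAILURE_THRESHOLD:
        # Escalate gently: each extra failure doubles the pause.
        factor = 2 ** (failed_attempts[user] - FAILURE_THRESHOLD)
        locked_until[user] = now + BASE_COOLDOWN_SECONDS * factor
        return f"Too many attempts. Take a breath; try again in {BASE_COOLDOWN_SECONDS * factor}s."
    return "Wrong password, try again."

if __name__ == "__main__":
    for attempt_ok in [False, False, False, False]:
        print(check_login("bob", attempt_ok))
```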

Speaker 1:

Yeah. Well, I mean, it's certainly interesting, but how do you go about doing this as an organization? You need to integrate it into your bigger stack, right? Otherwise it doesn't really make a lot of sense. But then what are some of the first steps that you can take to actually get this concept and this behavioral change into the people at your company?

Speaker 2:

So, multiple ways to do this. And, you know, realizing that I work for a vendor, I'm going to try to abstract from the vendor that I'm with and talk very openly.

Speaker 2:

The vendor community, us and others, will build and do have platforms that try to enable this, and that's through API integrations with third-party bits of a security or IT stack, so that you have, you know, leading and trailing behavioral indicators about what's going on. You can start to make micro adjustments, or even larger adjustments, based on that. That, I think, will be the table stakes within this market. The other thing is, there's always a last mile problem between what a platform can tell a user or an admin and some of those things actually happening. I mentioned before: how do you enforce behavior for using the shredder when you should? Some of that comes out of basic nudge theory and behavioral science: trying to put those in convenient places, or putting a sign up above the trash can that reminds people what should be shredded, and then also putting the shredder right next to the trash can so people won't just be lazy.

Speaker 1:

Where's the shredder again?

Speaker 2:

Yeah, so there are always ways to think about some of the disconnects between the cyber world and the physical world, and we have to enable people to do that. The other thing is, agentic AI is going to continue to be a real thing, and we have to approach that in a smart way, in a way that doesn't give in to the hype but creates a real future that's going to be valuable for organizations. And so KnowBe4, and I'm sure many other folks in this market, are on that path right now. We have a number of agents that we've already created that will do just-in-time training, or do things like policy quizzes that are very well refined and sophisticated, all the way through to kind of real-time content creation. So basically, anything that you can think somebody might want to do from an awareness or human risk management perspective, us and others are thinking about and will be bringing to market either right now or very soon.

Speaker 1:

And then who should have the responsibility? We specifically talk about agentic AI, and agents doing this stuff are basically interacting with your people. Who should be responsible for providing the guidance or having the oversight? Maybe an AI oversight kind of function, or whatever?

Speaker 2:

Yeah, I mean, when you look at the agentic AI world, there's this concept of an orchestrator. An orchestrator is kind of like the manager that sits over it and says: alright, based on what I'm seeing here, I believe we need to get this expert agent involved, and then there's back-and-forth communication between them. It works very much like a team. And then a human in the loop is going to be really important, so somebody to make sure that the orchestrator is actually doing the right things, and that they're able to guide that. There's also another interesting thing that's going to happen. Anthropic, which is one of the large AI players, talked about the fact that we're going to see full-time AI employees within the next year. Those AI employees can do the right thing and they can do the wrong thing. Like anybody else, they can be steered wrong. We know, with large language models, you can social engineer the heck out of those as well.

Speaker 2:

So how do you do security awareness training for an AI agent? Those are questions that we're asking as well. Do you have an agent, or a set of agents, that will be kind of scanning and understanding the threat landscape, and then, either through retrieval-augmented generation or the creation of documents that another agent can ingest, make more informed decisions? There's lots of pivoting around this new agentic ecosystem.
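For readers who want a mental model of the orchestrator pattern Carpenter describes, here is a conceptual sketch: a manager routes a task to a specialist agent and keeps a human in the loop before anything reaches an employee. The agent names and routing logic are invented for illustration and are not any vendor's implementation.

```python
# Conceptual sketch of an orchestrator with a human-in-the-loop safety valve.
# Specialist agents and routing rules are hypothetical.
from typing import Callable

def phishing_coach(task: str) -> str:
    return f"Just-in-time training drafted for: {task}"

def policy_quiz_agent(task: str) -> str:
    return f"Policy quiz generated for: {task}"

SPECIALISTS: dict[str, Callable[[str], str]] = {
    "training": phishing_coach,
    "policy": policy_quiz_agent,
}

def orchestrate(task: str, human_approve: Callable[[str], bool]) -> str:
    # The orchestrator decides which expert agent to involve.
    kind = "policy" if "policy" in task.lower() else "training"
    draft = SPECIALISTS[kind](task)
    # Human in the loop: nothing reaches an employee without sign-off,
    # which doubles as the safety valve if an agent starts behaving oddly.
    if not human_approve(draft):
        return "Draft rejected; agent output sandboxed for review."
    return draft

if __name__ == "__main__":
    print(orchestrate("Policy refresher after a risky click",
                      human_approve=lambda draft: True))
```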

Speaker 1:

And how much room are you going to give them, how much leeway are you going to give them, right? They shouldn't become micromanagers, right?

Speaker 2:

Because then people are going to hate them and they're just going to have the adverse effect.

Speaker 1:

You don't want that.

Speaker 2:

Exactly. Yeah, that's fine-tuning. I think it's a tightrope that we, as a vendor, and a lot of other vendors, are going to have to walk and figure out.

Speaker 1:

I'm sure that some people are going to make mistakes, but would you say it is necessary, I mean crucial, to do it? Because otherwise you're not going to be able to keep up with how fast everything is going?

Speaker 2:

I mean, you can build a solid agent that is doing the right thing for a certain time. At the same time, it's really hard for a threat model to encompass every type of thing that could go wrong. So we're going to have to be thinking about: how do you sandbox an agent that's showing inappropriate behavior, how do you rehab those things, or how do you spin up a new one, you know, the 2.0 and 3.0 of those. Those are all things that the market is figuring out right now. I know if we're thinking about those, then others are as well.

Speaker 1:

100%. And, yeah, it's a brave new world. Well, I mean, then you get into the automation part of things. That's adding another level of trust issues.

Speaker 2:

Yeah, and different organizations have different levels of trust for any type of automation. There are some organizations that literally just want a button where you press that button and then a system will go off and do everything for them and will create nice fancy reports. There are other organizations usually large multinational organizations with lots of regulations, that want a human in the loop and human oversight over everything, to kind of be able to look over it and say, yes, I approve these types of things to happen. Looks like things are going right, and then they're just going to put a finger on the pulse from that point on.

Speaker 1:

Well, I mean, obviously the big question in that respect is whether we will always have the luxury of keeping a human in the loop if things are going faster and faster and faster. One of the logical conclusions of such a thought process would be that there comes a time when it will go too fast, and then you need full autonomy. But on the other hand, you could also approach it from a different angle.

Speaker 2:

I know. But I mean, at least in the near term, and by near term I'm talking within the next couple of years, the world is figuring out the true path for agentic AI.

Speaker 2:

A control-level safety valve is going to be critical, because you can get into race conditions as well. You know, you look back at the early stages of the self-driving car industry, and there was a situation where Waymo, and there's tons of Waymo cabs out here in San Francisco right now, they're doing really well, but early in that, one of the tragedies that happened is a car hit a race condition where somebody was walking a bicycle across the road. The computer vision in the car knew what a person was, knew what a bicycle was, knew what a person riding a bicycle looked like, but had never been trained on somebody walking a bicycle. And so it was in this condition where it's going: that's a person, that's a bicycle, that's a person, that's a bicycle. It hit analysis paralysis, and it also hit the person and killed them.

Speaker 2:

And so there have to be ways and safety valves that will allow an organization using any type of agentic AI to say there's this condition that, for whatever reason, nobody planned for and now we got to sandbox that thing. We got to turn it off, we got to do a new version.

Speaker 1:

But especially if you use agentic AI for security purposes, I mean, you should be able to guarantee that it's secure, right? Otherwise...

Speaker 2:

Otherwise I mean there will be agentic AIs that get flipped and turned into insider threats.

Speaker 1:

Oh yeah.

Speaker 2:

Everything that we can think that would happen with a person right now will happen with an automated AI agent.

Speaker 1:

That may not be the most uplifting end to this discussion, but it is right.

Speaker 2:

I mean, it's the truth that we're going to have to deal with as a society, and this is bigger than selling security awareness or anything else, because agentic AI is going to touch every part of everything that we do.

Speaker 1:

Yeah, I agree, it's not the easiest problem to solve. No, I mean, deepfakes are small in comparison, for sure. Oh yeah, I think so too. All right, well, thank you very much. Thank you. I really enjoyed this and, well, I hope to speak to you another time. Absolutely.