Ep. 3: This is BS!
Notes
Ever wrestled with the fact that people often make horrible security decisions even though their employers have security awareness programs in place? It's often because we assume that being aware of something should naturally result in better behavior. Well... that's not the case. This episode takes a deep dive into the knowledge-intention-behavior gap where we are confronted with three realities of security awareness. And those realities lead us to the realization that we need to focus on behavior.
Guests for this episode are all leaders in the field of behavioral science. They are BJ Fogg, Ph.D., author of Tiny Habits: The Small Changes That Change Everything; Matt Wallaert, author of Start at the End: How to Build Products That Create Change; and Alexandra Alhadeff, co-author of Deep Thought: A Cybersecurity Story.
Guests:
BJ Fogg, Ph.D. -- Behavior Scientist & Innovator at Stanford University. (Personal website) Author of Tiny Habits: The Small Changes That Change Everything. (Amazon link)
Matt Wallaert -- Head of Behavioral Science at frog (a Capgemini company). Author of Start at the End: How to Build Products That Create Change (Amazon link)
Alexandra Alhadeff -- Behavioral Scientist & Product Manager at The Fabulous. (Personal website)
Notes & Resources:
BJ Fogg's 2006 testimony to the US Federal Trade Commission about the dangers of persuasive technology.
Fogg Behavior Model
About Nudge Theory
Great catalog of Dark Patterns
ideas42 cybersecurity-related behavioral science research.
Deep Thought: A Cybersecurity Story, by ideas42.
Recommended Books (Amazon affiliate links):
Tiny Habits: The Small Changes That Change Everything, by BJ Fogg, Ph.D.
Start at the End: How to Build Products That Create Change, by Matt Wallaert
Nudge: Improving Decisions About Health, Wealth, and Happiness, by Richard Thaler and Cass Sunstein
Inside the Nudge Unit: How Small Changes Can Make a Big Difference, by David Halpern
Evil by Design: Interaction Design to Lead Us into Temptation by Chris Nodder
Thinking, Fast and Slow by Daniel Kahneman.
Transformational Security Awareness: What Neuroscientists, Storytellers, and Marketers Can Teach Us About Driving Secure Behaviors by Perry Carpenter.
Music and Sound Effects by Blue Dot Sessions & Storyblocks.
Artwork by Chris Machowski.
-
Perry Carpenter: Hi. I'm Perry Carpenter and you're listening to 8th Layer Insights.
Perry Carpenter: This is the show where we talk about cybersecurity, humanity and all the fun bits in between. And today, we're talking about BS. That's right. Those of you that know me know that I'm a huge fan of the fields of behavior science and behavior design and so, on today's show, I've invited three BS experts to join us and serve as guides as we wade into this topic, so that we can better understand how these fields apply to cybersecurity and how an appreciation of humans, specifically working with rather than against human nature, can ultimately help us reduce human risk in our organizations and help people make better decisions. Let's dive in.
Alexandra Alhadeff: If you want a security program to succeed you need to work with, rather than against, human nature.
Dr. BJ Fogg: Helping people do what they already want to do, I call that maxim number one. And so, in a corporate setting, if you're helping an employee do what she or he already wants to do, that's the best path. Now, sometimes we need to get people to do things they don't really want to do, especially in the security space.
Matt Wallaert: That's a conversation I think people almost never have in phishing. Why are these emails evocative in the first place? What are the promoting pressures? Why am I responding? There's a realm of phishing that is about making it seem that someone that you care about is in distress. Could we just make ways that make it easier for us to reassure ourselves that our people are not in distress?
Alexandra Alhadeff: Stories about cyber events can have a stronger impact on how people think about security, than standard awareness training, and this is something you talk about as well Perry with what you call the 'Trojan Horses of the Mind.'
Matt Wallaert: Rather than, like, dealing with a phishing piece, let's deal with the originating promoting pressure. When I was a kid my parents, like, were, like, "Okay, here's the family codeword and if there's an emergency and we need somebody to pick you up at school, we'll make sure that they know this word so they can tell you." That is a great affordance for the promoting pressure.
Dr. BJ Fogg: I think the graphical version of my behavior model is the thing that should be put on my headstone. That is, not just motivation, ability, prompt, but there's an action line, and that action line shows there's a relationship between motivation and ability, and nobody has ever unearthed that or described that before. I think anything I might be remembered for would grow out of that.
Perry Carpenter: Let's be honest, people are weird. They're unpredictable and they don't always do the right things. They make decisions that lead to security incidents and data breaches and they make these in-the-moment trade-offs that hurt their future selves and they're stubborn. But, continuing this theme of honesty, we're all people. Welcome to the human condition. The human condition is us. And that's important to realize because it's really easy for us in the cybersecurity profession to play the blame game. We tend to look down our collective noses at the unwashed masses of end users, saying to ourselves, if they would only behave themselves, the world would be better. If they would only do better, security would be so much easier.
Perry Carpenter: But I'm not going to let us get away with that. We do the same things. We make these in-the-moment risky trade-offs and we all have some bad habits. And there might even be some areas where we just don't care enough to do the right thing. Welcome to the human race. We're all ******. Or are we? Actually, there is some hope and that's what this episode is all about. In today's episode we'll learn the fundamentals of shaping human behavior and habits, we'll explore multiple behavior models and we'll get into the nitty-gritty of how to successfully inject BS into your cybersecurity program and life. All of that coming up. Stay with us.
Perry Carpenter: Hi there. My name is Perry Carpenter. Join me for a deep dive into what cybersecurity professionals refer to as the 8th Layer Of Security. Humans. This podcast is a multidisciplinary exploration into the complexities of human nature and how those complexities impact everything from why we think the things that we think to why we do the things that we do, and how we can all make better decisions everyday. Welcome to 8th Layer Insights. I'm your host, Perry Carpenter.
Perry Carpenter: Welcome back. This topic of behavior science is really important for cybersecurity professionals and technologists to consider because, well, the reason is simple. Behavior equates to action. I know, profound right? Not really, but let me explain. When it comes to security we talk a lot about security awareness and that's great, but simple awareness of something isn't really the end game. Awareness of something doesn't naturally lead to behavior based on that awareness. What we need to do is affect the decisions and actions that people take and awareness is really just having head knowledge of something and head knowledge alone isn't enough. So, security awareness does not equate to secure behavior. And the reason for that comes down to what I refer to as the knowledge intention behavior gap. Here's the short version.
Perry Carpenter: There's a gap between knowing something, having information and intending to act on that information. In other words, there are a lot of things that we know but we don't really care or have the intention to act on that knowledge in any meaningful way. And so there's a gap between knowledge and intention. There's also a gap between intention and behavior. So, even when we know something and have the best intentions to act on that knowledge, we don't always do so. Think of this as the New Year's Resolution phenomenon. Many people around the world make these lists of New Year's Resolutions based on the knowledge of things that will make our lives better. We might want to eat better or lose weight or re-prioritize the way that we spend our time and money and these lists are an expression of our intent to act on that knowledge. But the sad fact is that the vast majority of us don't follow through.
Perry Carpenter: We might try for a week or two weeks or maybe even a month, but ultimately, old patterns, habits and in the moment trade offs override both knowledge and intention and, before we know it, another year has passed. Um, I can see Carl over in the sound booth weeping now. Carl, can you shut the door? Alright, thanks. Okay, where were we? Oh, that's right, before we all spiral into the same depression that Carl is in, let's talk about how this relates to security awareness. Out of this knowledge intention behavior gap, flow three realities of security awareness. Here they are. Number one, just because I'm aware doesn't mean that I care. And if you don't believe me, think about the last speed limit sign that you whizzed past and took as a suggestion or the stop sign that you slowed down for and rolled through the intersection as you looked all around to make sure that nobody else was coming and that there weren't any police vehicles around.
Perry Carpenter: Number two, if we try to work against human nature we will fail. And we'll talk about this a lot more throughout this episode. So, let's go to number three. Number three is that what our employees do is way more important than what they know. And I'll say this as bluntly as I can, knowledge alone has never stopped a breach. It's always an in-the-moment action that is the cause. Knowledge can be part of that, but someone can do the wrong thing even with the right knowledge and someone might do the right thing even if they don't know why. So, in the end, it all comes down to behavior. How our people behave is the key.
Perry Carpenter: Carl! Okay. I think Carl is back to his normal self now. Whatever we call that. And I also think it's time for us to bring in our guests to lay a bit more groundwork and then we'll get into how we can use that in our security programs and our lives. We'll first hear from Matt Wallaert. Matt has been at the forefront of behavior design efforts at companies like Thrive (which was bought by LendingTree), Microsoft, Clover Health and, currently, frog. He's also the author of a great book called 'Start At The End: How To Build Products That Create Change.'
Matt Wallaert: Behavioral science is really two parts. Behavior is an outcome, science is a process, and so it's the active definition of everything we do as behavior changing and then the use of the scientific method and experimentation to understand what interventions we build to change other behaviors.
Perry Carpenter: Okay, so behavior is an outcome and science is a process that gets us there. Let's flesh that out a little bit and bring in another expert.
Alexandra Alhadeff: Behavioral design is about understanding how people make decisions and engage in behaviors that benefit them and oftentimes society as a whole. We don't eat healthily, we smoke, we don't save enough for retirement and these are persistent behavioral problems. So, behavioral design seeks to understand behavioral problems. And then we design solutions to overcome them.
Perry Carpenter: That's Alex Alhadeff.
Alexandra Alhadeff: I am a Product Manager and Lead Behavioral Scientist at Fabulous, a self-care app that applies behavioral insights to improve health and wellness outcomes.
Perry Carpenter: So, Alex, I'm wondering if you can talk a little bit about the application of behavioral science and behavioral design as they apply in real life? Like, what are some examples of the ways that we can see this in the applications that we use or the things that we interact with on a daily basis?
Alexandra Alhadeff: The beauty of behavioral design is that the solution to a behavioral bottleneck that at first seems insurmountable can be relatively simple and cheap. Text message reminders have increased patient medication adherence. Countries that have people opt out of, instead of opt in to, organ donation have 30% higher organ donation rates on average. And postcards prompting people to plan and specify when they will get their flu shots, date and time, increase the likelihood that people will get their vaccines by 11%. So, behavioral design has been applied in a wide range of contexts, from health, wellness and sustainability to financial capability and cybersecurity. And currently, in my job, I apply behavioral science to help people live their best lives.
Perry Carpenter: So, in essence, designing for behavior is about helping people make better decisions and making things that seem difficult easier to do. It's also about helping people bridge that gap between intending to do something and actually doing it. And that comes back to the knowledge intention behavior gap that I described earlier.
Perry Carpenter: A funny side note here. When I first started using that phrase "knowledge-intention-behavior gap" I didn't realize that there was actually a similar phrase used by behavioral scientists. They commonly refer to an "intention-action gap," but they don't mention the knowledge part.
Alexandra Alhadeff: I thought that you actually uncovered the first step. That is implied in the intention piece, but is actually separate. There is knowledge, there is building awareness, but that doesn't mean that people care, that doesn't mean that they're motivated. That doesn't mean that there is intention. And then there is the next gap, which is bridging intention and motivation to action. So, I think you actually filled a missing link with your framework.
Perry Carpenter: In the cybersecurity world, we almost always start with education or knowledge acquisition, and we do that because we're usually trying to convince someone to do an unnatural behavior. And so we can't assume that they have intent yet. But, again, even after knowledge acquisition, we can't make the assumption that someone will care. Remember, security awareness reality number one. Just because I'm aware doesn't mean that I care. And that's where designing for the behavior that you want someone to accomplish, the behavior that you want someone to do, comes into play. Through designing for the behavior that you want, you can naturally make someone's inclination bend towards that behavior and that gets really really interesting, and that's where different models for behavior design come into play.
Perry Carpenter: So, let's take a few minutes now to look at some of those models. We'll start with what is arguably the most famous behavior model to date. This is the Fogg Behavior Model created by Dr. BJ Fogg out of Stanford University.
Dr. BJ Fogg: I'm BJ Fogg. I'm a Behavior Scientist at Stanford University. I run a research lab there called The Behavior Design Lab and I teach there once a year. Always about behavior change.
Perry Carpenter: And BJ is well known around the world for having this classroom that turns out successful entrepreneurs. People who have gone on to create some of the most successful and engaging applications used around the world.
Dr. BJ Fogg: One of my students is the Co-founder of Instagram. One of my students is the Co-founder of Clubhouse.
Perry Carpenter: And if you're familiar with the work that I've done on applying behavior design principles to security awareness, you've probably already seen the name BJ Fogg, because his model is the primary model that I use in my design principles. In fact, in Chapter Four of my book, the chapter that contains all of the core behavior design principles, I use BJ's model quite extensively and got his permission not only to use it but to expand on it, and then BJ was a key reviewer for that chapter and gave his blessing for the way that I used his model. If you can, I would encourage you to go to behaviormodel.org, so that you can see this model as we talk through it.
Dr. BJ Fogg: What I call The Fogg Behavior Model is the cornerstone in a larger foundation that I call behavior design. So, in my Stanford lab, we realized we weren't doing persuasive technology anymore. We hadn't for a few years. We weren't studying that and our work had shifted. So, the behavior model is the cornerstone for what we call behavior design. Behavior design is a set of models and a set of methods. One of the models is The Fogg Behavior Model, which includes motivation, ability and prompt. Behavior comes down to three things. When a behavior happens, it means there is motivation to do the behavior, there is the ability to do it and there is a prompt for that behavior. Just three things. Motivation, ability, prompt.
Perry Carpenter: That's the key thing about the Fogg Behavior Model. You'll see it represented as an equation. It is B=MAP. Behavior equals motivation plus ability plus a prompt at the time of behavior. The way that BJ describes it, he says "Behavior comes down to three things. When a behavior happens it means that there was sufficient motivation at a time when there was also the ability to do it, so the behavior felt easy enough to do, and there was a prompt at the time of the behavior. So, just three things. Motivation, ability and a prompt." And by necessity, if any one of those is missing or insufficient, the behavior, by definition, does not happen.
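If it helps to see that relationship written out, here is a minimal sketch of the equation in code. The 0-to-1 scales, the single action-line threshold and the example values are illustrative assumptions, not part of BJ's model (his action line is a curve rather than a simple sum), but the core idea survives: the behavior fires only when a prompt arrives while motivation and ability together sit above the line.

```python
# Illustrative sketch of B = MAP. The scales (0-1) and the action-line
# threshold are assumptions for demonstration, not part of Fogg's model.

def behavior_occurs(motivation: float, ability: float, prompt_present: bool,
                    action_line: float = 1.0) -> bool:
    """Return True if the behavior happens at this moment.

    motivation, ability: 0.0 (none / very hard) to 1.0 (high / very easy).
    The behavior fires only when a prompt arrives while the combination
    of motivation and ability is above the action line.
    """
    if not prompt_present:
        return False                      # no prompt, no behavior
    return (motivation + ability) >= action_line


# Low motivation can be compensated for by making the behavior very easy...
print(behavior_occurs(motivation=0.2, ability=0.9, prompt_present=True))   # True
# ...but even high motivation and ability do nothing if the prompt never arrives.
print(behavior_occurs(motivation=0.9, ability=0.9, prompt_present=False))  # False
```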
Dr. BJ Fogg: When I first put those pieces together in about 2007, it might have been 2006, but I mark it as 2007. I thought can it really be this simple, this elegant? And there was part of me that was, like, no, there's got to be more here I'm not seeing. But it turns out the answer is yes. All behavior, any kind of behavior, any age, any culture, comes down to motivation, ability and prompt. And that is not a deliberate manipulation or effort on my part. That's what it comes down to. And that's really wonderful and it's kind of a beautiful thing. You can explain the behavior model to somebody in two minutes, as you know. But then there are just so many ways to use it that I myself am still discovering other ways to use this model.
Perry Carpenter: So, BJ, in this model we have motivation, which ranges from not having much motivation to being highly motivated. And ability, which ranges from something that's very difficult to do to something that's very easy to do. And then, of course, the great thing that happens is when somebody is motivated enough and a behavior is easy enough to do, when the prompt happens, that's where the magic happens. That's where behavior comes in. But I'm wondering if you can describe a little bit about how you came up with how these three pieces fit together and what the prompt actually is?
Dr. BJ Fogg: The prompt component is so important and there are very few models, very few approaches, that have acknowledged the importance of the prompt. Now, the prompt is not the motivator. The prompt is the thing that says do this behavior now and no behavior happens without a prompt and that was the last piece of the puzzle that came to me as well. I mean, I was like okay, it's motivation, it's ability, this is what makes up motivation, this is what makes up ability. I've defined those specifically in new and unique ways. And then what happened Perry was this, I had to go to Europe and give a keynote and I told them I would talk about hot triggers. So, I used to call prompts triggers and I was like, okay, I'll talk about hot triggers. So, I fly over to Europe and it's like, I still haven't figured this out. Once I got settled and landed it's like, okay BJ, you've got to get this together. You know there's this thing that prompts the behavior and you're going to talk about hot triggers like tomorrow or in two days from now, so you've got to figure it out.
Dr. BJ Fogg: That's when I put all the pieces together and it's like, okay, there it is. It's these three things. So, if you look at older versions of my model and some people still perpetuate this, it's motivation, ability, trigger. Now, I got rid of the word trigger a few years ago because most people interpret that as the motivator and it's not. A trigger is just simply whatever cues or prompts the behavior. So I made a pretty big decision and shifted, and it's motivation, ability and prompt.
Perry Carpenter: So, a prompt is this thing that comes in and is the, well I'll use BJ's other word for prompt, his previous word, Trigger. It's the thing that triggers the behavior. It's the thing that is asking you to do it or telling you to do it. Now, the prompt can be either internal or external and the gold standard is an internal prompt. It's just you feel like doing it. Something is welling up within you that is telling you to do it. But prompts can also come from the outside, like an app notification or a billboard or somebody just asking you or telling you to do the behavior. It can be an email, it can be a pop up, it can be any of these things. A prompt could also exist within a person's context. It could be a poster on the wall, it could be a billboard that they see, it could be just something sitting out in front of them and it's really interesting to think about how we can manipulate an environment that would then inject the prompt. And that's where things like nudging and other behavior tools that we're going to talk about come in.
Perry Carpenter: Now here is a danger warning. We live in a society now where everybody is inundated with prompts all the time. There is so much noise out there that it becomes really, really difficult to cut through that noise. And we live in the state that you might call prompt fatigue. I mean, think about how many app notifications you receive on your phone or calendar invites or other little pop ups that are happening on all the different devices that you have. And then think of all the billboards that you see and all the other things that are competing for your attention all the time saying "look at me now." That is prompt fatigue. And what happens is our minds create this defensive barrier against prompts and prompts that look the same over and over and over again become invisible. Our mind filters them out as a defense mechanism. And so, the question that we always have to be asking is "Are our prompts going to be visible to the people that we are trying to prompt?" And that's why BJ's gold standard becomes this prompt that lives within the person.
Perry Carpenter: Now, we can't always achieve that as security professionals, but when it comes to things that can be formed into habits, I think that that's the goal.
Alexandra Alhadeff: At Fabulous we wanted to increase the likelihood that people would complete their habits. Habits are a core part of the Fabulous experience, as are routines and routines are really a sequence of habits. We wanted people to complete their habits, to complete their morning routine. We hypothesized that present bias was keeping people from completing their habits and routines and present bias is our tendency to focus on short term benefits over long term benefits and we're all susceptible to present bias, as our users are as well.
Dr. BJ Fogg: Behavior happens when motivation, ability and prompt come together at the same moment. That's for all behaviors. Habits are a type of behavior, so it's a subset. Habits also happen when there's motivation, ability, prompt. But what's different is habits you do quite automatically. So, any behavior you do quite automatically we could consider a habit. The way they become automatic is through emotion. So, it's not just the behavior model that you need to understand to create a habit. You also need to understand the role of emotion. Emotion wires in habit. Once the habit is wired in, then you don't have to have that emotional payoff anymore.
Perry Carpenter: We've spent a lot of time talking about the Fogg Behavior Model and there's a really specific reason for that. I want you to have these ideas of behavior being a combination of motivation, ability and prompts within your mind. And so, as we look at other behavior models, it really still comes down to understanding where there is motivation, what the person's ability is or what barriers exist and how we're asking them to do it or how they're being told to do it. What is making somebody want to do this or not want to do it? How easy or hard is it for them to do or what barriers might be in their path? And how are we asking them to do it? How are they being told to do it or what might exist within that person that would be telling them to do it? What I love about the behavior model that Matt Wallaert teaches in his book 'Start At The End' is that it is so easy to understand. Even an executive can understand it!
Perry Carpenter: All you have to do is take out a sheet of paper or go up to a white board and on the left hand side draw a big up arrow and on the right hand side draw a big down arrow. And then on the left hand side think about things that would promote or encourage or make this behavior easier and then on the right hand side think about the opposite. What makes this behavior harder? What might demotivate them to do it? What might make them just not even think about doing it? And so, what Matt is proposing is a competing pressures model where, on the left hand side you have a promoting pressure and on the right hand side you have an inhibiting pressure. So, let's get to it.
Matt Wallaert: Behavior is a competition between things that make behavior more likely and things that make behavior less likely. One of the reasons I have people map that out really specifically in ways that maybe others don't is, it turns out that there is a systematic bias in how we tend to understand behavior changes. So, if I tell people I want you to get people to eat more M&Ms, they will systematically over-index on promoting pressures. Branding and new flavors and whatever, like, anything that makes it more attractive, and ignore inhibiting pressures. Availability, cost, health concerns, other kinds of things that might make a behavior less likely. And vice versa, by the way, if you're trying to eliminate a behavior, which happens in security. A lot of times in security we're trying to get people to stop doing something. We have a tendency to over-index on punishment and making it harder to do and less on why the hell is anybody doing that in the first place?
Matt Wallaert: That's a conversation I think people almost never have in phishing, which is like, why are these emails evocative in the first place? What are the promoting pressures? Why am I responding? There's a realm of phishing that is about making it seem that someone that you care about is in distress. Could we just make ways that make it easier for us to reassure ourselves that our people are not in distress? That isn't like dealing with a phishing piece. Let's deal with the originating promoting pressures. When I was a kid my parents were, like, "Okay, here's the family codeword and if there's an emergency and we need somebody to pick you up at school, we'll make sure that they know this word so they can tell you this word." It's a common child safety practice. That is a great affordance for the promoting pressure. For the fear that makes you do something wrong in the first place. And I think that that sort of stuff is often under-talked about and so that's why I try to get people to really explicitly draw up here and down here and be really explicit about it, because we have this tendency to ignore either side.
Matt Wallaert: Another way that we sometimes get people to do it is, and I love this one in security. This is like 'Sneakers' and every film ever, which is think like a thief. Imagine you were trying to get everyone in your company to respond to a phishing email. Like, what phishing email would you write? And in doing that, it will reveal to you things that you could do to lower your potential risk. So, let's take a really practical example: Clover Health. Because, as a Medicare Advantage plan, you are ultimately employed by the government; the government is the payer. If Medicare sent an email about an emergency, it's something we're really going to pay attention to. And so, saying hey, you know what? All Medicare emails will come from the compliance officer. They will always be filtered through this person, as just an extra step. Again, addressing the oh my God I need to panic about this email because I don't know if anyone else has got it and I don't know what else is going on and I don't know if these things are urgent. Okay, urgency will be driven by this person. And unless it comes from this person, it's not urgent. Let them be the gatekeeper of urgency.
Matt Wallaert: It's that sort of okay, what promoting pressures? Why am I afraid in the first place? Helping people understand how Medicare even works. How would they contact us? What timeline does it take? Like, how long do we have to respond? Hey, you know what? Medicare is never going to tell you you have an hour to respond to something. It's never gonna happen. Medicare has very well-documented timelines in which they're gonna ask you to do things and so, if you get something that says "hey, you have to do something in the next 20 minutes," that's definitely not Medicare. Those are the sorts of pressure-lessening interventions we should be talking about in cybersecurity.
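One way to make Matt's mapping exercise concrete is simply to write the two columns down and keep them next to the behavior statement while you design. A minimal sketch; the behavior and the pressures listed are examples invented for illustration, not Matt's own:

```python
# Competing-pressures map for one behavior statement. The pressures listed
# here are illustrative assumptions, not an authoritative catalogue.

behavior = "Employee reports a suspicious email within five minutes of receiving it"

promoting_pressures = [          # things that make the behavior MORE likely
    "One-click 'report phish' button in the mail client",
    "Quick acknowledgement so the reporter knows it mattered",
    "Team norm that reporting is routine, not embarrassing",
]

inhibiting_pressures = [         # things that make the behavior LESS likely
    "Fear of looking paranoid over a legitimate email",
    "No feedback after previous reports",
    "Reporting flow buried several menus deep",
]

# The design question then becomes explicit: which promoting pressure can we
# strengthen, and which inhibiting pressure can we remove or weaken?
print(behavior)
for p in promoting_pressures:
    print("  [+]", p)
for p in inhibiting_pressures:
    print("  [-]", p)
```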
Perry Carpenter: And now let's hear from Alex Alhadeff about the behavior model that she uses.
Alexandra Alhadeff: I gravitate to behavioral diagnosis and design models that help us close the intention action gap.
Perry Carpenter: Yes. So, what does that look like at Fabulous, Alex? Can you break that down for us?
Alexandra Alhadeff: We follow four steps. Behavior, barrier, solution and test. First we define the behavior that we want to change, whether it's getting people to drink water as soon as they wake up, exercise, sleep better or engage in deep work. Next we identify the barriers that prevent people from accomplishing that behavior. Then we design solutions. Finally, we evaluate the effectiveness of those solutions through RCTs or Randomized Control Trials, which are the gold standard in experimental research. So, by defining the behavior we want people to achieve, diagnosing the barriers at play, designing solutions and testing them, we can help close the intention action gap.
Perry Carpenter: So, let's recap what Alex was talking about. The behavior model that she prefers and that is used at Fabulous is a four-step model. They start by defining the behavior in as crispy a way as possible. By that I mean you're leaving as little doubt as possible about what the behavior is that you're trying to help someone achieve. And then they move on to barrier. So, behavior and then barrier. And I think that's really an interesting component, because it acknowledges the thing that Matt was talking about: what is the inhibiting pressure? And so, this focuses on removing inhibiting pressures. And then it moves on to solution. This is the scientific hypothesis that the behavior design group comes up with about how to deal with those barriers.
Perry Carpenter: And then finally the testing phase. And I should mention that all of these behavior models do have rigorous test phases that go with them. So, it's not just deciding something and then trying it out randomly. There should be some kind of measurement that you put in place. Some kind of idea of understanding if you're achieving the behavioral outcome that you're going for. Again, this is a four stage model, so BJ Fogg, motivation, ability and prompt. The Matt Wallaert model, which is all around competing pressures, so promoting pressures and inhibiting pressures. And then this last model, which is this four step model, starting with defining the behavior, looking at potential barriers, why somebody may not be able to achieve that behavior, then moving to the solution phase and the testing phase.
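For that testing step, the simplest shape of the measurement is a two-group comparison: hold some people back as a control, give the rest the intervention, and compare the behavior rates. A rough sketch of the four-step loop, with counts that are purely hypothetical and used only to show the shape of the comparison:

```python
# Sketch of the four-step loop: behavior -> barrier -> solution -> test.
# The behavior statement and the counts below are hypothetical examples.

plan = {
    "behavior": "Enable MFA on the corporate SSO account within 7 days of hire",
    "barrier":  "Setup feels long and the benefit feels abstract",
    "solution": "One-tap setup link in the day-one welcome email",
}

def completion_rate(completed: int, total: int) -> float:
    return completed / total

control_rate   = completion_rate(completed=120, total=400)   # no intervention
treatment_rate = completion_rate(completed=190, total=400)   # received the solution

print(plan["behavior"])
print(f"control:   {control_rate:.1%}")
print(f"treatment: {treatment_rate:.1%}")
print(f"lift:      {treatment_rate - control_rate:+.1%}")
# A real evaluation would randomize assignment and test for statistical
# significance; this only illustrates the basic before/after comparison.
```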
Perry Carpenter: Before we move on, there's one other thing that I want to touch on here and this isn't a behavior model per se, so much as it is a technique. A way of approaching behavior. And this is called nudging. And so you may have heard of the concept of nudge theory before. What nudging does is look at all of the different available choices that somebody has at the time of behavior. So, it's looking at the environment. And all of those choices are what is called the choice architecture. And then, someone who's trying to design for a certain behavior isn't necessarily restricting any of those choices. Instead, what they're trying to do is stack the psychological deck in their favor, so that the person more naturally chooses the desired behavioral outcome. This can be as simple as making the desired behavior a little bit easier to do. Like if I want somebody to choose water over soda, put the water within easier reach than the soda. But don't make anything fully off-limits.
Perry Carpenter: Or I can take the thing that I want somebody not to do and make that just a little bit more difficult. And so, in all of this you also see how this relates back to the Fogg model. If I'm wanting to nudge somebody I can add a little bit of motivation. I can make the behavior a little bit easier in some way. Or I might prompt in a very specific way. And so, keep in mind, there's a ton that we can learn by looking at all of these different models and all of them can inform us in different ways and can help balance things out so that we don't get tunnel vision. Ultimately, that's what this is about. Being able to broaden our perspective enough that we can start to create some really crispy defined behaviors that we want our people to take and then we can design with those behaviors in mind.
Perry Carpenter: Okay, there's one more technique that I really want to share with you, because this is powerful and you can think of it as a mash up between nudging and the Fogg model.
Perry Carpenter: If I'm wanting to really stack the deck in my favor for a specific behavioral outcome, I want to create what I call a power prompt. A power prompt is where I manipulate as many of those factors as I can at the same time. So, when I send the prompt, can I also increase motivation? Or, when I send the prompt, can I also make the behavior be or feel easier to accomplish? Or, can I raise motivation and make the behavior feel easier at the time of prompting? Social media platforms do this all the time. You get a post on your time line, they send you an email. That's the prompt. And within that is a little bit of information, John Doe posted on your time line and what that does is opens up curiosity. That's a motivating factor. And they provide you a convenient little link for you to click on. That's making the behavior as easy as possible. And then before you know it, you've scrolled through LinkedIn or Facebook or Instagram for a couple of hours and you're wondering how you descended into madness. But that is the power of a power prompt that has been properly modeled and executed.
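In a security program, a power prompt might look something like the sketch below: one message that arrives at a relevant moment, carries a small motivational hook and puts the action a single tap away. The trigger, wording and link here are hypothetical, invented purely for illustration:

```python
# Illustrative "power prompt": one message that prompts, adds a bit of
# motivation, and makes the behavior as easy as possible. Everything
# below (trigger, copy, link) is hypothetical.

def build_power_prompt(user_name: str, weak_password_count: int) -> dict:
    return {
        # The prompt: delivered right after the event that makes it relevant.
        "trigger": "sent immediately after the quarterly password audit",
        # Motivation: specific, personal, slightly curiosity-provoking.
        "message": (
            f"{user_name}, {weak_password_count} of your passwords showed up "
            "in a breach dump this quarter. See which ones in 30 seconds."
        ),
        # Ability: a single deep link straight to the fix, no hunting required.
        "action_link": "https://example.internal/password-checkup",
    }

print(build_power_prompt("Dana", 3))
```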
Perry Carpenter: Okay, let's recap. We talked about this knowledge-intention-behavior gap and the fact that it leads to three realities of security awareness. Just because someone is aware doesn't mean that they care. If we try to work against human nature we will fail every single time. And what our employees do is way more important than what they know. And the culmination of those three realities is what leads us down the path of behavior design. We looked at the Fogg Behavior Model that consists of motivation, ability and a prompt, all coming together at the time of behavior. And then we looked at the competing pressures model, where there is a promoting pressure and an inhibiting pressure. And then we looked at the four-stage model, where we move from defining the behavior to understanding the different barriers that might prevent that behavior from being able to take place, to thinking through the solution and then all the way into the testing phase.
Perry Carpenter: So, we have just a few more things to cover before the end of this episode. Number one, I want to put some more meat on the bone for you. I want to help us come to an understanding of how important it is for us to define our behaviors specifically. And then I want to move onto how we can apply behavioral design principles within the cybersecurity context. But first, let's talk about how behavioral design principles are used by the dark side.
Perry Carpenter: And when I say dark side, I'm specifically referring to two different things. Number one is addiction. It is the dopamine rush that comes with a well designed application that is using behavior design principles in order to create a habit, where the habit is not a good healthy habit like drinking more water through receiving notifications, the habit is constantly scrolling through the app or always looking for new notifications because of that dopamine rush and that variable reward system that gets employed. One of the people who often receives a lot of negative credit for this type of highly addictive application ecosystem is BJ Fogg, who is one of our guests today. But, in reality, he was one of the earliest ones raising his hand saying there should be concern here because of the highly addictive nature of what was called, at the time, persuasive technology. Let me let BJ address some of this.
Dr. BJ Fogg: I have been unfairly accused of addicting kids to games on mobile phones and so on. Even the Netflix documentary, 'The Social Dilemma,' has a ten-second clip of me and they imply that here I am teaching Stanford students to addict people through social media. That clip is from a health conference. I am speaking to professionals that are creating health products. But they didn't use it that way at all. They used it quite deceptively. And so, even though I really agree with the aims of the film, frankly, back in 1998 I was the one that first said "Hey, there are ethical issues here." In 2002, in my book, I talk about those ethical issues. In 2006, I make a formal testimony to the FTC about the problems that I see coming with persuasive technology. I mean, people can watch. It was a video testimony I gave to them.
Dr. BJ Fogg: So, fast forward to 2020, where they say I'm the problem. No, I'm the one that shone the spotlight on the problem and said "This is coming." So, it really is unfair and in some ways it's hard. Tristan Harris who started the Time Well Spent Movement and was a central figure, was a student of mine and I mentored him and I was helping him move forward before he got famous. I was trying to help him move forward with his work. But now it's hard for me to align with Time Well Spent when this documentary, that I think is clearly associated with them, points to me as a bad guy. I'm not the bad guy. And you look at the history of what I've done. I'm the person calling out "This is a problem and we should be doing something about it." And, in fact, my book 'Persuasive Technology' came out in 2002. Perry I truly expected that policymakers would be calling me and say "Oh my gosh, what should we do about this?" They didn't.
Dr. BJ Fogg: Then when I did the testimony to the FTC in 2006, and I outlined the three things that we should be worried about, then I thought oh, people are going to get in touch. Nobody did. And it really took until about 2016, where people started caring about the issue and then my research had moved on to what I call behavior design, which is not the same as persuasive technology. The filmmaker for 'Social Dilemma' and other things, I think they knew the truth, but I think they just needed somebody to point to as the bad guy and they picked me and that's really unfair. It's hurtful. It drives me a bit bonkers, where you grab a clip of a video and you use it deceptively in a film that's supposed to be about ethics and doing the right thing. So, it's been a little hard for me to get over that, but I've had to. I've got to move on with my work and go forward and help people in the ways that I can.
Dr. BJ Fogg: So, in the face of that, let's say the last year and a half, bam, the book 'Tiny Habits' has come out and it's really resonated and it's been such a delight to get that method and behavior design into people's hands. And every day I get emails from people thanking me for changing their life or changing one of their family member's life. That has just really been, just a gratifying experience. I mean, it's a lot of work, not just to write a book, but also all the research and testing and iteration and everything that led to it and sometimes it feels like oh my gosh, is this ever going to be worth it? But it is feeling worth it now and it's just a wonderful thing.
Perry Carpenter: So, the fundamental techniques of behavior design can be employed in very negative ways, and BJ was one of the first to raise the alarm about that. And I'm super appreciative to BJ for allowing me to ask him that very difficult question and for being so open and forthright in his response. So, BJ, if you're listening, thank you so much.
Perry Carpenter: And now let's turn our attention to the second way that behavioral design can be used to further dark purposes. What I'm referring to is known as a dark pattern.
Perry Carpenter: A dark pattern is specifically a form of user experience design that works against the interest of the end user and for the interest of someone else. And that someone else could be the app developer, it could be a vendor or an advertiser that is getting paid per click, it could be a vendor that gets somebody into a subscription that they're unable to get out of easily. Or it could be a cyber criminal that has tricked somebody into clicking on something in order to take over a device or to scam them out of something else by sending them to a fake website. There's lots of ways that behavior design is used against us.
Perry Carpenter: I'm pretty sure that each one of us has been the victim of a dark pattern at some point. And these can even be employed by otherwise reputable vendors. For example, have you ever been on a website and then accidentally subscribed to something or intentionally subscribed, but then when you go to unsubscribe you can't figure out how to do it. It took minimal effort to subscribe and it seems like you have to hunt and search or click seven or eight layers deep to figure out how to unsubscribe. That is a dark pattern. They took advantage of making things very easy and intuitive to subscribe and very very difficult to figure out how to unsubscribe. Here's one of the more interesting examples that I've seen though. And this is on a mobile device. So, imagine you've opened an email or you're scrolling through a social media feed and then, as happens, in that there's an ad. And so, maybe this is an ad for shoes and you look at the ad and at the same time you notice that your screen looks dirty.
Perry Carpenter: There's a speck of dust on it or a little flick of dirt or a hair. And, as you do, you blow on your screen to try to remove that and that doesn't work. It's still there. So, then you rub it on maybe your shirt or your pant leg and look again. Still there. It's so stubborn. So, what do you do? At that point you naturally, without even consciously thinking, take your finger or your thumb and you try to smudge it away. But, of course, that won't work because this is a dark pattern. That speck of dust or fleck of dirt or hair is not actually on your screen at all. It's part of the image, there to trick you into activating it and because mobile devices are capacitive touch devices, when you touch it with your finger or your thumb it activates the screen. You've essentially clicked on that link at that point and you are taken to where the vendor or the attacker wants you to go.
Perry Carpenter: Whether that's a website, so that they can get pay per click revenue or it's taking you to a malicious website to try to harvest information from you or to do a drive by download of malware onto your device. That is a dark use of behavioral design.
Perry Carpenter: Let's spend a few minutes talking about the specific application of behavioral design principles within our programs. We'll first hear from Matt Wallaert, as he describes the importance of being very specific in what we mean, as we describe the behaviors that we're wanting to design for.
Matt Wallaert: I think people underestimate the importance of a clear behavioral statement. I have provoked any number of interesting fights at interesting companies by forcing them to write behavioral statements and getting them to realize that they're actually not as aligned as they think they are. That is actually the primary mistake. The primary mistake, I think, at a lot of companies is they don't actually know what behavior they want and they think they do, but they're using terms to which they don't agree on the definition until everybody, like, nods their head and it's "Yes, we are in laser alignment." I'm like, "No, you're not." I'm listening to you and you're all saying the same thing, but you're not actually aligned. Security is a great example of that. There's a million times where, you know, alright, we want people to be secure and we all think we're talking about the same thing and then you press on it a little bit and you're like, man, you really have a very different idea of what that means than I do.
Perry Carpenter: So, if you want to do some sort of behavioral design within your organization, it's very important for you to know exactly what behaviors you're trying to take on. Making very general statements like "We want to deal with phishing" is not good enough, because when you think about it, resilience to phishing is actually a collection of several behaviors, and you need to define each one of those, train for each one of those and design for each one of those specifically. When you think about resilience to phishing and the behaviors that relate to it, the collection includes things like not clicking on the link, which brings related behaviors with it. Do you know how to interrogate the link? Do you have an attitude of being cautious or suspicious of every email that you open? Do you know how to look at the headers of the email? What if there is no link in the email? Do you approach that differently, like a business email compromise type of email?
Perry Carpenter: All of these would imply that you need different behavioral statements for each one of these subsets of the types of behaviors that revolve around the topic of phishing. So, the term that BJ Fogg uses for this is to crispify the behavioral statement and Matt was dead on. We tend to use very similar terms in the ways that we describe the behaviors that we're trying to design for. But oftentimes we're not in alignment on what those actually mean, and that's why it's super super important for us to strip away all preconceived notions and all assumptions and really, really be crispy about what we actually mean when we define each specific behavior. Because if we don't do that, then we're going to have unmet expectations and we might design for the wrong thing.
Perry Carpenter: By this point you've got a pretty good foundation in the fundamentals of behavioral design. So, here's where the rubber meets the road. When it comes to cybersecurity, there's not a lot of literature out there that talks about how to use behavior design principles within a cybersecurity context. But one of the groups that has done some great research in this area is a group called ideas42. And Alex Alhadeff was one of the primary writers on the paper that they published. I've brought Alex on to describe the paper, the research and some of the findings, so that you can get a firm idea of how these principles apply within a cybersecurity context.
Alexandra Alhadeff: The paper was written with support from the Hewlett Foundation and the New America Foundation, with the goal of filling the human factors knowledge gap within the cybersecurity community by linking behavioral insights to prominent vulnerabilities in information security.
Perry Carpenter: Alex, can you describe the process that you went through in putting that paper together and maybe get into the form that it took, because it took a little bit of a different form than maybe a traditional research paper would take?
Alexandra Alhadeff: We reviewed the literature and interviewed over 60 experts from the field, to identify problems and solutions around human behavior, where previous solutions had failed and our research was very broad. We looked at behavioral challenges at the level of end users, IT administrators, enterprises and the policy environment as well. And we had expected our primary output to be a white paper. But then we became concerned about the readability and impact of that format within the cybersecurity context, so we decided to write a novella called 'Deep Thought: A Cybersecurity Story.'
Perry Carpenter: By the way, it's a really well put together paper and I'll have a link to it in the show notes.
Alexandra Alhadeff: It's a true-crime tale of a hypothetical attack, along with behavioral science interludes that explain concepts introduced in the narrative, as well as an appendix with key takeaways that can be used to improve security protocols. And the decision to write a novella was partially inspired by the research of Rick Wash, a professor at Michigan State and one of the experts we spoke to. Wash found that stories about cyber events can have a stronger impact on how people think about security than standard awareness training, and this is something you talk about as well, Perry, with what you call the 'Trojan Horses of the Mind.' Not only can stories affect people's decisions, their lessons have wide-reaching potential for influence, because people tend to share and retell stories. By choosing the novella as our medium, we gained the ability to engage with our readers on a deeper level.
Perry Carpenter: Talk about some of the findings from that paper and then maybe even more broadly within the cybersecurity context. Where do behavioral design principles fit in within a cybersecurity context?
Alexandra Alhadeff: An example that really resonated with me during the research is that of security warnings. People ignore warnings 90% of the time, but warnings are important. They indicate expired SSL certificates, untrustworthy web pages and malware risk.
Perry Carpenter: What is it that makes people not take security warnings seriously or filter them out within their mind?
Alexandra Alhadeff: People habituate to, or get used to, warnings. Our neurological response to stimuli drops dramatically after the second exposure and continues to decrease with each subsequent exposure. Another barrier is around mental models. A common mental model is that warnings aren't worth paying attention to, which isn't a surprise, given that warnings are false positives 81% of the time. And even in the case of true positives, the user might never become aware of the infection, leading them to believe it was a false positive and to mistrust similar warnings in the future. That second problem, us thinking that a true positive is actually a false positive, was extremely fascinating to learn.
Perry Carpenter: Alex and her research colleagues also had some interesting findings about how cognitive bias interferes with our understanding of risk.
Alexandra Alhadeff: Another barrier is related to the affect heuristic. Research has shown that people sometimes judge the risk of taking a particular action, not based on the calculated risk, but instead on how they feel about the decision or action. So if you're excited to stream free music or a movie, you might discount the risks associated with accessing those sites and dismiss a warning altogether. And present bias also has a part to play here, because we tend to overvalue immediate benefits. We'll likely choose to watch the movie now and deal with the consequences later, whatever they might be.
Perry Carpenter: One of the most interesting recommendations that came out of that research was this idea of using polymorphic warnings, so that we can start to combat the fact that our mind is constantly wanting to filter out warnings, or adapt to the warning landscape.
Alexandra Alhadeff: Warnings that change size, shape and orientation have been found to be resistant to habituation, at least in the short term. Another step we can take is to make the consequences of ignoring a warning more salient and vivid. For example, you might let people know that should a bad actor steal their credit card information, they can ruin their credit history. This isn't pure information dissemination anymore; you're now infusing emotion into your message. And taking it a step further, warnings might play short, 30-second videos of victims telling stories about their experience with a hack, and once the video ends, the user can proceed. Such an intervention might be effective because stories can make information more memorable and relatable.
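As a rough illustration of that idea, a polymorphic warning could re-randomize a few presentation attributes every time it is displayed, so the dialog never looks quite the same twice. The specific attributes and values below are assumptions made for illustration, not drawn from the research Alex describes:

```python
import random

# Sketch of a polymorphic warning: the wording stays constant, but the
# presentation varies on every display to slow down habituation.
# The attribute choices below are illustrative assumptions.

SIZES     = ["small", "medium", "large"]
POSITIONS = ["top-center", "center", "bottom-right"]
ACCENTS   = ["#d9534f", "#f0ad4e", "#b30000"]
LAYOUTS   = ["icon-left", "icon-top", "banner"]

def render_warning(text: str) -> dict:
    """Return one randomized presentation of the same warning text."""
    return {
        "text": text,
        "size": random.choice(SIZES),
        "position": random.choice(POSITIONS),
        "accent_color": random.choice(ACCENTS),
        "layout": random.choice(LAYOUTS),
    }

for _ in range(3):
    print(render_warning("This site's certificate is not trusted. Continue anyway?"))
```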
Perry Carpenter: We've covered a ton of material on this episode, but there's one more thing that I need to let you know, and that is what to do when you've designed for behavior and it's not working. You've got a really nice crispy statement, you've built some great behavioral interventions and they're just not working. Well, I can tell you how to start to debug this, and this advice comes from BJ Fogg. The way that you start to debug a behavior design that's not working is to run the equation backwards. So, remember, the equation forward is B=MAP. If you're not getting the behavior that you want and you run that equation backwards, you start with the P. You start looking at the prompt. Is the prompt visible, or are you dealing with prompt fatigue? Is the prompt not entering the person's consciousness, or is it just not effective at reaching that person in some way? So debug the prompt first and see what you need to do there.
Perry Carpenter: If the prompts are okay, then you start to look at ability. Is this thing just too hard for somebody to do? And if it is, can you make it easier? Can you add assistive technology? Can you put training in place? Can you deal with that in some other way? And then after you've solved for the ability piece, only then do you go to motivation, because motivation is the most difficult thing to solve for. And when it comes to things like motivation, then you start looking at alternative methods like nudging or bringing in different communication strategies, and I would encourage you to go back to episode one on unleashing 'Trojan Horses for the Mind' to think about how you might increase motivation.
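Written out as a checklist, that backwards walk might look something like this; the diagnostic questions are a paraphrase of the order described above (prompt, then ability, then motivation), not a formal method:

```python
# Debugging a behavior design by running B = MAP backwards:
# check the Prompt first, then Ability, then Motivation.
# The questions below are an illustrative paraphrase.

def debug_behavior(prompt_noticed: bool, easy_enough: bool, motivated: bool) -> str:
    if not prompt_noticed:
        return ("Fix the PROMPT first: is it visible, well timed, "
                "and distinct enough to survive prompt fatigue?")
    if not easy_enough:
        return ("Fix ABILITY next: can you simplify the task, add tooling, "
                "or provide training so the behavior feels easy?")
    if not motivated:
        return ("Only now work on MOTIVATION: nudges, framing, stories, "
                "'Trojan Horses for the Mind'.")
    return "All three look healthy; re-check your behavior statement."

print(debug_behavior(prompt_noticed=True, easy_enough=False, motivated=True))
```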
Perry Carpenter: Well, we've reached the end of our time together today. I'm going to give Matt Wallaert a final word and then I'll be back to summarize with a few closing thoughts.
Matt Wallaert: I can't make people do the right thing, but I can create environments in which it is dramatically more likely that they do, in fact, do the right thing, and I think that's where we under-invest.
Perry Carpenter: Honestly, we could have filled several hours with content all about behavior design and, so, you should be expecting a part two or a part three of this at some point. But I think we've covered a lot of the great foundational elements today. We touched on the knowledge-intention-behavior gap and the fact that out of that flow three realities of security awareness. Just because I'm aware doesn't mean that I care. If we try to work against human nature we will fail. And what our employees do is way more important than what they know. And then we moved into different behavior models. The Fogg Behavior Model: behavior equals motivation plus ability plus a prompt. We talked about the competing pressures model that Matt Wallaert covers and then we talked about this four-stage model, and the distinguishing piece, I think, in the model that Alex uses is that it specifically talks about the barriers to the behavior.
Perry Carpenter: Then we brought in the use of nudges, power prompts and other ways that we can subtly influence the behavioral direction of people. And then, of course, we talked about the dark side of behavior design and how some behavior design strategies can be used to help create an addictive circumstance and some can be used to encourage behavior that works against the self-interest of the person being designed for. And then lastly, we capped off with the need to very precisely define the behaviors that we're wanting to build interventions for, so that we can actually meet expectations and have the best possibility of doing something that's actually going to work. And then we talked about how to debug behaviors by working the Fogg model backwards. Starting at the prompt, looking to see if something has gone wrong there. Is the prompt effectively invisible to somebody, or is it not reaching that person in some way? If that's fine, then you move to ability and see if the behavior is easy enough for somebody to do, and then finally you get to motivation.
Perry Carpenter: Thank you so much for staying with us. I know this was a longer episode, and thank you Alex and BJ and Matt for sharing your time and your expertise with all of us. If you're interested in more information related to behavior science or behavior design, I encourage you to take a look at our show notes, where we're gonna have references to BJ's book, Matt's book, the ideas42 paper and a whole host of other things related to these topics. And if you've enjoyed today's episode, I would really appreciate it if you'd take just a couple of seconds to go to Apple Podcasts, rate the show and consider leaving a review. That does so much to help. I'd also encourage you to post about it on social media, recommend it to friends and, if you haven't yet, go ahead and subscribe and follow wherever you like to get your podcast fix.
Perry Carpenter: If you want to connect with me, feel free to reach out on LinkedIn or Twitter. In addition, I also participate in a group on Clubhouse and we meet once a week on Fridays. It's called The Human Layer Club. So, until next time, thank you so much again. I'm Perry Carpenter, signing off.