EP 3 – Adam Shostack: 4 Question Framework For Simple Threat Modeling

Most people think about threat modeling as an extensive, costly and heavyweight exercise. But what if it didn’t have to be? What if threat modeling could be as easy as asking and answering a few simple questions?

In today’s episode, we speak with Adam Shostack about his simple four-question threat modeling framework. Adam’s framework was developed based on 20+ years of threat modeling experience ranging from startups to more than a decade at Microsoft. He believes deeply that organizations must rethink their approach to threat modeling.

In this episode, Adam walks through his framework and teaches us how we should all be approaching threat modeling.

Topics discussed in this episode:

  • Why threat modeling shouldn’t only be for organizations with large teams of application security engineers.
  • How to bridge the gap between the security team focused on threat modeling and the development/engineering team.
  • How security engineers can support and train their developers on how to incorporate threat modeling into their day-to-day work.
  • Where threat modeling should fit into your application security program priorities.
  • The surprising benefits that threat modeling brings — outside of knowing the risks that exist.
  • How most organizations let perfect be the enemy of good (and what they should be doing instead).

Resources Mentioned:
Shostack white paper — Fast, Cheap, and Good
Shostack 1-minute educational clips on YouTube
Shostack threat modeling resources

Transcript
Harshil: Today we have a very special guest with us, Adam Shostack. For those of you who don't know Adam, he is a technologist, an author, and an undeniable authority in the field of threat modeling. Adam, welcome to the show. You have been a very important figure in anything related to threat modeling. I've used your work for my own inspiration, and I've learned a lot from what you have talked about in the community, so this really makes me very excited about this episode. Now let's jump right in. Most of our audience will already know what threat modeling is, but I would love for you to help us understand how you think about it. What does it actually mean? Why is it important?

Adam: Sure. So let me start with what threat modeling is, then go to why it's important. A threat in this context is a promise of future violence: the kid who threatened to beat me up if I didn't give him my lunch money. And threat modeling is the use of models, abstractions, so we can think about the security of a system before it's completed. That's a little bit unique. Out of all the security tools we have, we've got a lot of things that help you analyze your source code or your runtimes or your operations. But the cheapest time to make a fix is before you've written any code. You don't have anything to throw away, you don't have any compatibility issues. And so threat modeling is important because it allows us to ask: what are we working on? What can go wrong? What are we going to do about it? Did we do a good job? It brings those questions to bear in a way that focuses all the rest of our security work, and does so in a way that has a huge payoff. It's the ultimate in shifting left.

Harshil: That's phenomenal. I think that's an interesting way to talk about threat modeling, which is modeling potential future threats of violence, as you describe it. In a fast-moving, agile environment, especially when everything is about speed of delivery and pushing features out, developers love that and leadership loves it. How do you take the opportunity to take a step back and think about what we're building, what the security constraints around it are, what controls are built in, and what the potential threats are to whatever they are building? How does that happen?

Adam: So the great thing about the four question framework which I've given you is we can apply it at different scales. I can ask what can go wrong with this thing that we're working on right now, with this user story, with this sprint, with this improvement. And I don't need to do the whole step back and think about the whole system. It's great to do so, but in a fast-moving team we accumulate some technical debt, and we want to minimize the debt we accidentally accumulate. Threat modeling, asking what can go wrong with this improvement, lets us be cognizant of it. And we can ask that in two minutes. We don't have to spend hours on it. It doesn't need to be Waterfall-y or heavyweight. We can also go look at the mountain of security debt that we've got, and security debt is just a type of technical debt, and say, okay, when we look across this whole thing, what are the big issues? We can do that now and then, and so threat modeling can fit in and align with these agile practices. And these practices are great. I do a lot of training, and in fact, today in the post-training retrospective that we did, we talked about a different way of tracking who's speaking, so that we can make sure everyone is getting a chance to speak up. We said, oh, we could do this or we could do this, and so we've got an experiment. We're going to run the experiment in class tomorrow using Google Sheets to track things. That sort of quick iteration is tremendously important to product improvement. I love it. And threat modeling fits into it.

Harshil: Right. I think the fundamental challenge here, though, is that even if threat modeling doesn't take a lot of time, let's say hypothetically it takes two minutes or five minutes of asking the right questions, the inherent assumption is that somebody knows what questions to ask, and that that person is present in the dev team, or present when things are being planned, designed, or built. I think that is where a lot of these functions break down: typically the people who know what questions to ask, or how to do threat modeling, belong to a security team, or there is some limited group of people who have that extra training, as compared to a Scrum team, for example, that may or may not have security representation or a person who's aware of how to do threat modeling and how to think about the architecture from a threat perspective. Do you have any thoughts on how to bridge that gap?

Adam: So the first thing I want to say here is you already asked the right question to start with, which is simply: what can go wrong? Asking that is a fine place to start. And then there's a whole set of simple, easy-to-use ways to engage with threat modeling. In fact, I released a white paper not too long ago entitled Fast, Cheap, and Good. The white paper details six or seven easy ways to do it, where you don't need to know about STRIDE, which is a mnemonic to help you remember threats, in order to threat model. It's helpful, but start out with what can go wrong here. Maybe look at the OWASP Top Ten. Maybe say, I read a news story, could something like that impact this? If we think about this not as threat modeling being this thing on a pedestal, but threat modeling as a method, a way of thinking, then the development team can do it. And what's more, they should do it. We're moving more and more to a world in which developers are responsible for quality. Security is an aspect of quality. We need to help them, but they can do it.

Harshil: Yeah, I love the title of the paper, Fast, Cheap, and Good, because that's the exact opposite of what most people think threat modeling is. I cannot count how many people I talk to almost every single week who say, oh yeah, we have a threat modeling team that does user interviews over a period of several weeks, and we produce this long-form document, which is 15 pages long, that obviously nobody looks at. But that's what people think of threat modeling as: this extensive manual exercise that's very expensive, and you can only do it when you have enough resources on your team. Obviously, nobody's arguing against the importance of doing it. But there's this perception that threat modeling is a heavyweight exercise that is only doable at scale by teams with a lot of AppSec engineers.

Adam: I'm sitting here nodding, which of course nobody listening to this podcast can see. But yeah, that's why I gave the paper the title I did. We created these approaches for a lot of good reasons, but the world has shifted and the need has shifted, and we've got to meet people where they are.

Harshil: I think there are a lot of interesting trends now. At least a lot of people are recognizing that this is a challenge. I've been hearing more and more about threat modeling as code. I think OWASP has a fantastic project. Do you have any thoughts on how mature those things are? Have you seen people use threat modeling as code? To what extent?

Adam: So first I want to say I love the experimentation. The set of tools available for threat modeling today is so diverse. They're so different from when I created the Microsoft threat modeling tool. The experimentation of writing your threat model in code, where we apply some structure to it, where we can do some analysis to say, is it present, is it decent? This is really powerful, and there are limits to it. But first, I want to start with the power. It's so cool to have it be there as part of the code, because that's what developers work on. That's the deliverable developers think about, the thing they check to make sure it's properly up to date. And so encoding our threat models with the code has tremendous power. I think it's early, but I think we're going to see a lot of evolution of the tooling over the next couple of years, in the same way that we see evolution in all sorts of other tooling. And I'm excited to participate, and I'm excited to watch it happen.
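To make the "threat model as code" idea concrete, here is a minimal sketch of what an in-repo model might look like. It does not reflect the Microsoft threat modeling tool or any specific project Adam mentions; the element names, fields, and the coverage check are all invented for illustration. The point is simply that once the threats live next to the code, a script or CI job can ask whether they are present and decent.

```python
# Hypothetical, minimal "threat model as code" sketch. The element names,
# fields, and the coverage check are illustrative assumptions, not the API
# of any real tool.
from dataclasses import dataclass, field


@dataclass
class DataFlow:
    source: str
    sink: str
    crosses_trust_boundary: bool
    threats: list = field(default_factory=list)      # "what can go wrong"
    mitigations: list = field(default_factory=list)  # "what are we going to do about it"


def check_model(flows):
    """Flag flows that cross a trust boundary but have no recorded threats,
    or threats with no mitigations: a crude 'is it present, is it decent?'
    gate that could run in CI alongside the code it describes."""
    problems = []
    for f in flows:
        if f.crosses_trust_boundary and not f.threats:
            problems.append(f"{f.source} -> {f.sink}: no threats recorded")
        if f.threats and not f.mitigations:
            problems.append(f"{f.source} -> {f.sink}: threats without mitigations")
    return problems


if __name__ == "__main__":
    model = [
        DataFlow("browser", "web frontend", crosses_trust_boundary=True,
                 threats=["SQL injection via search form"],
                 mitigations=["parameterized queries"]),
        DataFlow("web frontend", "orders S3 bucket", crosses_trust_boundary=True),
    ]
    for issue in check_model(model):
        print("MISSING:", issue)
```

Run as-is, the check flags the second flow, which crosses a trust boundary with nothing recorded against it.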

Harshil: Fantastic. Now, a related challenge that a lot of security practitioners think about is that because threat modeling is important, we want to do it, we have to do it. But there are also a number of other initiatives in a typical AppSec program, like performing risk assessments, scanning with tools, pen testing, bug bounty, secure coding training for developers, all those kinds of things. There are so many things you can do. But if you are just starting an AppSec program, or you have some basic things in place and you're trying to make it more mature and more comprehensive, where does threat modeling land in the priority list? And how should someone think about whether this is the right time to initiate threat modeling exercises, or whether they should wait?

Adam: So once you move from thinking about threat modeling as a heavyweight thing to a super lightweight thing, I think now is the time to start. And I say that because all of these things, the pen tests, the bug bounties, the SAST, the DAST, the secure coding, all of it works better when you have an understanding of the question: what can go wrong? The reason we use SAST is because we understand that people will write vulnerabilities. The reason we put pen tests in place is because we know test coverage is never perfect, and a different perspective will help. The reason we put bug bounties in place is because we don't trust the pen tests to work. The simple act of asking what can go wrong and what are we going to do about it has such an impact on our ability to focus the remainder of our work. But more importantly, it gives us an opportunity to start thinking about the defenses in better ways. We can start thinking about building parsers in memory-safe languages, or even using tools from Microsoft like EverParse. This is a formal parsing language that takes a bit of upfront work, but if you're parsing a lot of data from untrustworthy parties, it gives you the ability to do so with a great deal more safety. And once you've done this, you can, as I understand it, not have to worry about things like buffer overflows, right?
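Adam's point about parsers is worth making concrete. The sketch below is not EverParse and is not in a memory-safe systems language; it is just a small, hypothetical example of the habit the threat model pushes you toward when you parse data from untrustworthy parties: declare explicit limits and check every length before trusting it. The record format and the size cap are made up for the illustration.

```python
# Illustrative only: a made-up length-prefixed record format, parsed with
# explicit bounds checks. The idea is to treat every byte from an untrusted
# party as hostile until it has been validated.
import struct

MAX_RECORD = 64 * 1024  # assumption: the protocol caps records at 64 KiB


def parse_records(buf: bytes):
    """Parse [u32 length][payload] records, refusing anything malformed."""
    records = []
    offset = 0
    while offset < len(buf):
        if len(buf) - offset < 4:
            raise ValueError("truncated length header")
        (length,) = struct.unpack_from(">I", buf, offset)
        offset += 4
        if length > MAX_RECORD:
            raise ValueError(f"record length {length} exceeds limit")
        if len(buf) - offset < length:
            raise ValueError("declared length runs past end of input")
        records.append(buf[offset:offset + length])
        offset += length
    return records


if __name__ == "__main__":
    good = struct.pack(">I", 5) + b"hello"
    print(parse_records(good))          # [b'hello']
    evil = struct.pack(">I", 10 ** 9)   # lies about its length
    try:
        parse_records(evil)
    except ValueError as e:
        print("rejected:", e)
```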

Harshil: Yeah. So it sounds like investing a little bit of upfront time could help you avoid dealing with repetitive patterns of the same issues recurring again and again. You know what would be fascinating, though? I've seen a lot of people do code-assisted pen tests, where you have your code and you're pen testing with that perspective. I've never heard anyone talk about a threat-model-assisted pen test. It's so obvious, right? You take a threat model output and you think about what makes sense from an attack-vector perspective.

Adam: You know, every pen tester does a little bit of threat modeling. They ask the question, what is this thing in front of me? They have exploratory techniques with which they figure out a model of what it is they're looking at, and then they ask, what can go wrong? Why don't we feed them our threat models? And I know organizations that do this: here's the threat model, and you get points for everything you find in our system that is not listed in the system model diagram.

Harshil: Right. It's a feedback loop between the two of them, the threat modeling and the pen testing, to make both of them better.

Adam: Yeah. And also in that same feedback-loop sense, I know of teams where the SOC won't greenlight turning on a new service until they get a threat model. Because if they don't have a threat model, they don't know where the logs should be coming from, and so they have no way to validate what it is they're onboarding and no way to plan response for it.

Harshil: Yeah, that's great. Actually, I'm guessing that would be super helpful for those teams to really tune in to what the dev team should be logging, depending on the risks identified in this exercise. Exactly. So going back to your earlier comment about the four key questions for how to do threat modeling: typically, when people talk about threat modeling, they instantly think about STRIDE or some of those industry-recognized frameworks. Tell me a little more about why you think starting with those four simple questions is potentially a better approach than doing a more theoretical exercise.

Adam: So the reason I like to start with these four questions is because they work as a frame. I call it the four question framework: What are we working on? What can go wrong? What are we going to do about it? Did we do a good job? And then something like STRIDE works as a way to structure our answer to what can go wrong. Similarly, we can use a kill chain to answer what can go wrong. People used to use attack trees to structure their answer to what can go wrong. When we get to what are we going to do about it, we can put controls in place. Those can be preventative, they can be detective, they can be responsive. We can feed into a risk management process, and we can apply controls or mitigations, but we can also accept, we can eliminate, we can transfer. One of the big questions I always have for a risk management program is: where are you getting your list of risks? Threat modeling can feed that. And so there's a lot of advice out there to use this risk scoring system or this prioritization system within threat modeling. We can have that conversation, and I'm good with that. But the advantage of the four question framework is I can tell you that this is how you should threat model without being overly constraining. If I say you should use data flow diagrams and you need to have at least 15 to 20 elements in your data flow diagram, boom, I just killed any engagement we might have with agile. But if I say, what are we working on? Well, what are we working on this sprint? That's the reason for the four question framework: it lets threat modeling engage with the other engineering factors. All of the choices we're making are about what structures and tools we use, what support systems we need, what the outputs need to look like, but it's all the same recognizable way of answering the four questions within the way we engineer. For example, within the way we engineer here at Tromzo, we could say we do this and this and this, and it's an answer to what can go wrong and what are we going to do about it, and we record those in our user stories, just to take an example.
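As a rough illustration of how the framework and a structure like STRIDE can fit together, here is a small, hypothetical sketch of recording four-question answers against a single sprint item. STRIDE and the mitigate, accept, transfer, eliminate responses come straight from the conversation; the record layout, field names, and the example threat are assumptions made for the sketch.

```python
# Hypothetical sketch of recording four-question answers against one sprint
# item. STRIDE categories and the response options come from the episode;
# the record layout itself is an assumption.
from dataclasses import dataclass
from enum import Enum


class Stride(Enum):
    SPOOFING = "S"
    TAMPERING = "T"
    REPUDIATION = "R"
    INFO_DISCLOSURE = "I"
    DENIAL_OF_SERVICE = "D"
    ELEVATION_OF_PRIVILEGE = "E"


class Response(Enum):
    MITIGATE = "mitigate"
    ACCEPT = "accept"
    TRANSFER = "transfer"
    ELIMINATE = "eliminate"


@dataclass
class ThreatNote:
    working_on: str          # Q1: what are we working on?
    what_can_go_wrong: str   # Q2: what can go wrong?
    category: Stride
    response: Response       # Q3: what are we going to do about it?
    follow_up: str           # Q4 hook: how we'll check we did a good job


note = ThreatNote(
    working_on="password reset flow (this sprint's user story)",
    what_can_go_wrong="reset tokens are guessable, letting an attacker spoof a user",
    category=Stride.SPOOFING,
    response=Response.MITIGATE,
    follow_up="unit test that tokens are 128-bit random and single-use",
)
print(f"[{note.category.name}] {note.what_can_go_wrong} -> {note.response.value}")
```

A note like this is small enough to live in the user story itself, which is the recording habit Adam describes.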

Harshil: That's a great way to look at it. One of the challenges I ran into personally is that if the model is free-flowing like this, it's not very specific. The challenge then becomes that the people who can actually perform a good threat model become the bottleneck, because you're looking for somebody who's experienced enough, who understands business as well as technology, and who can think through multiple levels of abstraction to identify those things. That poses an inherent limitation on how I can scale threat models across my company, because I only have a limited number of people who are very senior. So have you seen people solve that challenge, so that we're not relying only on the most senior talent to be able to do this?

Adam: I apologize, but respectfully, you're allowing the perfect to be the enemy of the good, right? You're saying, are our most senior people thinking in multiple levels of abstraction? Yes. I love talking to people who do that. But by saying we need those things and that the developers can't do this, where you might end up is no threat modeling instead of okay threat modeling. That's exactly where a lot of people end up. I've been threat modeling for 23 years. 23 years ago, Bruce Schneier and I wrote a paper on threat modeling for smart cards. I care about this. I can talk all day about it. And it is hard for me to give up on the perfect. I would like everyone's threat modeling to be as good as what I can do, and that only requires 23 years of experience, so it's not a realistic request. We have to get to the point of saying, and this goes to the agile thing: iterate, do it, retrospect, improve, do it again.

Harshil: Let's take an imaginary AppSec team. They have, let's just say, five security engineers. They know how to do threat modeling, but they want to do it consistently across a few critical products or services that their engineering teams might be building. What do you suggest as a good first step? Should they do it all themselves in a limited scope, or should they train the developers, or should they write a document and ask everyone to follow it? What do you think is a good way to get started?

Adam: So do not be the bottleneck; be the support, be the help. Go to those folks and ask them what can go wrong with this thing they're building. Collaborate with them. Show them, prove to them, that 30 minutes of asking what can go wrong opens a can of worms, that there are these problems which exist and we can deal with them. Just asking that one question can be so powerful. And then they'll come back to you and say, can we do that again? Can we do that again? The mistake that Microsoft made, and it was a good, natural mistake to make, was that the way they started was folks like Mike Howard and Window Snyder and Jason Garms would go and help people threat model. And Microsoft has a lot of smart engineers, so they were like, yeah, we got this, thanks, we can do this next time. And they had a paper on STRIDE that was four pages long. And then they started to ask, how do we make this consistent? How do we make this checkable? And so they eventually wrote this 17-step process. It was super heavyweight. It's hard to do because it tried to take into account all of the variations that were there. So being flexible and focusing in on the framework gets you much further. And it took trying to be a bit more prescriptive to discover that.

Harshil: I think that's a great way to look at it: just get started. Right? You don't have to wait for perfection. You can just get started, and eventually the process will improve, which is better than not doing it at all. Do you recommend any places where people can go for learning or training, or resources they can refer their security champions or developers to?

Adam: So, I mean, this is my life, so yes, I do have recommendations. If you go to Shostack.org/resources/threat-modeling, or you just go to shostack.org and click on Resources and then Threat Modeling, there are white papers about how to get started and there are videos. I have a course that I titled The World's Shortest Threat Modeling Course; it's a set of mostly one-minute videos, free on YouTube, that are designed to help you get started. The thing I want to emphasize for people who are getting started is: don't skimp on the retrospectives. You're doing something new. It's scary. It's difficult. People will have questions. Give them time to ask those questions. Don't do the thing you'll see at the end of a webinar, where someone says, are there any questions? No? Okay, well, thank you all very much for coming. Give people time to realize they have a question, to formulate it, to turn their microphone on, to start speaking. That takes 30 seconds. And if you don't give them the time, then they'll go, eh, whatever, I'm not going to bother emailing Adam about that because he's gone now.

Harshil: Yeah, I think that one-minute video idea is fantastic, especially because that's about how much time I can get developers to spend on security anyway. Now, let's imagine a world where some organizations are doing this consistently, even in an agile model, where developers or security champions do threat models fairly consistently. All of that exercise results in documented threats. But typically, in my experience, a lot of times what happens is that even if you do the right threat models and spend a lot of time upfront identifying those threats and documenting them, it ends up living as a document somewhere that nobody ever looks at. A 27-page PDF. A fantastic, beautiful-looking diagram that never gets updated. Have you seen any security teams or dev teams really make it actionable, and not just stop at identifying threats but really act on them?

Adam: So I love bugs. I love tracking work items out of a threat model as bugs. We might have a bug that the web frontend is using dynamic SQL, and so it's probably vulnerable to SQL injection. Or we might have a bug that everything in our system is able to write to this particular S3 bucket, and so we lack integrity controls. Or you might have that same sort of issue expressed as a bug that says we need to fix the permissions on our S3 buckets. And so if you start to think about these outputs as the sort of thing that can be addressed by feature work, it brings us back to the thing that engineers like to do, which is ship features. If the feature is to make sure that only these four apps can write to this S3 bucket, and we write some unit tests to validate that that's happening, well, that addresses a tampering threat, and it's a feature that I can go write and sign off on and say, yes, I shipped that thing. Rather than put it into a PDF, put it into your bug tracking system.
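Adam's S3 example translates naturally into a test. The sketch below assumes a hypothetical bucket policy and invented role ARNs; in a real project the policy document would come from your infrastructure-as-code output or an API call such as get_bucket_policy rather than an inline string. The shape of the check is the point: the threat model's tampering finding becomes a feature with a unit test that can be shipped and signed off.

```python
# Sketch of turning "only these four apps may write to this bucket" into a
# testable feature. The policy document and role ARNs are invented.
import json
import unittest

APPROVED_WRITERS = {
    "arn:aws:iam::111122223333:role/orders-api",
    "arn:aws:iam::111122223333:role/billing-worker",
    "arn:aws:iam::111122223333:role/import-job",
    "arn:aws:iam::111122223333:role/support-tool",
}

# In practice, load this from IaC output or the bucket itself.
BUCKET_POLICY = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {"AWS": ["arn:aws:iam::111122223333:role/orders-api",
                            "arn:aws:iam::111122223333:role/billing-worker"]},
      "Action": ["s3:PutObject"],
      "Resource": "arn:aws:s3:::example-orders-bucket/*"
    }
  ]
}
""")


def writers(policy):
    """Collect every principal granted s3:PutObject by the policy."""
    found = set()
    for stmt in policy.get("Statement", []):
        actions = stmt.get("Action", [])
        actions = [actions] if isinstance(actions, str) else actions
        if stmt.get("Effect") == "Allow" and "s3:PutObject" in actions:
            principals = stmt.get("Principal", {}).get("AWS", [])
            principals = [principals] if isinstance(principals, str) else principals
            found.update(principals)
    return found


class TestBucketWriters(unittest.TestCase):
    def test_only_approved_writers(self):
        # The tampering threat from the threat model: an unapproved service
        # writing to the bucket would undermine its integrity.
        self.assertTrue(writers(BUCKET_POLICY) <= APPROVED_WRITERS)


if __name__ == "__main__":
    unittest.main()
```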

Harshil: I think that's a good idea. It definitely should live in the same place dev teams prioritize all their other work from. So that's a basic, simple thing we all can do: make it live in the same repository where work gets prioritized, so it at least gets visibility. A related question: when you have this list of threats identified, when you have a documented threat model, the obvious outcome is that now you know what you have to protect against and what the potential risks relevant to this particular artifact are. Have you seen teams realize any other benefits from this exercise, outside of just knowing the risks?

Adam: The weird thing that happens is that as you start to communicate about your designs and your goals, rework often drops away, because everybody has an understanding of what's being built. Or rather, before we even get to that understanding, the threat model becomes an opportunity to have a design conversation. And because we're doing it for security, and because we're dealing with partner teams, we bring those partner teams in and say, hey, come to the threat model discussion so that you know what our expectations of you are, and we can make sure we nail down who validates what, where. Then all of a sudden everyone understands better what work is being done. It can be a place for some of the design conversation, and not design in the sense of I have to create a beautiful Visio diagram and send it over to the art department for shading, but design in the sense of we're doing this work like this right now, and we want to make sure that everyone we're engaged with knows what we're doing, so that the pieces fit together when we're done.

Harshil: That's a great way to look at it. As a result of this exercise, as you said, just having that conversation broadly with different members could be a great training opportunity in itself. Obviously, you're talking about the design implications and threats and so on, but the developers might relate better to the security concerns of what they're building in this context than by sitting through a 20-minute video of Java secure coding training.

Adam: Yes. The thing to watch out for, the threat here, is that I'm calling your baby ugly. The thing that we did before we understood how to threat model will have some security flaws. And so you really want to make clear that threat modeling, that discovering threats, is a win even when it's annoying, because we're discovering it rather than an attacker discovering it, and that lets us take the time to fix it when we want to fix it. When you set that expectation, it works way better than if you just come in and start discovering problems.

Harshil: Fantastic. We are right at time for closing. Do you have any closing thoughts that you'd want to share with our audience?

Adam: I think my closing thought is that this doesn't have to be a big, heavyweight thing. It's something you can start doing today. You ask the four questions: What are we working on? What can go wrong? What are we going to do about it? Did we do a good job? If you focus on those questions, your development will get better and more secure. It is the highest-leverage security thing I think anyone can do, and you can literally start immediately after listening to this podcast.

Harshil: Fast, cheap and good threat modeling, right? That's right. Fantastic. Adam, it was a pleasure having you on this podcast. Thank you so much.

Adam: You had great questions. Thank you.

Harshil: Thanks for listening to the Future of Application Security. If you've enjoyed this episode or you are new to the show, I'd love to have you subscribe wherever you get your podcasts so you don't miss any episodes. And if you like the podcast, I'd be grateful if you could leave us a review on Apple Podcasts. Thank you for listening.
