
EP 16 – Mukund Sarma: How Chime Built a Scalable Product Security Program


Chime, one of the fastest growing players in the financial technology space, has a mission of providing financial stability for their customers by eliminating many of the issues that come with traditional banking.

In today’s episode, Mukund Sarma, Director of Product Security at Chime, shares how he helps his team address the challenges in building security programs, and maintaining a solid and proactive security culture within the company.

Topics discussed:

  • How Mukund got started in cybersecurity.
  • His experience in building application security programs for FinTech companies. 
  • Different approaches in risk mitigation in FinTech, product security, and application security.
  • What product security is and how its definition differs from company to company.
  • What skill set Mukund looks for when hiring engineering and security teams.
  • How Chime’s internal Rails application, Monocle, helps their team with strategic engineering and security decision-making.
  • Why Mukund opted for a gamified approach for their security processes.
  • Why Mukund’s team decided to integrate GitHub badges within Monocle.
Transcript

Harshil: Hello everyone, and welcome back to another episode of the Future of Application Security. Today, I am super excited to have a guest with us who has done some very interesting work in the software security, product security space. Welcome, Mukund Sarma, to our latest episode. Mukund, I'm so excited to have you here.

Mukund: Thanks, Harshil. Nice to be here too.

Harshil: So, everyone, Mukund is a Director of Product Security at a company called Chime, but I will let him introduce himself. And Mukund, I would love to hear obviously what you do today, but also how you got started into the space of cybersecurity as well.

Mukund: Cool, thanks. I'll tell you what I do today, and then I'll go into how I got into cybersecurity. So currently I run a security organization at a company called Chime that focuses mainly on product security. What product security at Chime means is it's a combination of three different teams. We have application security, we have infrastructure and cloud security, and we also have a security engineering team. Our AppSec team is a very typical AppSec team in terms of looking at how we secure all the interfaces and applications that we build for either our members or our internal Chime employees. For infrastructure and cloud security, we are primarily based on AWS, so it's more of an AWS cloud security team, if you will. We do have small footprints in other clouds, so we handle that setup too. And security engineering is more or less a platform security team, if you will. They build services, frameworks, and libraries for the rest of the company to use. So you've got things like how do I encrypt, or how do I tokenize a piece of data, or how do I securely store a secret. Things in that sense, we build libraries and frameworks for the rest of Chime to use. There's also a team that focuses mainly on how to eradicate a few classes of vulnerabilities at the framework level. All our services internally are built using Chime frameworks, so how do we add security controls right there so that some classes of vulnerabilities are not something an engineer needs to worry about? So yeah, that's the team. The team is about 20 folks right now. As for how I moved into security, I started working as a software engineer in the world of crypto, where crypto didn't mean Web3. It was actually about confidentiality, integrity, and non-repudiation kinds of aspects.
So I started working with the government of India, building different ways of doing key exchange and, depending on the context and what's needed, looking at schemes where there are multiple authorizers to lock or unlock a key, things in that sense of the world. So that's where I initially started working in the world of security, but then I realized I actually liked AppSec a lot more when working with a few folks in that space. I had a natural instinct to move towards the AppSec field, so I started working on the side on how the whole world of AppSec is structured, what web application security is, why some of these classes of vulnerabilities even exist, and how you prevent them. And then from there I moved purely into an AppSec role. I worked with a firm for about nine months where I did a lot of web app pen testing and learned how to build tools to solve a few aspects of vulnerability detection. Like, you have specific types of vulnerabilities that are just relevant to your ecosystem, so how do you actually start building tooling for that? Then I worked on a bunch of consulting gigs where I got a lot more exposure to different industries and how security is viewed from those perspectives. From there I went to work for another fintech company called Credit Karma, where I was for about five years. I started on their AppSec team, then we built a team called Security Architecture. My friend Danny and I built this team at Credit Karma, which primarily dealt with how you actually enforce secure architecture across the rest of the ecosystem.
There are a lot of engineering teams that have their own architects, but how do we work with those architects and these principal-level engineers to make sure that security is embedded in how they're looking at the program? So yeah, I was there for about five years and then I moved here to Chime. I've been here since April of last year, 2021.

Harshil: Phenomenal. That's so cool. So you've got pretty decent experience in the world of fintech companies, working at some really good logos, Credit Karma and Chime. So tell me a little bit about, you know, when you build an AppSec program. Especially when you were a consultant, I'm sure you saw a lot of different AppSec programs at different companies. But when you think about application security in the world of fintech, is there anything particularly special there, or is it just like every other vertical?

Mukund: I think it is different in that you're dealing with people's financial lives, whether your member base is in the thousands or the millions. Imagine one day waking up and not being able to buy a coffee because the application is down or the whole company got breached. It could disrupt every aspect of your day-to-day functioning: paying for gas, getting coffee, taking the metro to school or work, whatever it is. So fundamentally we're dealing with a much more sensitive set of applications, and the ecosystem itself is much more highly regulated, I would say. Regulations aside, you are accountable for the safety of people's data and money, and not just the security, but the firm itself. So yeah, I would treat the risk of something in the world of fintech as different from an industry that is more like marketing or gaming or any other industry. It depends on what the risk is you're dealing with.

Harshil: That's a great point. I mean, all of this is dependent on the risk that exists in that particular vertical. But if you think about typical risk-based approaches to decision making, most of the conversations at conferences and on the internet are focused on governance, risk, and compliance type approaches. When you think about risk at a strategic level in a vertical, most of it is around how you do compliance and things like that. But if you put a very specific lens of application security, or product security in general, on it: is the risk that's present in the fintech vertical different for the world of product security or application security? And I'm asking that question so our audience can infer the same for their own verticals, right? Whether they're in health tech or financial services or whatever it is.

Mukund: Yeah, for sure. And I think that's a good question. Think of it as more of a functional risk than the overall cybersecurity risk for the company, I would say. So think of it like, all right, if we were to not have segmentation between our services or at the network layer, what would that mean if an edge service were to be compromised? Would that mean there is complete lateral movement end to end? What is it from a functional risk perspective that you're looking at? I would phrase it more towards that and less about “Oh yes, we are susceptible to a phishing kind of attack”. Talk a little more high level versus going a very targeted way of describing the risk, the likelihood, and the impact. The way to prioritize, the framework I use, and I've been using a risk-based approach, is to say that first you go into an ecosystem and ask a bunch of questions. You're trying to understand how things are built and deployed, so you have a good picture of how systems in this particular organization are built, deployed, and how they work, so you know what can go wrong. Now you bring out a security hat and you look at, okay, what would happen if the build pipeline were to be compromised? Is that the end of the world for this company, or do we have a way to verify the artifacts that have been deployed and what is inside them? Similarly, what happens if a developer account were to be compromised, right? Would that mean an attacker could now build and deploy a different service, or get access to all our member data because they have access to the databases? What's the level of risk when one of these things happens is kind of how I look at it.
So yeah, one of the first steps I take is to build out this risk register for whichever company I go to, and then we look at what the highest risks are and how we work on tackling them. So think of it as functional risk per se, but more of a risk tolerance view of how you actually look at it.
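The risk register Mukund describes can be as simple as a scored, sorted list. Here is a minimal sketch in Ruby (Monocle, discussed later in the episode, is a Ruby app); the risk descriptions, the 1-5 scales, and the numbers are illustrative assumptions, not anyone's real register:

```ruby
# A minimal risk-register sketch: functional risks scored by likelihood
# and impact on 1-5 scales, then sorted to decide what to tackle first.
# The risks and numbers below are illustrative, not a real register.
Risk = Struct.new(:description, :likelihood, :impact, keyword_init: true) do
  def score
    likelihood * impact # 25 is the worst case
  end
end

register = [
  Risk.new(description: "Build pipeline compromise", likelihood: 2, impact: 5),
  Risk.new(description: "Developer account takeover", likelihood: 3, impact: 5),
  Risk.new(description: "No segmentation between services", likelihood: 3, impact: 4),
]

# Highest-scoring risks float to the top of the backlog.
prioritized = register.sort_by { |r| -r.score }
```

The point is less the arithmetic than the ordering: the register turns "what could go wrong" questions into a prioritized work queue.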

Harshil: Yeah, that's a pretty holistic approach of looking at risk. Even wearing your product security hat, which is not very common, a lot of people don't really focus on that strategic view of what are the risks and how do you mitigate them. So it's pretty cool to hear that. And I think the other thing that got me interested that’s related to this holistic view is the use of the term product security. Now, I've had several guests previously also talk about the same thing and it feels like there's not a consistent definition of what product security actually means. I'd love to hear how you define product security and what does that mean in your company?

Mukund: Yeah, I think each company has their own definition, and I think that makes sense for their terms and what they're trying to look for. For us, think about our product: what does it take to build that product, and what goes into it? If you actually look pretty deep, it goes all the way down to what laptops people are using, so it goes on to endpoint security. So does that mean even endpoint security is in product security? I would say maybe, yeah, I think so. It really depends on how you want to define the entire scope. And at Chime we have defined it as anything that is production related and that affects our member data directly, or directly affects the functionality of particular repos and services on the member-facing side of the house. That's kind of how we have defined it. So the three teams I've talked about, our application security team, our cloud security team, our security engineering team, all of these are looking at how we ensure that we don't introduce any new sources of vulnerabilities that could be exploited, or we make it harder to introduce them, or we tackle some of them before they're even introduced. How do you educate developers? How do you build tooling to detect issues? How do you eradicate certain classes of vulnerabilities? It's a combination of all three, I would say. There's detection, there's prevention, there's education, there's all of that. So for us, it's looking at everything you would call production-related security. That's how we phrase it in our world. With things moving to the cloud, it's no more just, “Hey, this is pure AppSec, this is pure InfraSec. Your job is to build the VM and secure the VM, and that's all”. It's no longer that way.
We are all using a lot of cloud native technology, which means you're writing infrastructure as code and a lot of cloud native libraries and SDKs, so it's a little more muddled at this point. It really depends on your organization and how you define what is in scope for you. But right now we are small enough to keep all three teams I've mentioned under the world of product security.

Harshil: Yeah, that makes sense. I mean, the world of software development is becoming more full stack, right? The dev teams individually are self-service. They're capable of building their own microservices or code, deploying it, running it, and operating it as well. They maintain ownership end to end, so they are responsible for the end-to-end stack. So it makes sense for application security to evolve into product security, which is also more full stack, and build that end-to-end view for the development teams. Now, the interesting challenge I ran into in a previous life was that the skill sets you find in security talent are sort of narrower, right? If you think about engineering or security talent, they're either very focused on application security, or some folks are really good at pen testing, some folks are really good at static analysis, you know, pick your tool. But as you broaden that horizon, your product security team also has to understand the intimate details of Go, Python, Java, JavaScript, whatever your stack is. Also Kubernetes and AWS, all the tooling behind them, and the CI/CD processes. It's a very broad skill set, and add on top of that the security expertise needed. So how do you think about talent for that wide a spectrum of skills? What is the thing you typically look for?

Mukund: Yeah, and that's a very good question. If you look at my team, there's a huge mix of software engineers, SREs, infrastructure engineers, and a few security engineers. I say a few because hiring for security was hard, so I had to get creative when I started building up this whole org. In my experience, I've had a lot of good security champions that are not part of security, but outside of it, and they want to get into the world of security, or they want to try it just a little so they know what it means to work in it. So I've targeted an audience outside the specific scope of pure AppSec engineers or pure security engineers. I've gone and hired a lot more software engineers who would be down to try out or build things related to security, or SREs who know how end-to-end systems are built and deployed. Look at, for example, vulnerability management, right? The typical way of “Hey, let's list out the whole laundry list of CVEs and get teams to patch it” may not work at scale, or in a company that's more data driven. Actually get us the data versus telling us, “Hey, just go fix it because it's a 9.8”. What does a 9.8 CVSS mean in our ecosystem? Is it really a 9.8? So how do we build enough tooling and give the right recommendations to engineers? In my previous places, there have been conflicts between the infra teams and the security teams just because we weren't talking the same language. So I was like, okay, why not bring those folks into the world of security and get them to do it?

Harshil: Yeah. The hard part is also how do you convince them to leave their comfort zone and do something radically different? I mean, not radically different, I guess, but it's a little bit out of the normal engineer's comfort zone to get into security. How do you incentivize them? Is it better career prospects, better pay, better whatever? What is that…

Mukund: All of the above. No, it's been hard. I tell you, I've had roles open for six to eight months sometimes, and it's been hard to hire. You keep hearing about it, but I didn't realize how hard it was. So I had to go find all these engineers to run the team. But the way I've chosen is to talk to them about the fundamental problems. I think most security folks feel like they can't tell engineers what their challenges are, and I think that's a very non-intuitive and kind of destructive way of building that relationship. In fact, the way we approach it is we tell engineering teams, “Hey, we are having difficulty with this because you folks have decided to go with 20 different frameworks and 20 different languages, and now we have to keep up with all of these. So can we have a more standardized way, where we choose a few languages or tech stacks?”, things in that sense. So I think the notion that you cannot tell a normal engineering team what your challenges are, and that you have to figure it all out yourself, is broken. I don't think that works. In fact, it's a very healthy ecosystem for you to be telling them, “Hey, we need help with these things. We're not able to figure out how to do this in this language or in this ecosystem. Does it even make sense for us to be looking at this class of issues here, or is it fundamentally not an issue?”. You have to have that conversation. You should feel free to tell them that you do not know and you want to learn, and things in that sense. It's perfectly fine. I think that's a huge gap I've seen today: people not being willing to say they don't know and go look at things.

Harshil: I think it makes a lot of sense. But I also agree that in a lot of cases, security engineers lack the necessary confidence or experience to go have that conversation with the development team. Because if you're a career security professional, you've only been doing pen testing or bug bounty, running scanners and things like that. You haven't spent enough time actually understanding the intimate details of development environments in particular languages or frameworks, so you don't feel confident about going and talking to them. You've got to be at that level to have that educated conversation: “Hey, can we use this framework versus that one?”.

Mukund: I would challenge you on that, I guess. Every time I've reached out to folks, and I've done this several times across my career, it's like, “Hey, I don't understand how this works”. I remember when I was an intern, like seven or eight years ago, I didn't know what Kafka was, I didn't know what Bigstep was. I was like, “Hey, what is this system? Tell me what it is. Why do you need this?”. And they introduced me to this whole world of distributed systems and things in that sense. I didn't come from that background, I hadn't done that kind of work before. So that's perfectly fine in my opinion. Again, I agree, I understand that not every person may be comfortable saying, “I'm not comfortable with this, can you explain it to me?”. But every time I've asked someone, not just in these organizations, even in consulting, they've taken the time to explain it to me. I don't think people are at a point where they're like, “Oh, you don't even know that. Why are you even here?”. Most companies do hire good people. You're not going to end up…

Harshil: You’d be surprised, haha. Yeah, no, I think it makes sense. And the other key point is also where you actually engage with the dev team, right? If it's early enough, where the dev team is just introducing Kafka, to carry your example, then you can have the conversation: “Hey, why are you using this? What is it used for? Do you have an opinion on it?”. But if you're too late, it's already in production, it's been used for three years, and then you come in asking, “Hey, what is this thing?”. Well, it is what it is, right? We're just going to keep running it; whatever you have to say about security, we'll have to do it while continuing to run Kafka. So the point of engagement, when you get inserted, makes a big difference as well.

Mukund: I agree, and I think that's an extremely valid point, but I also feel like there are resources outside of your own organization that you can go to. I'm sure there are enough tutorials and trainings right now on what exactly Kubernetes is, and things in that sense. If it has been there for three years, most likely other organizations are using it too, not just yours, so there are going to be outside resources. But fundamentally, you should be comfortable being put into a spot where you do not know something. I've had that all through my career. I've been randomly put into scenarios where I had absolutely no idea what something was, and I had to go learn what it is, understand it, then look at it from a security perspective, because everything is first introduced only as, “Hey, here's how you do it, here's how you build a Hello World program”, or whatever. But how do you take the time to look into it and understand it from a security perspective? So it does involve you taking that approach: we should be okay with being put into uncomfortable spots.

Harshil: Yeah, makes sense. All of this brings me back to one of my most exciting topics, which is helping developers make the right choices, make the right decisions. I'm personally very passionate about that topic. But at the same time, your team has built a very interesting product called Monocle. Now you guys have written about it, there's blogs about it. So audience members, if you want to read more about it, just go look up Monocle and Chime security team. But Mukund, why don’t you explain just a little bit about what Monocle is and why did you build it?

Mukund: Yeah, in very simple terms, it's a simple Ruby application we built that basically gamifies security for the rest of the engineers. What I mean by gamify is we give every team, or every repo, or every service a score, call it an A grade or a B grade, or you can put a score between 0 and 100, whatever it is, and we say this is your security score. This is very similar to how open source projects have a security badge, or a test coverage badge, or a build status badge. It's in the same world of having those badges on people's repos or services at the team level that tell them how they're doing from a security perspective. So we tell teams, “Hey, as long as you are a B+ or higher, you are good. You don't have to worry about whether you're doing enough from a security perspective, because that grade translates to you doing enough. Obviously we want all of you to have an A grade, but a lower grade doesn't mean you've introduced a critical vulnerability or anything in that sense, where you have to stop your development work to go fix the issue”. So if a team at any point wants to know how they're doing from a security perspective, all they have to do is check their grade and say, “Okay, I'm doing fine because we have at least a B+”. Or if they don't, they'll be like, “Okay, let's look at what the issues are. What can we prioritize? What are simple enough tasks that can still get us above that grade, or get us to the minimum security required for our particular service?”. So it's a very gamified portal where every team and every service has a score, and teams are competing to make sure they're on the leaderboard. Everyone wants to be on the leaderboard and make sure they're working on the right action items from there.
So what does that mean from an engineer's perspective? There are score factors. Things like: are you using our secure base image? Are you fixing your dependencies, or upgrading them if they're vulnerable? Do you have any open vulnerability tickets in Jira or whatever you're using? Have you recently introduced or fixed any new class of issues? We also onboard teams to new tools that we have, and to make it more lucrative for them to use our tools and our new processes, we add those tools and processes to our score factors too. And let's say we want our services to use a new version of a particular library; we give that a little higher score so that we can get teams to use it versus the old legacy way of doing things. So it's a very gamified way of looking at it. And from my previous experience, it's always unclear for an engineering team to say whether they're doing the right thing from a security perspective, whether they're doing enough or not meeting the bar. What's required from them? Is it just open Jira tickets? You have all these other security tools, what about issues from them? So this is a unified view of just saying, “Okay, if you have this score or higher, you're fine, you don't have to pause your work and fix security issues, we can continue”.
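The score factors described here amount to a weighted checklist rolled up into a letter grade. Here is a minimal sketch in Ruby, Monocle's own language; the factor names, the weights, and the grade cutoffs are all illustrative assumptions, not Chime's actual scoring model:

```ruby
# Hypothetical score factors with weights. Newer initiatives, like a new
# library version the team wants adopted, get heavier weights so the
# score nudges engineers toward them.
SCORE_FACTORS = {
  uses_secure_base_image:  25,
  dependencies_up_to_date: 20,
  no_open_vuln_tickets:    25,
  uses_new_crypto_library: 30, # weighted higher to push migration
}.freeze

# Compute a 0-100 score from the factors a service satisfies.
def security_score(service)
  total  = SCORE_FACTORS.values.sum
  earned = SCORE_FACTORS.sum { |factor, weight| service[factor] ? weight : 0 }
  (earned * 100.0 / total).round
end

# Map the numeric score onto the letter grades shown to teams.
def grade(score)
  case score
  when 90..100 then "A"
  when 80...90 then "B+"
  when 70...80 then "B"
  when 60...70 then "C"
  else "D"
  end
end
```

Because the meaning of the grade is the stable contract with engineers, the factors and weights behind it can be swapped at will, which is exactly the flexibility discussed later in the conversation.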

Harshil: Yeah, so that was going to be my question, right? Because, you know, finding problems, filing Jira tickets, telling people what to do, everyone does that, all security teams do that. What made you go towards this approach of, first of all, building a score for every single asset and every single team, and then gamifying it? Why did you build a score instead of just saying, “Hey, you have X number of vulnerabilities or X number of tickets”?

Mukund: I'm all in for making things fun, and I think people enjoy the gamification aspect of it. We can introduce any number of new controls or tools or sources of identifying issues in the background without changing what that means, or the expectation of what it means, for an end user. Like, tomorrow, if every tool were its own ticket source, its own way of saying this is how you have to work on security: one, it's confusing for engineers to keep up with all these different sources of how we're requesting work from them. Two, it also means a much more involved and heavier process every time we change a particular tool or process, because we'd need to let people know, and re-educate them on the new set of processes or tools. So this is a much easier way for us, because while we've started with a security score, we can change what that security score means as the tools in our ecosystem change.

Harshil: Right.

Mukund: To be honest, I don't want the teams to worry about what tools we use or how they work. I tell them what is wrong in whatever they're doing or whatever they've introduced, and here's how we need them to fix it, and give them that one unified view. They shouldn't worry about “Hey, this came back from our static analysis tool, or this came back from our source code analysis”, or whatever the source is. I don't think an engineer needs to worry about that.

Harshil: Yeah, that's amazing. So basically what I'm understanding is that reporting on vulnerabilities is one aspect of it, but you're also driving the adoption of secure frameworks, secure best practices, secure images, and security tools by building this composite score that demonstrates the security hygiene, or security posture rather, of every single asset and team. And then you communicate with that team. Okay, so you talked about flexibility in building that score so you can drive the initiatives you think are important at that point in time. Your score can change, the underlying factors can change. So now the question is, let's say you have that score, and some teams do actually calculate a score, but the hard part is how do you get it in front of the teams? Typically they would have a leaderboard and show it in an engineering all-hands or company all-hands saying, “Hey, these are the top five teams”. Or if you want to go the other way, “These are the worst five teams”, or whatever it is, right? But Monocle is doing something interesting, which is using GitHub badges. Tell me a little bit more about why you made that decision of integrating with GitHub badges and showing it to developers directly.

Mukund: Yeah, go back to the world of a software engineer, right? You're opening GitHub in the morning, you're looking at all the pull requests, you're looking for any open issues, or something in that sense of the world. It's a natural step for an engineer working on a particular code base to open it during the day to review pull requests, look at open issues, or check why their build is taking so long on the Actions screen. People are going to GitHub every day for their own day-to-day work. So putting a security score right there gives them that view. They'll be like, “Oh wait, I'm waiting for this build to happen, but I see that my security score is now a C. What happened, what changed?”. And then they can open it up and say, “Okay, what dropped the score? All of these are simple fixes, they'll hardly take any time. Let me just knock them out or put them in the backlog for the next sprint”, or something in that sense. So engaging with the developers right where they are is the majority of why we chose the GitHub badges option.
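A repo badge like the one described is usually just a dynamic image URL that GitHub renders in the README. A minimal Ruby sketch; the shields.io static-badge URL format is real, but the helper names and the Monocle click-through URL are made up for illustration:

```ruby
require "erb" # for ERB::Util.url_encode

# Grades mapped to badge colors so a glance at the README tells the story.
BADGE_COLORS = {
  "A" => "brightgreen", "B+" => "green", "B" => "yellowgreen",
  "C" => "orange", "D" => "red",
}.freeze

# Build a shields.io-style badge image URL for a service's current grade.
def badge_url(grade)
  color = BADGE_COLORS.fetch(grade, "lightgrey")
  "https://img.shields.io/badge/security-#{ERB::Util.url_encode(grade)}-#{color}"
end

# The Markdown line a team drops into their README; the click-through
# target is a hypothetical internal Monocle page, not a real URL.
def badge_markdown(service, grade)
  "[![Security: #{grade}](#{badge_url(grade)})](https://monocle.example.internal/services/#{service})"
end
```

In practice the grade would be served by an endpoint that recomputes it on each request, so the badge updates the moment the score changes.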

Harshil: So you're basically incentivizing the developers to do the right thing by showing them the score, gamifying it, and building that competitive spirit within the dev teams, right? So I'm guessing you're not using a gating function, like stopping people if they're lower than a certain score, or any of those actions, right?

Mukund: No, not gating anyone, not doing anything like that. I don't believe in any of those aspects. At the same time, I do have enough guardrails to tell them when what they're doing is not the right approach from a security perspective, and to ask: do they accept this risk, do they understand what they're about to do? So we have a risk advisor kind of functionality in Monocle. For example, say they didn't do a fundamental thing we want them to do, something we consider gating functionality. We don't necessarily gate it. What we do is put it behind a guardrail that says, “Hey, we see that you are going to merge this pull request. Are you going to deploy this new version of the service without fixing this critical issue that we identified?”. And we take them to a screen that explains what that risk is, and then we ask, “Do you accept, and do you understand that you are accepting this risk by deploying?”. And if they say yes, in the background we create a ticket and assign it to the team, recording that this person acknowledged the risk they are accepting by deploying. But the approach the team has been taking is that we rarely block them. There are so many times where we don't have the context. Sometimes something is broken, and the only way to fix it is to get that change in. So blocking is not the way to go, because we don't have all the context all the time. So I'm more of a “Here are the guardrails, here's what we have done. The platform is built for inserting any guardrails we need, but you can always bypass them if you want to”. But we will know when you bypass them.
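[Editor's note: the "guardrail, not gate" flow Mukund describes, where a deploy with open critical findings isn't blocked but an acknowledged bypass files a tracking ticket, can be sketched roughly as below. All of the names, the `Finding` shape, and the in-memory ticket tracker are illustrative inventions, not Monocle's actual API.]

```python
# Illustrative sketch: deploys with open critical findings are allowed
# through, but only after the engineer explicitly accepts the risk, and
# every acceptance is recorded as a ticket assigned to the team.

from dataclasses import dataclass, field

@dataclass
class Finding:
    id: str
    severity: str
    title: str

@dataclass
class TicketTracker:
    tickets: list = field(default_factory=list)

    def file(self, team: str, finding: Finding, acknowledged_by: str) -> None:
        # A real system would call Jira/GitHub Issues; here we just record it.
        self.tickets.append(
            {"team": team, "finding": finding.id, "acknowledged_by": acknowledged_by}
        )

def gate_deploy(findings, accept_risk, team, user, tracker) -> bool:
    """Return True if the deploy may proceed.

    accept_risk is a callback that shows the engineer the criticals and
    returns whether they explicitly accepted the risk.
    """
    criticals = [f for f in findings if f.severity == "critical"]
    if not criticals:
        return True                     # nothing to acknowledge
    if not accept_risk(criticals):
        return False                    # engineer chose to fix first
    for f in criticals:                 # bypass allowed, but never silent
        tracker.file(team, f, acknowledged_by=user)
    return True
```

The key design choice mirrored here is that the only hard stop is the engineer's own decision; the system's job is to make the risk visible and leave an audit trail.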

Harshil: Right, makes sense. So have you seen any impact on security posture since you're not forcing people to take action? This is all based on goodwill and competitive spirit.

Mukund: Yeah, initially when we first launched this feature, it was so new that people didn't know what they were doing, and they just happened to go through the process. So we saw a huge number of folks going through that risk acceptance flow, because they didn't realize what was happening. Even though we communicate about it and things in that sense, it takes a while for people to actually understand what's happening. So the initial few months were heavy with people bypassing the security controls, or the things that we wanted them to fix before they deployed. But ever since then, it's down to single-digit numbers. We hardly see anyone do it, because if they know that something is wrong, and even if they do it because something is broken, they first reach out to us on our channel and say, “Hey, we're going to do this because it's broken. It's not that I'm being malicious and pushing it out”. So we see that huge drive of people wanting to do the right thing, because they also know that if something is broken or inaccurately captured on our end, they have a way to unblock themselves. Similarly, we've also seen a huge uptake of people looking at Monocle when they're doing sprint planning. For a lot of engineering managers, one of the sections in their sprint planning is, let's look at Monocle for our team to see what we need to fix this sprint, or are we good? That's a huge culture change I've seen in the last 18 months: people actually go to their security score, look at what it is, and look at what they need to do to get their score up.

Harshil: That's phenomenal. That's such a powerful way of changing your security culture in a positive direction. And I feel like this is much more powerful and relevant than making a developer sit through hours-long secure coding training, right? I mean, there is a time and place for security training, obviously. It's not useless, it's good, but this is so much more relevant, so much more actionable, and it draws people to think about security at every step of the process. Phenomenal. Have you seen a general reduction in security debt, or general improvement in security, since you launched Monocle? What has been the highest ROI?

Mukund: It's a question I ask myself too. I want to say yes, but at the same time it's a hard thing to capture, because there are so many things we're doing. We're not just doing Monocle; we're doing so many other things to get the technology right. So it's a compound effect, I would say, because we are doing some of those things to make it easier. For example, as long as a team that's building a service is using our own secure base image and is redeploying at least once every two weeks, they get vulnerability management, or patching, for free, per se, because all the patches in the base images are introduced by us, and we rebuild and redeploy pretty often. We tell them when their dependencies are out of date and vulnerable so they can go fix them themselves. And to be honest, with a lot of this new tooling like Dependabot and everything else, it has become so much easier to get some of that basic vulnerability management done out of the box. So I think it's a compounded effect, because we're doing quite a few things in the background to make sure that the recommendations Monocle gives are very bite-sized and actionable. It's not like, “you accept risky file uploads, go figure out how you want to solve this problem”. It's not those kinds of problems; a company can spend months, almost a year, building something like that out the right way. These are more like, “we see you don't have something configured, go fix that misconfiguration”. Very, very bite-sized fixes that make it easier for developers to get the momentum rolling are what we focus on. So very actionable feedback is what Monocle is doing. Obviously, there are more strategic things we want a team to fix, and we'll work with them on those in the background as one-offs.
But things that we want every team or every service to have, and that are actionable and bite-sized, are what we focus on.
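[Editor's note: the transcript never gives Monocle's actual scoring formula, but one plausible way to roll a set of bite-sized pass/fail checks, like the ones Mukund describes (secure base image, fresh dependencies, no open criticals), into the letter grade shown on the badge is a weighted sum mapped to A-F. The checks and weights below are entirely made up for illustration.]

```python
# Hedged sketch: turn pass/fail checks into a letter grade.
# The specific checks and weights are hypothetical, not Chime's.

def letter_grade(checks: dict, weights: dict) -> str:
    """Weighted fraction of passing checks, mapped onto A-F cutoffs."""
    total = sum(weights.values())
    earned = sum(weights[name] for name, passed in checks.items() if passed)
    pct = 100 * earned / total
    for cutoff, grade in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if pct >= cutoff:
            return grade
    return "F"

# Example: a service on the secure base image with fresh dependencies
# but one open critical finding.
weights = {"secure_base_image": 40, "deps_up_to_date": 30, "no_open_criticals": 30}
checks = {"secure_base_image": True, "deps_up_to_date": True, "no_open_criticals": False}
# letter_grade(checks, weights) -> "C"
```

Keeping every check binary and individually fixable is what makes the grade actionable: a developer seeing a C can read off exactly which check to flip rather than decoding an opaque number.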

Harshil: That's such a phenomenal idea, democratizing security and making people accountable for their own choices, whether it's by showing them the security score at the repo level, or highlighting bugs as they push code and walking them through the risk exception inline as the dev process is going on. I love that idea of involving developers and letting them make the right choices themselves. In some cases exceptions are needed, and what have you, but this is a great way to distribute the ownership of security across the organization. On that note, we are right at time, Mukund. I feel like I could have talked for an hour more at least, but I hope you can join us again in one of these sessions some time in the future. It's been such a pleasure having you here with us, and thank you for sharing such amazing insights from your work experience. I'm sure the audience will love this episode. Thank you so much, Mukund.

Mukund: Yeah, thank you so much. It's been a great talk so thanks, Harshil.

Harshil: Thank you.
