EP11 — Lessons From Building Thirty Madison’s Product Security Program
Thirty Madison is a healthcare technology company that offers direct-to-consumer healthcare and wellness products for people living with chronic conditions. Founded in 2017, the company has raised over $200 million in funding and has more than 400 employees.
As a healthcare company with millions of customers, Thirty Madison has the responsibility of holding their customers’ most personal information. Keeping this highly sensitive data secure is mission critical to their business. A single breach could jeopardize their reputation and ruin their relationship with their customers.
To ensure their customers and employees are secure, Thirty Madison brought on Anshuman Bhartiya to put in place a Product Security program capable of keeping up with the rapid growth of the company. In today’s episode, Anshuman joins Harshil to talk about the lessons he learned building their Product Security program from scratch and the tactical advice he has for others who find themselves in a similar position.
- How to decide what problems and risks to prioritize when you are first building a product security program.
- Questions to ask executives and co-workers as you begin building your product security program.
- How Security Guardrails can influence developers to build secure code from the beginning and how to actually make that happen.
- Anshuman’s favorite Security Guardrail he’s implemented.
- A lightweight approach to building and securing your SDLC.
- #1 piece of advice for someone who is just beginning their product security journey.
Anshuman: Yeah. Hi Harshil, I'm super excited to be here.
Harshil: Fantastic. Anshuman, before we go further in the podcast, your background and experience is super interesting to me, and super exciting as well. Could you explain to our audience just a little bit about where your career started, how you got into this space, and the twists and turns it took over the past few years?
Anshuman: Yeah, absolutely. I’ll try to keep it as short as I can. So I came to the US in 2008 to pursue my postgraduate degree. I did my Master's in Computer Science, and I took a bunch of security courses. My concentration was security. So that really sparked my interest in the security domain, and I started to become more curious about it. After that, I basically started a couple of jobs where I was a web applications developer. I was just building web applications. So my background is really computer science, you know, building software from scratch. But soon after, I had to transition into security just because it seemed so interesting. And then my first security gig was with a boutique security firm called Cigital. It was later acquired by Synopsys, and there are so many folks out there who come from that same company and are super well rounded. So that really started my security journey. I spent a few years there. I learned about how security works in general, different technologies, different ways of web application hacking, so on and so forth. And since then, I've predominantly been working in product based companies, securing the products, right? Basically making sure security is baked into the SDLC, working with our engineering organizations to make sure that code gets shipped securely and safely. So I worked for EMC. I was doing product security incident response for them. And then I basically moved to the West Coast after that. I had a short gig with Intuit as well. I was on the offensive side there. So again, I've switched domains, I've worked on the defensive side, offensive side, and I worked across so many different domains within information security, like CloudSec, AppSec, InfoSec. Yeah, and then I've been with Thirty Madison for the past year or so, helping build the product security function here.
Harshil: Phenomenal. I got to tell you, man, I come across so many ex Cigital people. The Cigital crew has done really well for themselves in this world of AppSec. Some very very phenomenal people came out of Cigital.
Anshuman: Yes, I couldn't agree more.
Harshil: Yeah. And so it's also interesting to me that you spent some time in the offensive side doing Red Team related work at one of the previous roles, and then you moved into the defensive side as well. Is there a story behind it? Was there something of particular interest to you in terms of making that switch?
Anshuman: There were a few things for sure. So as a security consultant, when I started off my career, what I used to hear quite often from customers is, yes, you can hack us any way you want, you provide us with all these security vulnerabilities, but the recommendation, how to fix something, is always what customers look for. And most of these engagements don't focus on the recommendation piece of it, right? So that's really what made me interested in figuring out the other side as well. Like, I knew all about web application hacking, most of it, right? But then I also started getting curious as to how, as a security engineer on the defensive side, I would go about protecting our application infrastructure. So that was really a light bulb moment for me, if I remember correctly. And there was a good opportunity for me to be on the offensive side at Intuit, as well as look at the defensive side. I had access to both. So I basically observed a bunch of problems that could have been solved using automation and whatnot. So that's really how I started to get into more of the product security side.
Harshil: That's phenomenal. And I think doing both of those things well is incredibly hard, especially because on one hand, you know sometimes how easy it is to exploit things and whatnot. But on the other hand, you would also know how difficult it is to actually fix those things, right? Especially some of those foundational architectural flaws. It might look simple, it might look dangerous, but it is really difficult. And when you think about all the business priorities that the developers might have or the dev teams might have, it's not always a black and white decision in terms of making a security investment.
Harshil: It's a phenomenal journey. And currently you're at Thirty Madison, and I believe you were the first product security person to join this company?
Anshuman: Yes, and I've been busy working for the past eight or nine months to help bootstrap the product security / application security function from the ground up.
Harshil: So what does that feel like when you join a brand new company, and if they've done some work around it, but you know there's a little bit more maturity to bring in, how do you prioritize what is a day zero problem that needs to be solved right now versus something else that can be done next week, next month? Like, how do you make those decisions?
Anshuman: Yeah, sure thing. It's actually funny you ask this question because I recently blogged about it as well, about how to go about building a product security program from scratch. So I think what it boils down to is really understanding what risk means to your organization, what the risk appetite looks like, right? How much risk is your organization willing to tolerate, and then building your program and prioritizing things accordingly, right? In other words, if there are a bunch of applications, let's say, that are exposed on the internet, a few of them are hosting critical customer information, whereas other ones are mostly for POC purposes, right? As an org, if your understanding of risk is clear, you're obviously going to prioritize the one that has customer information, right? So I think the first few weeks are really about understanding what that looks like, then having a sense of what assets are the most critical for the organization, and then building a program around it. So, yeah, it all comes down to the risk and how well you and your C levels understand that.
Harshil: Yeah. So I'm guessing you would have to do a lot of discussions and interviews and information collection to get to that point, right? Because if you or your team didn't exist in that company before, I'm guessing there's no system that collects all this information for you.
Anshuman: Yeah, absolutely. And that's essentially the first thing that I did. My CISO basically asked me to speak to a bunch of folks who were like VPs and directors, and I just basically went to them, and I just listened to them, and they just kept sharing with me the details about their environment, their culture, everything. And I just noted each and everything down. So that's essentially the first step is to have an understanding of the lay of the land, so to speak. And that helped me frame my thought process accordingly and approach the right folks.
Harshil: So it might sound simple that you talk to a VP or a senior person in engineering and you collect all the information, but a decade or so ago, when I was trying to do that, I was really struggling with what is the exact question I ask? What do I need to know? I'm sure the world has changed now, but do you have specific questions that you are asking them?
Anshuman: You can start with some basic things like, what keeps you awake at night, right? Do you know of any security vulnerability that has been disclosed to you, or that you heard about from somebody else, right? Those kinds of things get the conversation going, and then obviously you can prod a little further into things like how data storage works. Like, what kind of data does your organization process, store, and transfer, and how is it secured? Those kinds of conversations can be long, but if you keep your questions focused and just try to get the high level information, I think they can be really productive.
Harshil: Yeah. Have you run into situations where they say, “I have all these vulnerabilities, can you fix them for me”?
Anshuman: Yeah, I think the expectation from any security professional is to just kind of solve security right from the first day. So yes, obviously I do get asked that quite often.
Harshil: Yeah. Having worked with a lot of other engineering people, it's amazing how sometimes they think, “Hey, you're security, so you're responsible for doing everything security. That includes fixing my code for me”. But yeah, thankfully it doesn't happen as often these days as it used to before.
Anshuman: I just want to add something there. To be honest, I don't think it's an unfair ask, right? It is my belief that security professionals should know how to read code and also how to write code, right? That is just something I know, and I know how important and valuable it is for my skills. So if somebody asked me to go and fix their code, I'm happy to work with them and happy to recommend how to go about fixing something, as opposed to just saying, “Look, this is your code, this is your responsibility, I'm not supposed to fix that”. I think if you set the culture on the right tone in the beginning, it will reap great benefits in the long run.
Harshil: Yeah. I don't know, man. Maybe I have to disagree with you in some contexts, right? I get that in some cases dev teams are maybe not the security experts, so you may have to help them on how to actually fix it. Or in some cases, security teams even build security systems, like a secrets management system or crypto systems, right? You give them and offer them as a service. But to me, if I'm a developer writing code that has vulnerabilities and I expect somebody else to fix it, I tend to think of it as like going to a dentist, and the dentist tells me to floss my teeth, and I say, “Hey, can you do it for me every night?”. In a way, for me, it's my responsibility, right? Like, if I'm writing that code and I'm introducing bugs in it.
Anshuman: Yes, I totally agree with that statement. However, this is again my opinion, and my belief is that as security professionals, you should make it difficult for developers to commit bad code in the first place, right? By means of secure-by-default libraries and frameworks, right? So you've already provided that guidance to them in the beginning. Now, if they choose to not follow that or go a different route, yes, that's when they need to go and fix the stuff. There's some nuance around that.
Harshil: Yeah, 100% agree. I think that's the best way of doing it. Like, if I draw some parallels with how quality and testing have evolved: back in the day, we used to have teams of QA people. Developers would write code, send it to them, they would do testing, send it back to developers for fixing. And that has changed now. It doesn't happen in most cases, right? There are automated tests, developers write their own tests, CI/CD systems orchestrate those tests and deploy things as soon as they pass. So all the developers see is whether the tests pass or not. That's it, right? Security is not in that position just yet. We still do the testing for them, we run the scanners for them, we triage and prioritize vulnerabilities for them, and then ship it back to the developers. And by that time they're working on the third or the fourth feature already and they've forgotten about what they did.
Harshil: So it’s kind of like we're still living ten years behind the rest of the organization. But taking the approach that you mentioned, which is ensuring security by default or security guardrails or whatever that is. Like, putting it in place to help the developers and influence the developers to build it secure from the beginning, that's a much more effective way. I don't think anybody would disagree with that. The challenge is how do you actually do that, right?
Harshil: We've been talking about this security by default for decades now. A lot of static analysis companies would say that, “Hey, you do this, we can do that”. But I haven't seen a lot of people adopt it. I'm curious how you are thinking about implementing those things from an operational perspective.
Anshuman: Yeah, sure. So, SAST tools, static analysis tools, I have never been a big fan of them just because of the amount of work they create, right? Like the number of false positives, the amount of triaging one has to do. Until I started playing with Semgrep. I am in the process of writing a blog about how you can go about building a highly effective SAST workflow and providing feedback as early as the PR, or the pull request. And I've had pretty good success with it just because the feedback loop is instant. As soon as a developer submits a PR, if they're not following a certain standard or whatever, and you provide that feedback right then and there, you're going to see some changes, right? So I think that's the way to go about it. The whole concept of shifting security left and making sure you do the scanning and provide results in the CI/CD pipeline, I think that's the right approach. And I know it's far from being solved, obviously, but there's some good work happening.
Harshil: Yeah. Can you give me an example of an awesome guardrail or control that is implemented using Semgrep?
Anshuman: Yes, absolutely, 100%. So we all know how difficult authorization issues are to find, right? You can't expect a scanner to find authorization issues just because it doesn't have the context, it's not smart enough, right? However, with SAST tools, what you can do is come up with a framework where each of the endpoints that your application has needs a certain guard, like a certain header, or in certain languages I think they're called decorators, right? And you can write rules to identify the endpoints that don't have that particular guard. There are a few different rules you can write around it. The first is, as an organization, you standardize certain guards to be used across all services, so you can catch any endpoints that are not following that, or services that are creating their own guards, which shouldn't be happening. That's one rule. The second is, within the guards themselves, there are certain checks that need to happen to make sure that you're allowing a certain account to get information from that account itself and not other accounts, right? Things like IDOR issues. Again, you can write rules to specifically check for that particular code. So yeah, I've had some great success, and I do plan to blog about how to go about thinking about it. But authorization issues, and identifying them at scale, is something I didn't think was possible before using scanners. The SAST workflow has changed that.
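To make the guard pattern concrete, here is a minimal, hypothetical sketch of what such a standardized decorator might look like. All names here (`require_account_access`, the in-memory data, the session stand-in) are illustrative assumptions, not Thirty Madison's actual code; the point is that once every endpoint must carry this decorator, a Semgrep rule can flag any route function defined without it.

```python
# Hypothetical "guard" decorator: the standard a SAST rule would enforce.
from functools import wraps

CURRENT_USER_ACCOUNT = "acct-123"  # stand-in for real session/auth context

ORDERS = {
    "acct-123": ["order-1", "order-2"],
    "acct-999": ["order-3"],
}

def require_account_access(handler):
    """Standard guard: a user may only read their own account's data.

    A SAST rule would match endpoint functions *not* wrapped in this
    decorator, catching the IDOR class of bugs described above."""
    @wraps(handler)
    def wrapper(account_id, *args, **kwargs):
        if account_id != CURRENT_USER_ACCOUNT:
            raise PermissionError(f"access to {account_id} denied")
        return handler(account_id, *args, **kwargs)
    return wrapper

@require_account_access
def get_orders(account_id):
    # Endpoint body never runs unless the guard approved the request.
    return ORDERS.get(account_id, [])
```

With this in place, `get_orders("acct-123")` returns the caller's own orders, while `get_orders("acct-999")` raises `PermissionError`; the static rule then only needs to check for the decorator's presence, not reason about the authorization logic itself.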
Harshil: Phenomenal. That's a great use case of enforcing a standard. And I'm guessing you would have to go through a lot of the discussions with the engineering organizations to have everyone align on that standard, saying, “Yes, this is an expectation, you also do it, and now we're going to implement this as a control”.
Anshuman: Yes, I think that's really the first step is socializing. How are you going to go about it, right? Standardizing that. If you don't have that, adoption is going to be really difficult.
Harshil: Yeah. In one of my previous companies, we went through an exercise of enforcing standard Docker base images, because earlier people would use whatever they wanted to. So initially we came in with the opinion that, “Hey, everyone should use one of these four approved base images”. And a vast majority of the people agreed to it, but then there were these corner use cases where you just have to use the upstream source image. You can't rebuild it every time. So a lot of these corner cases came up, and at some point, after six months of socializing and working, we were able to figure out that, okay, now we are in a position where we can enforce the use of approved base images, except for these few exceptions, which we are okay with. And I think Semgrep also helps with some of those controls too.
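The allowlist-plus-exceptions check described above can be sketched in a few lines. The registry paths and exception list below are made up for illustration; a real control would live in CI and source its lists from a shared policy file.

```python
# Illustrative base-image guardrail: flag FROM lines in a Dockerfile that
# use neither an approved internal base nor an explicitly accepted upstream
# exception. Image names are assumptions for the example.
import re

APPROVED_BASES = {"registry.internal/python", "registry.internal/node"}
EXCEPTIONS = {"nginx"}  # upstream corner cases we've explicitly accepted

def check_dockerfile(text):
    """Return the FROM images that are neither approved nor excepted."""
    violations = []
    for line in text.splitlines():
        m = re.match(r"\s*FROM\s+(\S+)", line, re.IGNORECASE)
        if not m:
            continue
        image = m.group(1).rsplit(":", 1)[0]  # drop the tag if present
        if image not in APPROVED_BASES and image not in EXCEPTIONS:
            violations.append(image)
    return violations

good = "FROM registry.internal/python:3.11\nRUN pip install -r requirements.txt"
bad = "FROM ubuntu:22.04\nRUN apt-get update"
```

Run against the two samples, `check_dockerfile(good)` comes back empty while `check_dockerfile(bad)` reports `ubuntu`, which is exactly the signal you would surface as a PR comment rather than a post-deploy finding.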
Anshuman: Yeah, absolutely.
Harshil: So now this is a topic that bleeds into a little bit beyond what traditionally is called application security, right? Now you're talking about container security and you're talking about cloud and things around it. The reason we are talking together is also the blog that you wrote recently around product security. So if you can help us understand what this new concept of product security actually means, like how is it different from application security, it would be very helpful for our audience.
Anshuman: Yeah, for sure. So I also mentioned in my blog that folks use these terms interchangeably; some consider AppSec a subset of product security. I don't have a strong opinion on this, but if we think about how it used to be referred to way back versus how it is referred to now, I think product security does include things like how you secure the application, and then how the application gets deployed securely, right? That deployment phase, I don't think, was historically considered when we spoke about application security. It was all about whether this application is secure or not. You would try to find all the OWASP categories, but then you would stop there when it came to application security. I think in the new world, with containers and whatnot, it is super important to cover the entire lifecycle, from the point where the code gets committed to the point where it's actually deployed in production inside a container. Because there are so many things involved.
Harshil: Right. And one of the interesting conversations I had recently with one of our colleagues was that his company doesn't actually build software products. It's easy to think about product security if your company is in the business of building software products. But if you're building hardware or some other devices, or shoes, which have nothing to do with software products, then it's hard to think about it that way. So the conversation we ended up having was, well, product security could be about any software product, internal or external. It doesn't have to be an external customer-facing product; it could be an internal application that is critical for your business. It doesn't just mean a product that the company is selling. It’s a different way to look at it, but it brings in this sense that it's beyond just software, it's more full stack, it includes your end to end lifecycle. But you're right, I think in a world where your CI/CD configuration is stored as code, your Kubernetes configuration, network configuration is stored as code, cloud configuration is stored as code, everything is as code, it just makes sense to expand the definition of application security to include some of these other things and go beyond it. And right now we are playing with the terminology of product security, and we'll see if that gets better adoption.
Anshuman: Yeah, you mentioned something about infrastructure as code, right? This has just started to come into the limelight recently. Infrastructure is immutable, you have infrastructure as code, you have these tools like Terraform. I don't think this sort of existed before. So yes, infrastructure as code would absolutely count as a subset of product security, where you can run the same tools, whether SAST or whatever, against the repository that holds that code. So yeah, I think that's a great point.
Harshil: So when you think about infrastructure as code security, what do you typically do? I mean, there are some companies, there are some products and some ways of analyzing your Terraform code. You know, sort of like a static analysis on Terraform code, right? And then there are the CSPM solutions that look at the runtime configurations of cloud and whatnot. What is your view in terms of what should be done? Step one, step two, what can come after? Do you have an opinion on that?
Anshuman: Yeah, I have some ideas around how to go about implementing it. And the reason I say I have an idea is because I haven't verified it fully enough to be confident. But I think it all comes back to the initial point about a secure-by-default framework, right? If you can work with the infrastructure team to come up with some secure-by-default Terraform modules to, let's say, deploy a VPC, you can write rules to figure out whether folks are actually using that module or not, right? So again, it gives you more assurance that as a security team you're providing the right guidance, and then you can identify cases where they're not following it. You can go and have that conversation as to why they're not, and then fine tune your modules accordingly. So I think that iterative, constant, feedback driven approach seems to have worked in other cases. I would be surprised if it doesn't work for infrastructure as code as well.
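A rough sketch of that check might look like the following: flag Terraform files that declare a sensitive resource (say, a raw `aws_vpc`) directly instead of going through the blessed module. The module path and resource list are assumptions for illustration, not a real ruleset, and a production version would use an actual HCL parser or a tool's built-in rule engine rather than a regex.

```python
# Illustrative IaC guardrail: find raw declarations of sensitive resources
# that should instead come from a secure-by-default internal module.
import re

SENSITIVE_RESOURCES = {"aws_vpc"}  # resources that must come from the module

def audit_terraform(text):
    """Return findings for raw declarations of sensitive resources."""
    findings = []
    for m in re.finditer(r'resource\s+"(\w+)"\s+"(\w+)"', text):
        rtype, name = m.groups()
        if rtype in SENSITIVE_RESOURCES:
            findings.append(f"{rtype}.{name}: use the blessed VPC module instead")
    return findings

# Compliant config goes through the internal module; raw config does not.
compliant = 'module "vpc" {\n  source = "git::https://internal/modules/vpc"\n}'
raw = 'resource "aws_vpc" "main" {\n  cidr_block = "10.0.0.0/16"\n}'
```

The compliant snippet produces no findings, while the raw declaration produces one, giving the security team a concrete hook for the "why aren't you using the module?" conversation Anshuman describes.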
Harshil: Yeah, that's phenomenal. I mean, on that topic of an iterative approach, you recently wrote about a lightweight approach to building security into the SDLC.
Harshil: Can you share some of your thoughts? Your blog is amazing, by the way, for our audience who are interested in reading more. Can you share the URL or a place where they can go to read more about it? And can you elaborate on what a lightweight approach to a secure SDLC is?
Anshuman: Yeah, sure thing. So, as a founding product security engineer, I was the only one doing product security. You are expected to sort of improve the application security posture of the organization and somehow bake security in, right? And securing the SDLC is the right way to go about it. You must have heard about this from big companies like Microsoft and Google; they all have very mature processes. But as a founding product security engineer, it can be pretty overwhelming, right? Because there are so many activities you could be doing. Threat modeling, architecture reviews, code reviews. How do you prioritize what to do, and to what extent you actually do these activities, right? So that blog basically walks through the thought process of how you can do it in a way that you can handle. The first step is something called a rapid risk assessment. I was basically inspired by how Mozilla goes about doing their RRAs. They have written about it extensively, you can find it on their blog, and I also read some of the other blogs. But the idea is to make sure that you have visibility into anything new that is getting built or shipped, or anything that already exists, right? And the way to get that visibility is to have your engineering organization fill out a questionnaire, and that questionnaire will help you understand how critical or how risky a particular application or piece of infrastructure really is. Based on the outcome from that rapid risk assessment, you can then figure out what additional activities would be required. To give you a more concrete example, if an application is being introduced that is supposed to be internet facing and is supposed to have a bunch of customer information, you are going to get that visibility from this rapid risk assessment. And then you will obviously want to do additional activities like threat modeling, right?
So without figuring out all the details, just that first step allows you to figure out how to go about it, right? As opposed to an internal application which is getting built that's not that risky, not that critical, where you just stop at the rapid risk assessment phase. So I think having this approach really helps you prioritize ruthlessly, which is something you have to do as a founding product security engineer.
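The triage flow above can be sketched as a small scoring function: questionnaire answers determine whether an application stops at the RRA or gets routed to follow-up activities. The questions, weights, and thresholds here are made-up illustrations, not Mozilla's actual RRA or Thirty Madison's process.

```python
# Minimal sketch of RRA triage: questionnaire answers -> follow-up activities.
def assess(answers):
    """answers: dict of yes/no booleans from a short RRA questionnaire.
    Returns the follow-up activities the result triggers (possibly none)."""
    score = 0
    if answers.get("internet_facing"):
        score += 2
    if answers.get("handles_customer_data"):
        score += 3
    if answers.get("new_auth_flow"):
        score += 2

    followups = []
    if score >= 3:
        followups.append("threat modeling")
    if score >= 5:
        followups.append("architecture review")
    return followups

# A low-risk internal tool stops at the RRA; an internet-facing app
# holding customer data triggers the heavier activities.
internal_tool = {"internet_facing": False, "handles_customer_data": False}
patient_portal = {"internet_facing": True, "handles_customer_data": True}
```

Here `assess(internal_tool)` returns an empty list, while `assess(patient_portal)` triggers both threat modeling and an architecture review, which mirrors the "prioritize ruthlessly" outcome of the process described above.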
Harshil: So one question I have on that topic is… by the way, I love Mozilla's RRA template as well. It's very pragmatic, it's very practical. But the challenge that I had before was that in a company that's been around a little bit, there will be applications and services that were built years ago, if not months ago. And there are several hundred developers, or quite a few developers, who are basically pushing features into them. There are multiple releases across multiple services, and maybe there are a few new services getting added. So in an environment where there are multiple releases happening at different times, across different services, not necessarily new services, but new updates, new features that could change the risk of a service, how do you know when to run this RRA?
Anshuman: Yeah, that's a great question, and that is something I’ve struggled with, to be honest. Again, all the things I'm sharing are based on my experiences, and this was exactly one of my biggest learnings when I first started implementing the RRA: the product teams, the engineering teams, came to me and asked, how do they decide when to fill out the RRA? Like, if they are shipping a new feature, do they have to fill out the RRA? If they are just making small changes, do they have to do that? And I don't think there's a right or wrong answer here, to be honest. As a security engineer, I felt that just giving them guidance and letting them decide when is the right opportunity, it comes back to the point that at some point, you have to trust your engineers. You have to trust your engineering organization to make the decision, because you can't be hand holding them, you can't be enforcing policies on them. That's just something that doesn't scale, that doesn't work. So that's what I did. I gave some brown bag sessions. I presented this RRA in an all hands meeting. I explained why the RRA is important, what it is that we're trying to capture from it, and gave some examples of when you should be filling out an RRA versus when you should not. I think that sort of general guidance seems to have worked. And I also told them, if you are confused, by default just go and fill it out. But if you're sure it's not going to affect things significantly, then you don't have to. So I think that approach worked well. Obviously, it's not foolproof. There are cases when I see something that has been deployed that hasn't gone through the RRA, and I see those as opportunities for me to improve the RRA process.
Harshil: Right, yeah. And there was recently an interesting talk given by Teja, Teja Myneedu. He's the head of product security at Splunk, and he gave a presentation at, I believe it was the AppSec USA conference last year. He talked about this exact same topic of how to trigger some of these assessments. I don't remember exactly what he was doing, but what I've seen done very well, at least in one of my previous companies where we were struggling to figure out the right time, comes down to how much engineering discipline there is and how teams bucket things together. If the developers are basically working off of just random Jira tickets, in their minds it all makes sense, but for security it doesn't, because an individual Jira ticket might be very small and will never carry the full context needed for the rapid risk assessment. But when you aggregate those things to a feature level… at some point, when our engineering team became mature, they said, “Okay, we're going to create this ticket for a feature and another ticket for a release”. So then what we did was create a Jira template that got attached to the feature level ticket, not the individual small change, not the commit level changes. There are typically fewer features, so you could complete that rapid risk assessment, which for us was like seven or eight questions. So we used to do that at a feature level, but we could only do it because the engineering processes were mature enough that they were segmenting the tickets into certain buckets of work.
Anshuman: Yes, and I think that's essentially why this is a struggle for us, because every product engineering team has their own means of shipping software, and it's just not possible to have a consistent way of introducing this RRA. So at some point it's just not possible to mandate it via policy or via feature ticket until your engineering organization speaks the same language, right? So I think as every company matures, you're obviously going to improve your security processes as well.
Harshil: Fantastic. Okay, so the RRA, or rapid risk assessment, is an important piece. What else is important as part of your lightweight approach to securing the SDLC?
Anshuman: Yes, sure. So after the RRA, the outcome is basically a list of follow up activities that you would want to do on the application or infrastructure. So I speak about architecture reviews, right? In order to implement architecture reviews in a lightweight fashion, I believe the security team doesn't have to do anything special if the engineering organization already has a well defined process of RFCs or design reviews, or just asking for comments and feedback from other folks. That's a good place to integrate the security questions as well. You can ask questions about how they are doing authentication and authorization. Are they introducing any endpoints that are being exposed to the internet? Having those questions basically gives you better insight into the technology stack and the frameworks that the application is proposing to use. So that's one thing that can be implemented. The other thing is threat modeling. Again, threat modeling has been spoken about at length by different folks, and they are the experts. But I feel like threat modeling can be as easy as you want it to be, or as complicated, right? I've seen folks use tools to go about threat modeling, and that works really well, but I think just having a conversation with the engineering team and asking them questions like how a particular component interacts with another component, how they're doing authentication, how the communication happens, just those basic questions will lead you to certain areas that can easily be identified as high risk, or some areas that are not worth looking into, right? So I think threat modeling is also a very good activity, and it's one of my favorite activities to do with engineers: figuring out what threats a particular application faces before code even gets committed, right? So, threat modeling and architecture reviews.
As for code reviews, I didn't write about them much because I believe that, as a founding product security engineer, there's only so much code review you can really do. You cannot be reviewing every PR, it's just not possible, right? So I consider it more of an ad hoc activity: you know exactly which repositories hold your crown jewels — the ones that deal with authentication — and you work with your engineering counterparts to make sure they involve you in the review whenever code gets committed there. That kind of approach seems to have worked well. But yeah, those three or four activities — if you can figure out a way to embed or integrate them within the SDLC, you're going to create less friction and you're not going to hamper developer velocity significantly, but you're still going to start getting more security signals, which is what you want.
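The targeted, "crown jewels only" code review Anshuman describes could be automated as a simple pre-review check that flags pull requests touching sensitive paths so the security team is looped in. The path patterns and repo layout below are hypothetical:

```python
# Sketch of a check that flags PRs touching "crown jewel" code paths
# (e.g. auth, billing) so a security reviewer is requested.
# SENSITIVE_PATTERNS is an assumption about the repo layout.
import fnmatch

SENSITIVE_PATTERNS = [
    "src/auth/*",
    "src/billing/*",
    "*/middleware/session*",
]

def needs_security_review(changed_files: list[str]) -> bool:
    """Return True if any changed file matches a sensitive path pattern."""
    return any(
        fnmatch.fnmatch(path, pattern)
        for path in changed_files
        for pattern in SENSITIVE_PATTERNS
    )

# A PR touching the auth module should be flagged; a docs-only PR should not.
print(needs_security_review(["src/auth/login.py", "README.md"]))  # True
print(needs_security_review(["docs/guide.md"]))                   # False
```

In practice the same effect is often achieved with a code-ownership file that assigns the security team as a required reviewer on those paths, but the idea is the same: let the tooling route only the high-risk changes to the one security engineer.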
Harshil: That's phenomenal. That's a very good way to sum up the high-priority assessments that go beyond just running tools, because running tools will only give you a little bit of information. This really helps you understand the full picture — the business context, the true risk. I'm guessing you run into challenges around staffing these things as well, because they take time, especially if you want to do them well. These are manual processes — things that need to be documented and discussed — so time and scheduling become important.
Anshuman: Yeah. I think at some point you have to make it interesting enough for the engineers to start doing it themselves, as opposed to relying on the security team. To give you an example, if you make threat modeling interesting enough — if you let the engineers feel like attackers — that becomes something they actually want to do, right? So yeah, resources are always going to be a problem; you just have to be smart enough to figure out ways around it.
Harshil: Fantastic. Yeah, I think that's a good way to think about it. That resourcing will always be an issue. We just have to pick and choose our battles, figure out what's really important, understand that really well. Do you do anything to train your developers on security?
Anshuman: Training is something where, again, I have a very different opinion compared to many other folks, because I've seen it work really well, and I've also seen it fail like there's no tomorrow. If your developer security training isn't closely tied to what engineers are doing in their day-to-day jobs, nobody is going to be interested in learning how to fix cross-site scripting, right? If you have the resources to build training around the stack your engineers work with on a daily basis, it's going to be a super successful program. It's just that, as the only product security engineer, you cannot be building that in the beginning. What you can do is invest in resources and training, or just work with engineers and help them understand what threat modeling is and how to think about attackers. That's the training element I think should be implemented in the beginning, as opposed to formal training.
Harshil: Yeah. And just to connect to what you were saying earlier, there are two approaches: one is to train every developer to use authorization correctly, and the other is to build a guardrail around it to prevent it from…
Anshuman: Yes, exactly.
Harshil: It's a much more scalable approach. Haha.
Anshuman: Yeah, it is. And trust me, it works. I was surprised that I was able to identify endpoints, before they got deployed to production, that could have exposed millions of customers' information. I was mind-blown when I saw that happen. So it's about how you approach the culture — what works and what doesn't. It definitely takes a few weeks just to understand the organization overall, and I feel you should be doing that.
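A guardrail like the one Anshuman describes — catching exposed endpoints before they reach production — might look like a CI check that fails the build when a new HTTP route lacks an authentication decorator. This is a minimal sketch under assumed conventions: the decorator names (`@app.route`, `@require_auth`) are hypothetical, not from the episode:

```python
# Minimal sketch of a CI guardrail: scan source for route handlers that
# are missing an authentication decorator. Decorator names are assumed.
import re

def find_unauthenticated_routes(source: str) -> list[str]:
    """Return route paths declared without a @require_auth decorator."""
    findings = []
    # Match each run of decorators followed by a function definition.
    for match in re.finditer(
        r"((?:@\w[\w.]*(?:\([^)]*\))?\s*\n)+)\s*def\s+\w+", source
    ):
        decorators = match.group(1)
        route = re.search(r'@app\.route\(\s*["\']([^"\']+)', decorators)
        if route and "@require_auth" not in decorators:
            findings.append(route.group(1))
    return findings

SAMPLE = '''
@app.route("/api/users")
def list_users(): ...

@require_auth
@app.route("/api/admin")
def admin(): ...
'''

print(find_unauthenticated_routes(SAMPLE))  # ['/api/users']
```

Wired into CI, a check like this turns "remember to add auth" into a build failure — the developer gets the feedback before the endpoint ever ships.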
Harshil: Yeah. Phenomenal. Well, Anshuman, do you have any parting thoughts for somebody who's coming in brand new as a first person to build a product security or application security program? Any suggestions? Any thoughts for people like that?
Anshuman: Yeah, sure. I would say don't get overwhelmed, and reach out for help if you feel like you need it. Also, go read my blog — I've shared my experiences and insights there, and I hope it's helpful. And I'm more than happy to share ideas and collaborate, so please feel free to reach out to me anytime.
Harshil: Phenomenal. And how do they reach out to you, Anshuman?
Anshuman: You can find the information on my website, anshumanbhartiya.com. I'm happy to post the link in the podcast if that's okay.
Harshil: Fantastic. Yeah, we'll provide that as well. Anshuman, this was such a phenomenal chat. I love having innovators and disruptors on the podcast, people who are doing interesting things and not just doing it, but also sharing it with the rest of the security community to help each other. So thank you so much for writing your blog. Thank you so much for coming to the podcast, and I look forward to having you again.
Anshuman: Yeah, this was super helpful to learn more about how you think as well, right? I'm super happy to be here. And as always, I'm always looking to collaborate and help solve security problems.
Harshil: Fantastic. All right, thank you so much.
Anshuman: Thank you. Bye.