
Ep 18 — Daniel Wood: How Unqork Scales Product Security


Unqork is a no-code application platform that helps large enterprises rapidly build complex custom software by completely removing the usual development challenges of a traditional code-based approach.

In this episode, Harshil chats with Unqork’s Chief Information Security Officer, Daniel Wood, to learn more about how he’s helped build and scale the company’s product security program.

Daniel has more than a decade of experience in cybersecurity, having worked as an information security analyst and lead security engineer in previous roles.

Topics discussed:

  • Daniel’s career journey and his transition from risk-based security work to technical security engineering, consultancy, and corporate security work
  • Changes Daniel implemented after joining Unqork, and how he chose what security aspects to prioritize and invest in
  • Leveraging the OpenSAMM or BSIMM model to guide security investment decisions
  • Unqork’s goal of building product security features to reduce friction between the engineering and security teams
  • How to drive the adoption of security initiatives across an organization
  • How Unqork handles code ownership, architecture review processes, and threat modeling
  • Unqork’s maturity roadmap for the future
Transcript

Harshil: Hello, everyone, and welcome back to another episode of the Future of Application Security. Today, I have Daniel Wood with me as our fantastic guest, and I'm super excited to talk about how Daniel built and scaled the product security function at Unqork. Daniel, welcome to the show.

Daniel: Yeah, thanks for having me.

Harshil: Daniel, I would love for you to give a brief overview of what you do today and maybe a little bit of a preview into your career history. How did you get to the current position you're in and what did your career trajectory look like?

Daniel: Yes, sure. So currently today, I'm the head of product security over at Unqork. I'm responsible for security architecture and security engineering of the Unqork platform, everything from secure development practices to secure infrastructure and cloud security as well. Going back to how I started my career, I started off as a risk analyst doing support work for the government, doing a lot of certification and accreditation, also known today as system authorization or assessment and authorization work. From there, I moved from a more risk-based approach towards more technical security engineering work, so that's everything from security testing, penetration testing, reverse engineering, and offensive security related practices. I did that for a few years, maybe about five years or so. Then I moved on to a company doing healthcare security for roughly a year or so, and then moved on to what they like to call the world's largest hedge fund, Bridgewater Associates, where I created the security testing program and then headed up security engineering for them. After that, I went over to an offensive security firm called Bishop Fox, and I ran their consulting practice for two years, doing everything from penetration tests, security assessments, cloud related assessments, product security assessments, and so on. And then after Bishop Fox, that's when I came here to Unqork to build out their product security function.

Harshil: That's phenomenal. I love that journey, starting as a risk analyst doing broader risk-based security work and then going more and more technical. And by the way, Bridgewater Associates is phenomenal. I've known so many very talented people from that company, also Bishop Fox. But it's fantastic to see the journey and the transition from a risk-based approach, to advisory work, to very deep technical work at Bishop Fox, and then going corporate. So tell me a little bit about that. What made you go from consulting to corporate? And having been through the journey, I know those worlds are very different, but I'd love to hear your thoughts on why you made the shift.

Daniel: Yeah, absolutely. So starting off my career as a contractor for the government, I never had the responsibility to actually address the things that my team and I were pointing out on a regular basis. And that's actually one of the reasons why I left government contracting work: I wanted to take ownership of the things that I was identifying and see them through to resolution. Throughout my career, over 20 years at this point, I've kind of waffled back and forth between consulting and being on the inside and having that ownership capability. And I realized that what fills my cup at the end of the day is being able to address the things that are pointed out, coming up with solutions to really gnarly security problems that get identified. It's easy to identify issues, but it's much harder to identify the root cause, do the analysis, and figure out how to prevent it from happening again in the future.

Harshil: Yeah, that's pretty cool, man. I feel like that's the big difference in how consulting people approach things: you can write a report and walk away the next day and you're done. You're right, you wash your hands of it. And obviously it's easier said than done, but taking it to the finish line, getting things remediated, working with the rest of the organization, building champions across the company, is incredibly important to be successful as a security professional within an organization. And what do you think are the key differences between the two? Since you've lived in both these worlds for a while now, do you feel different as an insider security person now?

Daniel: Yeah, I think as a consultant, I really cared about my craft. Let's just take penetration testing as an example: always being up to date on the latest exploits, understanding how to do certain things, and being as cutting edge as I can be. But I was never really growing that empathy muscle in terms of understanding what my audience was going through when it comes to “Here's a whole bunch of vulnerabilities or a whole bunch of issues in front of you, now you have to fix them”, right? Well, now that I'm on that side of the house, I can understand all the various stakeholders, what they have to deal with from a prioritization standpoint, and what goes into addressing those things. It's much easier to, as I say, cast the proverbial stone at a company if you've done multiple tests over the years and keep seeing the same things over and over again. But as soon as you put on that other hat and you can understand why you may be seeing the same things, it really gets you to have that empathy and to unwrap the logistics within a company. It's been very interesting and also very rewarding because it allows me to grow my people interaction muscle, which in my opinion is very underdeveloped, so that has been great.

Harshil: Yeah, and I can imagine with your background across Bridgewater and Bishop Fox, finding problems in your role would not have been an issue. Getting them fixed was probably the bigger challenge, which is the case with almost every single company, right? So when you joined Unqork, were you joining an existing product security team or were you beginning from scratch?

Harshil: Yeah, that's awesome. I think we had a couple of guests earlier who said exactly the same thing, which is they leverage OpenSAMM or BSIMM. So it sounds like a well established pattern now. Now, one of the things that I feel is important is that OpenSAMM and BSIMM give you a very good direction of what an ideal state should be across many different domains. But how do you pick and choose which ones you can invest in? Because if you're starting from a lower level of maturity in any organization, you're going to have a lot of work to do across all the domains of OpenSAMM and BSIMM. But how do you pick and choose which ones are a priority or should be a priority for that particular business that you're in, that particular company you're working for?

Daniel: Yeah, so prioritization can come down to a couple of different things. A lot of people like to follow a risk-based prioritization framework, but that only works, in my mind anyway, if you understand what your crown jewels are. You do a crown jewels assessment, you understand the risks to the “must protects”, right? And then once you understand what your most important things are within your company, you can start to prioritize your action items out of that: “Okay, these are the things that I have to protect, what are the things that I need to do to make sure that they are protected?”. And then there will be nice-to-haves, or other things that you would want protected but that maybe aren't critical to the business in terms of operating. So from a holistic security program perspective, that's where I would start. But from a product security perspective, I would look at everything from what the current state of the product is, what the vulnerabilities and weaknesses are, whether it has been exploited before, what third party dependencies are in use, and what the attack surface looks like. And then you can really start to stare into things like the developers themselves: where is the preponderance of vulnerabilities coming from? Is it one singular developer or is it a couple of different developers? And that may start to reveal things like a lack of security education and training for developers from a secure coding perspective.

Harshil: You know, what I like about what you just said is that you mentioned a couple of different things. One is leveraging the OpenSAMM model to give a direction of what you should invest in, but also using the lens of what the crown jewels are and what you really need to do. So I'm guessing you're combining those two things: these are the objectives, and the OpenSAMM or BSIMM model is a way to get to those objectives. But when you think about crown jewels, now that you're a product security person, what are those crown jewels? I'm not asking about any specifics at Unqork, but generally, are those just technical assets like applications, platforms, software assets, or broader things like perception within the market or customer credibility or whatever? How do you think about what those crown jewels could be?

Daniel: Yeah, there are definitely tangibles and intangibles. From a tangible technical perspective, source code and intellectual property are definitely very high on the list for a software company, of course. But there are also the more intangible things: our customer list, or customer base, or inference of who our customers could be. Maybe there are customers that we are not allowed to share publicly that are actually doing business with us. That's a level of trust that's been established that you can't really replace if it's ever breached. So customer trust is a huge piece of it as well, and then also our processes and how we do things. Take a battle card from the sales and marketing go-to-market team: if a competitor was able to get a hold of it, they would understand how we see ourselves in the marketplace and how we see them as a competitor in the marketplace. And that would allow them to adopt a strategy that could actually beat us in the market. So there's a lot of thinking exercise around that, around threats and the red teaming concept from a non-technical perspective, that goes into it as well.

Harshil: That's amazing. I love the business enablement lens on this as well. So it's not just that we are trying to protect, but we are also trying to advance the business by winning in the market, making Unqork the absolute winner, at least when it comes to doing things from a security engineering perspective. Does that mean your team is building, or is helping build, security features within the platform as well?

Harshil: That's amazing. So I'm guessing your team is building those secure frameworks, secure paved roads for the engineering team to use. So one challenge that I've seen in the past is that a lot of times security teams will build those services or frameworks or secure defaults and expect engineering to just adopt them. But the reality is, just because you made it doesn't mean people will actually use it. So have you run into that challenge of adoption of things that you have built?

Harshil: Yeah, so that's a fantastic one. I love this point because it is so fundamental to almost every single product security and application security initiative. Even if you think about it as a basic thing like “Hey, the AppSec team just bought this brand new shiny tool”, let's say a dependency scanner, other teams then need to adopt it. Whether it's a dependency scanner tool, a new crypto library, or an authentication/authorization framework, whatever you have built as a security team, how do you get people to adopt it? And that is very much an operational, day-to-day thing that you just have to deal with as leaders of AppSec and ProdSec teams. What are your tricks, what are the methodologies that you use to drive adoption across the organization for the things that your team is working on?

Daniel: Yeah, I think really step one is growing a cross-functional level of communication with other leaders within the org; that is definitely important. If you don't have a good relationship with engineering or development from a security perspective, you're always going to have that friction, you're always going to get pushback. The thing that a lot of security practitioners come in with is more of an absolutism approach, and when you put yourself in the shoes of engineering or development, again exercising that empathy muscle, all or nothing is sometimes good, but most times it's not. So being able to compromise based off of the risk posture of the thing that you're talking about is absolutely key. And then also being very, I guess, truthful with yourself, right? There are always going to be known issues, there are always going to be known unknowns, and then there's the weird gray matter of unknown unknowns. So really being able to step back and say, “Okay, based off of the risks that I am aware of, and maybe the risks in my marketplace that we haven't actually seen yet, what makes sense from a prioritization standpoint?”. And then making sure that you're getting in sync with the engineering and development teams on what we'd like to do this quarter or the next quarter, what the level of effort is that will make it possible to achieve that, and working with them on it. Unfortunately, product security or application security is always a little bit lagging behind where you want it, but that's just the nature of it.

Harshil: Yeah. And I think logistically, if you think about it, the challenge comes in because you're one central team doing product security and application security, and you might have tens, if not hundreds, of different dev teams. So when you want to drive adoption of certain things, let's say a crypto library that you want the dev teams to adopt, do you go all the way to the top, to the CTO or the VP, and say, “Hey, we'd like all of your teams to adopt this new thing”, so they tell their directs, “Hey, we need to adopt this thing”? Or do you go one by one with every single leader and take that longer route of socializing what your teams are building?

Daniel: Yeah, I think the key is really, at the strategic level, making sure that you're in sync with the leadership level in engineering or development on kind of the rough direction of where you need to go, right? Maybe you're not in sync on all the different details, or you haven't had a conversation about them, but really you shouldn't be worried about those things at that level. You should be worried about the direction. As long as you're directionally going in an appropriate manner in terms of adopting more security, the little details usually work themselves out in terms of prioritization. So if you think about platform security features versus operational security practices, for example, a lot of engineering teams will push back and say, “Hey, vulnerability management is a security-only practice. You're asking me to slow down on all these platform enhancements and features in order to do this hygiene-related work, whether it's patching, making sure your third party libraries are up to date, or closing vulnerabilities through secure coding”. Really, that's an operational practice that is core to an engineering discipline. And one thing that I actually left out at the very beginning, when we were talking through the intro: before I was a security practitioner, I was actually a developer, a coder in PHP and Ruby on Rails, unfortunately. So I definitely can, again, empathize with engineering and development teams because I've been there as well. And it's just a matter of, if you're going to develop something, why not develop it securely from the beginning? Why wait until security comes to you and basically throw security on at the end, right? If your definition of done is to produce something that is insecure, then maybe you should start thinking about what the true definition of done should be. And I know I'm going down a rabbit hole a little bit, but it's really tethered to this: at the tactical level, operational security should be part of the engineering discipline, and the platform security enhancements and strategic initiatives are what you should really be focusing on when prioritizing a roadmap.

Harshil: Right. Yeah, I don't think anybody would disagree with what you're saying, but the implementation of that is just so incredibly hard. Even operational security. You know, patching, we've known patching is important since the beginning of time, I guess, but it's so, so hard. It has been very difficult to do before, and even now, in a modern DevOps, cloud-native world, it's the exact same issue. It's just hard for people to do boring things, and honestly, patching is not something that developers get excited about and wake up in the morning and say, “I'm going to patch a bunch of dependencies”. That just doesn't happen. It's really, really hard. I get it. I think the approach you guys are taking, which is building safe defaults, secure defaults, and safe, secure frameworks to just eliminate that risk so nobody has to even think about going back and fixing security debt, is a fantastic approach, as long as you can drive adoption of those things. So do you use any metrics or any ways to report on adoption of those secure defaults and frameworks and things that you guys are building?

Daniel: Yeah, so we've just started doing KPIs, really, in terms of our vulnerability management practice. We have all of our issues and defects categorized by the different development teams and by what stage they're at, and we're tracking those. And then also in terms of adoption of processes, we have other metrics that we can rely upon. I think, just to toot our horn a little bit here, Unqork decided to go after the United States government as a client. One of the things you have to go through as a SaaS company is FedRAMP, right? You have to get your authority to operate. We achieved our ATO in roughly six months, which is very, very fast, especially in terms of the maturity of a kind of startup company and our platform. And one of the things that we had to do was to integrate and work well with the development and engineering teams to make sure that we were providing as much lift as possible, helping them come along for the journey, helping them understand the control requirements, the things that we had to build and really adopt in terms of federal requirements. And so integration, I will be beating that drum a lot when it comes to helping AppSec and product security teams scale and really accomplish the things that you need to get done.
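
For readers who want a concrete picture of the kind of KPI tracking Daniel describes, here is a minimal sketch in Python that tallies open defects by owning team and stage and computes a mean time to remediate. The ticket fields, teams, and values are invented for illustration and are not Unqork's actual schema:

```python
from collections import Counter
from datetime import date

# Hypothetical ticket records; the field names and teams are illustrative only.
tickets = [
    {"team": "payments", "stage": "triaged", "opened": date(2022, 5, 1), "closed": None},
    {"team": "payments", "stage": "remediated", "opened": date(2022, 4, 2), "closed": date(2022, 4, 20)},
    {"team": "identity", "stage": "in_progress", "opened": date(2022, 5, 10), "closed": None},
]

# Open defects per team, broken out by stage.
open_by_team = Counter((t["team"], t["stage"]) for t in tickets if t["closed"] is None)
for (team, stage), count in sorted(open_by_team.items()):
    print(f"{team:<10} {stage:<12} {count}")

# Mean time to remediate for closed defects, a common SLA-style metric.
closed = [t for t in tickets if t["closed"]]
if closed:
    mttr = sum((t["closed"] - t["opened"]).days for t in closed) / len(closed)
    print(f"mean days to remediate: {mttr:.1f}")
```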

Harshil: Yeah, that's amazing. So I'm guessing the environment would be very complicated now that you have a FedRAMP environment and a commercial environment as well. For a lot of these operational things, you mentioned vulnerability management and things, KPIs and metrics and reporting, do you have things like a program manager, project managers, or some help from people who can get stuff done, or is it all done by your security engineers?

Daniel: No, we definitely have a great bunch of program managers and technical program managers. We have a dedicated FedRAMP program manager within security, and then we have a program manager and a senior program manager, also within security, who interface with all the program managers and technical PMs within engineering and development. One of the things that was key was integrating the two teams together so the PMs are working together to make sure that prioritization was aligning. And then we brought basically all the senior security engineers and the regular engineers and developers together on a regular cadence to make sure that we were driving vulnerabilities down and driving platform enhancements up, and really doing that on a sometimes multiple-times-per-week cadence, being as tight-knit as possible.

Harshil: That's fantastic. So what I'm hearing is the technical program management team that you have, your security TPMs interface with the rest of the organization's TPMs on strategic priorities, things like adoption of secure defaults, safe frameworks and libraries and things that you guys are building in security engineering, and also improving the metrics on the operational things like vulnerability management and SLAs and things like that. Is that correct?

Daniel: Yes, that's exactly correct. It's really paramount for relationship building and communication to have the two sides constantly talking with each other. A great example would be when we first rolled out our formal security architecture review process: we had our senior PM work with the various PMs within engineering and development to let them know what was coming and what the process looked like, and then gather the various development leads together to step through the process. If it was just a kind of ad hoc, one-on-one type process, or if we had just implemented it and said, “Hey, everybody, go through here”, then we would see poor adoption, we would hear a lot of issues, and it just wouldn't really be a very good plan, to be honest.

Harshil: Yeah, and this is one of the things that gets discussed so many times. I'm a part of several application security Slack channels, and every once in a while somebody will ask, “Hey, we want to do architecture reviews, but we don't know how to, because the dev teams are modern, they operate in Agile, and there are no real waterfall-style checkpoints where you can say, 'I'm going to sit down and do an architecture review on this thing that you're building'”. That's just not the case these days. But at the same time, companies that have mature processes often have similar review processes already in place, not for security, but for other things. I'm kind of wondering how you approached that at Unqork. What was that middle ground? Where would you get inserted, and who did those architecture reviews?

Daniel: Yeah, that's a great question. So we're actually working on an initiative now to deeply integrate our security architects and our security engineers into the, you know, we call them camps, basically the various teams within engineering and development. So we'll have an architect and an engineer be a part of the grooming of the backlog and also help identify security champions, right? If there's a particular engineer or developer who has an interest or is able to spot badness or security issues more than others, we identify those people and then help cultivate them and train them up to help with the scaling factor, because typically most organizations have about one security or AppSec engineer for every, I don't know, 20 or 30 engineers or developers. It's so hard to scale without having engineering helping you. So that's what we're currently working on: integration, scaling, growing security champions. And then the one trap that you don't want to fall into, especially in a very fast agile shop with incremental changes all the time, is doing an architectural review on every single change, every single ticket that happens. If you do that, you're going to bury yourself underneath work that is going to be different than what it was a week prior. So really take a step back and be a part of the larger epic-related conversations. What are you really trying to achieve? If you can get in sync at that level with the development team, and then have that security champion be a part of those conversations and help usher in things like a continuous threat model that builds iteratively week after week, or even threat modeling as code, which is kind of a newer concept, you can get away from the more archaic “Let me create something in Visio or Lucidchart and drag and drop arrows”, and you can procedurally generate your threat models. It's much easier.
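
As one concrete illustration of the threat-modeling-as-code idea Daniel mentions, here is a minimal sketch using the open-source pytm library. The components and data flows below are invented for the example, and pytm is just one tool in this space; the transcript does not say which tool Unqork actually uses:

```python
# Requires: pip install pytm  (https://github.com/izar/pytm)
from pytm import TM, Actor, Boundary, Dataflow, Datastore, Server

tm = TM("Example web app threat model")
tm.description = "Illustrative model; the components are invented for this sketch."

internet = Boundary("Internet")
internal = Boundary("Internal network")

user = Actor("End user")
user.inBoundary = internet

web = Server("Web application")
web.inBoundary = internal

db = Datastore("Application database")
db.inBoundary = internal

login = Dataflow(user, web, "Login request")
login.protocol = "HTTPS"

query = Dataflow(web, db, "Read/write user records")
query.protocol = "TLS"

# Running e.g. `python threat_model.py --dfd | dot -Tpng -o dfd.png` generates a
# data-flow diagram, and `--report` renders findings from pytm's built-in threat
# rules against this model, so the model can evolve in version control alongside code.
tm.process()
```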

Harshil: Yeah, I 100% agree. I'm really curious, though: when you say don't threat model everything, which is obvious, how do you know what to threat model? You mentioned picking epic-related things, which I'm guessing means bigger initiatives, not changing a color on a text box, right? So it's more of a bigger conversation. Does that assume your engineering organization has a process to make a distinction between an epic-level, architecturally impactful change versus a small change that's just a task or a bug fix and doesn't need to be reviewed from a threat modeling perspective?

Daniel: Yeah, that's a great question. So actually, maybe about a month ago, I had that conversation with one of our leaders who is responsible for the user interface and the user experience, and they said, “Oh, these are all just graphical changes, they don't need security review”. And then in the conversation it turned out there were new types of input and new ways of handling user data. Those may actually need a review in terms of whether the right regex is in place for this new data type, that sort of thing. So really it comes down to educating developers and engineers on when you really do want to do something from a manual perspective, and what are some of the things that you can automate, right? The boring stuff that you mentioned earlier, no one wants to do all the boring stuff in terms of vulnerability management and patching. Well, if you can automate as much as you possibly can, in a safe way of course, then you can focus on the more meaningful things. And that's really the key if you're trying to scale your practices and processes within a much larger development organization: you do need to automate as much as you can while you focus on the more meaningful things. That also includes things like creating a list. It can be as simple as bullet points: these are the things that we must securely review, anything that touches authentication or authorization, anything that has to do with the crypto module. Like, never write your own crypto; if you want to write crypto, come to security. Or anything that touches a session token, for example, that should require some level of security touch. But if you want to change CSS, or change the footer, or text font colors, feel free to go ahead and do that. That's a UX-related thing and not security related.
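
One way to turn the kind of "must review" bullet list Daniel describes into a lightweight automated gate is to flag changes that touch security-sensitive paths or keywords. The sketch below is a hedged example; the path prefixes, keywords, and base branch are placeholders, not Unqork's actual list or tooling:

```python
import subprocess

# Illustrative "must review" triggers; adjust to your own codebase.
SENSITIVE_PATHS = ("auth/", "crypto/", "session/")
SENSITIVE_KEYWORDS = ("jwt", "token", "password", "cipher", "secret")

def changed_files(base: str = "origin/main") -> list[str]:
    """List files changed relative to the base branch."""
    out = subprocess.run(
        ["git", "diff", "--name-only", base],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]

def needs_security_review(paths: list[str]) -> list[str]:
    """Return the changed paths that match a sensitive prefix or keyword."""
    flagged = []
    for path in paths:
        lowered = path.lower()
        if lowered.startswith(SENSITIVE_PATHS) or any(k in lowered for k in SENSITIVE_KEYWORDS):
            flagged.append(path)
    return flagged

if __name__ == "__main__":
    hits = needs_security_review(changed_files())
    if hits:
        print("Security review suggested for:")
        for path in hits:
            print(f"  {path}")
    else:
        print("No security-sensitive paths touched; CSS and copy changes sail through.")
```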

Harshil: So we used to do that in one of my previous roles, and it took us a while to come up with those five key questions. That's when I realized that keeping something super simple is so difficult. Even writing the statement, you want to keep it broad enough that you catch the right things, but not so broad that everything falls in scope, right? It's a difficult exercise to get to that point. One of the challenges we still couldn't figure out was, even once we came up with those questions, how do we get in front of the developers so they see them at the right time? Because what was happening was we were hiring like crazy, a lot of new people joining the team, people departing, teams changing, you know, leadership changing all the time. So even if you have that list of ten questions, how do you know the developers actually looked at it at the right time so they can respond?

Daniel: Yeah, that's a huge issue in terms of documentation and engineering resources knowing where to go for the right documentation and when. So one of the things that I'm very passionate about is identifying code owners, right? Understanding who owns a particular function is especially important when it comes to things like session tokens or authorization models in the code base. Because if you identify that owner, they own the risk around that component, so to speak. And so when you take a step back and you bring on, say, new engineers or new developers, you'll have that documented, and owners should be able to document the threat model against what they functionally own, as well as the associated risks and the proper way to use it. Really holding them accountable to documenting is, in my opinion, very important. But the other thing is that it's not just security documentation. The things that you should do from a security perspective should be part of engineering documentation. I find in a lot of places they will generate a huge Confluence tree of all this robust documentation for security, and the engineers will say, “I didn't know it was on Confluence, I didn't know to look there”, because they only live in their area of the knowledge base, or they only live in their area of the repo. So make sure security concepts are really well integrated into everything that engineering consumes, and make sure that you're doing things like quarterly newsletter blasts to let them know, “Hey, this information is here”, or, in the onboarding process for a new dev, making sure that they understand these are the resources and the processes.

Harshil: I love it. Do you have a security section in the new devs onboarding?

Daniel: That's a great question and I wish I had the answer for that.

Harshil: All right, cool. Yeah, that's a great idea, though. Like, if you're documenting all these things, if you're building these enablement tools for developers, might as well get them started the right way, right? Like get them onboarded, get them pointed to the right resources. Even if they might not become your security champions from day one, at least they know where to find the resources. That's a great point. You also mentioned something very interesting, which is you're passionate about code owners, and that can get very gnarly a lot of times, right? Especially if you're struggling with legacy code, mono repos and things like that. Are you able to share any insights on how your organization manages code owners? Nothing confidential, obviously, but any tips and tricks?

Daniel: Yeah, I can share a general process. If you use Git, for example, you can look at the Git history or the git blame log and rewind back to who actually created the code component in the first place. If that individual is no longer at the company, then you can see who touched what areas of the code over its life cycle to date, essentially, and see who the major contributors are. That will help you understand who would be the best person to own that code. If it's somebody who's touching it on a regular basis, that's probably more likely than somebody who touched it three years ago and only added a to-do item or one little call, something like that. So that's definitely a helpful thing. I would also say don't be afraid to interject yourself into engineering meetings and ask questions. A lot of the time I will just be like, “Hey, sorry to play the stupid guy here, but who's actually owning our JSON Web Token structure? Who can answer these questions for me?”. And then over time, you build out that knowledge base of who knows what about the code and who owns what parts of the code, and then you sync with engineering leadership to say, “Hey, based off of what I know and based off of the activity of engineers, it looks like these things are true. Can you agree? Are these true or not true? And if that's the case, can you own X, Y, and Z?”
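
Daniel's git-history approach lends itself to a small script. The sketch below counts recent commit authors for a given path with `git log` to suggest likely owners; the one-year window and the top-three cutoff are arbitrary choices for the example:

```python
import subprocess
import sys
from collections import Counter

def likely_owners(path: str, since: str = "1 year ago", top: int = 3) -> list[tuple[str, int]]:
    """Count commit authors who touched `path` recently and return the most active few."""
    out = subprocess.run(
        ["git", "log", f"--since={since}", "--format=%an", "--", path],
        capture_output=True, text=True, check=True,
    )
    authors = Counter(line for line in out.stdout.splitlines() if line)
    return authors.most_common(top)

if __name__ == "__main__":
    # Usage: python likely_owners.py path/to/component
    target = sys.argv[1] if len(sys.argv) > 1 else "."
    for author, commits in likely_owners(target):
        print(f"{commits:4d} commits  {author}")
```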

Harshil: That's amazing. That's pretty cool. Going the Git Blame route, trying to figure out who the probable owners or probable people who can answer those questions. That's a good way, as compared to keeping spreadsheets of people that may get outdated very quickly.

Daniel: Right. And the originating owner, the one who committed the code in the first place, may not actually be the best resource for being a code owner. So really it's understanding the relevancy of touching the code base as well.

Harshil: Yeah. And it also becomes tricky in mono repo type of situations because you have one manifest file and multiple people from multiple teams are touching it and then the ownership might go down to this specific dependency in this manifest file, right? So it does get complicated quickly. Cool, awesome. So Daniel, it sounds like you made a lot of progress from when you joined Unqork to the status of the things that you've been talking about in this podcast. What does the future look like in terms of the next couple of years of roadmap or maturity roadmap for your team?

Daniel: Yeah, I think from a general perspective, most people's roadmaps are going to be centered around the software bill of materials, based off of that executive order that came out earlier in the year. So we're really focusing on trusted supply chains, making sure that we identify all third party libraries, that we understand the risk of using certain libraries, and that they're kept updated, playing that patch game. But also, it's not just third party libraries, it's our infrastructure as well. Confidential computing is becoming a huge thing: making sure that our worker nodes are doing the right level of encryption, all those kinds of things, and really giving customers a solid understanding and, I would say, a level of comfort with our security. Because application security doesn't really stop at the application layer. At least from a product security perspective, it's the entire ecosphere. It's the cloud infrastructure, it's what spins up the application, the underlying server, not just the code base.
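
To make the software bill of materials point concrete, here is a small sketch that reads a CycloneDX-format SBOM (one common JSON format that SBOM tools emit) and lists the components and versions it declares. The filename is a placeholder, and this is an illustration rather than anything Unqork has described using:

```python
import json

# Placeholder filename; SBOM generators that emit CycloneDX JSON produce this format.
SBOM_PATH = "sbom.cyclonedx.json"

with open(SBOM_PATH) as f:
    sbom = json.load(f)

# CycloneDX stores declared dependencies under the top-level "components" array.
components = sbom.get("components", [])
print(f"{len(components)} declared components")
for comp in sorted(components, key=lambda c: c.get("name", "")):
    name = comp.get("name", "?")
    version = comp.get("version", "?")
    purl = comp.get("purl", "")
    print(f"{name:<30} {version:<12} {purl}")
```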

Harshil: That is amazing, Daniel. This has been such a phenomenal conversation. I love the fact that we talked about so many different things, not just technology, but also how to really make this function more effectively in a holistic manner. Partnering with engineering, building out processes, how to drive adoption of things across engineering. This is all the time we have for this recording today, Daniel. It has been such a pleasure to have you on this podcast. Thank you so much for spending the time.

Daniel: Yeah, likewise. Thank you.
