Security Unlocked 10.27.21
Ep 50 | 10.27.21

Securing Modern Software


Nic Fillingham: Hello. And welcome to "Security Unlocked," a podcast from Microsoft where we unlock insights from the latest in news and research from across Microsoft's security, engineering and operations teams. I'm Nic Fillingham.

Natalia Godyla: And I'm Natalia Godyla. In each episode, we'll discuss the latest stories from Microsoft Security, deep dive into the newest threat intel, research and data science. 

Nic Fillingham: And profile some of the fascinating people working on artificial intelligence in Microsoft Security. 

Natalia Godyla: And now let's unlock the pod. 

Nic Fillingham: Hello, the internet. Hello, listeners. Welcome to the 50th episode of "Security Unlocked." My name is Nic Fillingham. I'm joined, as always, by Natalia Godyla. Natalia, welcome to you. Congratulations on 50 episodes. We did it. Achievement unlocked. 

Natalia Godyla: I don't think it's fully sunk in that we are at Episode 50, but I also don't think it's sunk in that we've done 49 other episodes. 

Nic Fillingham: I think it's a big achievement. And, you know, to all the listeners that are listening and have been subscribing and have been with us from wherever you joined the "Security Unlocked" journey, thanks for sticking with us, and we hope you're enjoying the podcast and will stick with us for the next 50. Natalia, you were working on some 50th anniversary gold-stamped chocolate coins. Can you give us an update on that? Are they - you had a portable, handheld sort of crucible, smelting device. 

Natalia Godyla: As for smelting coins - going great. My neighbors love it. I live in an LA apartment building, so at a minimum they're enjoying the strong smell of chocolate. Don't think everybody else - i.e. my landlords - love the, you know, burning bit. But... 

Nic Fillingham: (Laughter) You just have to explain to them that it's in celebration of 50 episodes of your podcast. And they'll be like, oh, why didn't you say so? Keep going. 

Nic Fillingham: Speaking of commemorative gold coins - I'm not sure that - that's a bit of a stretch. We'll see. Our guest today is the OG of security research - Chris Wysopal, aka Weld Pond, one of the founding members of the L0pht, who really sort of helped define this space of cybersecurity and security research and vulnerability discovery and mitigation back in the '90s. Chris is joining us today as part of a conversation that Natalia and I had with him for "The Security Show," which is a YouTube video show that Natalia and I worked on, where we brought luminaries on to talk about security topics. But we thought that conversation was so great, we wanted to bring it over to the "Security Unlocked" audience, and we thought Chris would be a very fitting guest as part of Episode 50. Natalia, you know, "The Security Show" was your brainchild. Tell us - what can folks expect in this conversation with Chris? 

Natalia Godyla: Well, it's an incredible conversation focused primarily on application security. He draws from, obviously, his longtime experience in the security industry and all of the vacillations that the industry has experienced over the years. But we asked questions like - what tools and frameworks do you use to build secure software? And how do you connect your security and dev teams so that you have a modern, secure software development life cycle? And he just gives such tactical guidance for security teams who are really looking to level up their software development or level up the security of their software. 

Nic Fillingham: We're going to bring you many more of these, where we get to speak to fantastic security luminaries and thought leaders who don't work for Microsoft and are out there in the non-Microsoft world doing great things. So be on the lookout for that in future episodes of "Security Unlocked." But again, thank you so much for listening. Thank you so much for subscribing. Even if you've just listened to one episode, even if this is your first episode, thanks for coming on the "Security Unlocked" journey with us. We hope you enjoy Episode 50. We hope you stick around for more. And with that, on with the pod. 

Natalia Godyla: On with the pod. 

Nic Fillingham: Today we are joined by Chris Wysopal, who is the co-founder and CTO at Veracode. Well, Chris, welcome to the show. Thanks for your time. Could we start with a quick intro for the folks that maybe have heard your name but can't quite place the face? 

Chris Wysopal: Yeah, so my name's Chris Wysopal. I'm the co-founder and chief technology officer at Veracode. Started the company 15 years ago. We focus on application security. Before that, I was a security consultant doing pen testing and code reviews. Actually, did some of that work for Microsoft as a consultant, helping secure some of the flagship applications. And before I was a consultant, I was a vulnerability researcher. Vulnerability research actually started out of the hacker scene and the computer underground. And when I started doing it, it was something that wasn't quite welcomed by vendors and governments because no one likes their pants pulled down. And so we kind of had to hide the fact that we were doing this from the people that were employing us in our day jobs. I was a programmer. 

Nic Fillingham: Any of these, like, mid to late '90s tech movies - "Hackers," "Sneakers," "Lawnmower Man" - you were there. Did any of them get it right? Or were they all just silly Hollywood hyperbole? And have you seen any of them recently? Do they hold up? 

Chris Wysopal: Well, you know, I think "Sneakers" kind of got it right. Back when I was with my hacker crew at the L0pht back in the late '90s, we were saying, like, maybe this could be a job for us - like, doing vulnerability research and doing pen testing. And we knew about "Sneakers," and we said, we want to be like the "Sneakers" guys, right? We want a van. We want to roll up. We want someone who's doing, like, radio surveillance and someone else who's sneaking in. And you know, we actually saw that as, you could actually do that as a job. 

Natalia Godyla: What are the biggest threats to software today - biggest vulnerabilities? And are organizations ready for that? 

Chris Wysopal: When developers use open-source libraries and they include them in their products, and then they don't really think about the vulnerabilities that are coming through that open source, it becomes a problem for the entire ecosystem. We saw that with Heartbleed - everyone scrambling around, trying to patch their usage of the SSL library. If you're using open-source, you really have to think about it differently. You have to think about it as, you're managing, you know, something a supplier is delivering to you. But you have to think about, what's the bill of materials of my software? What did the supplier drop off to me? And then I have to make sure that I'm keeping track of vulnerabilities as they come forward in the future and update that. So it's kind of like a car manufacturer, you know, keeping track of all the parts they use in case there is a defect in, say, airbags, and you need to do a recall and replace them. 

Nic Fillingham: And are there specific tools or processes to actually do that - to monitor the open-source libraries, the open-source code that you're using, to treat it more like a supply chain with a bill of materials? If a business or organization is sort of adopting open source for the first time, how do they do it safely? 

Chris Wysopal: Yeah, so there are definitely products out there that do it. You know, OWASP has a free tool called Dependency-Check that I know works for Java and some other languages. There are commercial tools - Veracode has one. The category is called software composition analysis. And the idea is, you can run this on your software, either in source form or binary form, and it gives you a list of all the open source you're using, and it detects the version numbers. And then that connects to a vulnerability database, which can then tell you, hey, this version has a vulnerability in it. 
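The core of what Chris describes - detect library versions, then match them against a vulnerability feed - can be sketched in a few lines of Python. Everything here is a hypothetical stand-in: the library names, versions and advisory IDs are invented for illustration, and a real tool would pull from a live feed such as the NVD rather than a hard-coded dictionary.

```python
# Minimal sketch of the software composition analysis idea: take an
# inventory of dependencies (the "bill of materials") and look each
# pinned version up in a vulnerability database.

# Hypothetical dependency manifest, as a tool might extract from a build.
dependencies = {
    "examplelib-ssl": "1.0.1",
    "examplelib-json": "2.4.0",
}

# Hypothetical vulnerability database: library -> affected version -> advisory.
vuln_db = {
    "examplelib-ssl": {"1.0.1": "HYPO-2014-0001: buffer over-read in heartbeat"},
}

def check_dependencies(deps, db):
    """Return (library, version, advisory) tuples for vulnerable pins."""
    findings = []
    for lib, version in deps.items():
        advisory = db.get(lib, {}).get(version)
        if advisory:
            findings.append((lib, version, advisory))
    return findings

for lib, version, advisory in check_dependencies(dependencies, vuln_db):
    print(f"{lib} {version}: {advisory}")
```

The important design point is that the check keys on exact versions, which is why the inventory step matters: you can only match advisories for versions you know you're shipping.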

Chris Wysopal: Some of the newer capabilities that these tools have is to actually tell you if you're using the part of that open-source library that's going to make your application vulnerable. So not every open-source library with a vulnerability in it is actually making your application vulnerable, because you might not be using that part. And so being able to know if you're using the part that's exploitable is a big help in managing this. Because we find that some of our customers have one application using hundreds of libraries. And you just know, when you're using hundreds of libraries, there are definitely vulnerabilities in there. It becomes difficult to manage. You want to have that information. It helps lower the amount of work you're doing to maintain security. 
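This "are you actually calling the vulnerable part" check can be illustrated with the standard library's ast module. The vulnerable function name below is hypothetical, and this toy only looks at direct call sites in one file - a real reachability analysis would resolve imports and follow the whole call graph - but it shows why the same library can be risky in one application and harmless in another.

```python
import ast

# Hypothetical vulnerable API in some third-party library.
VULNERABLE_CALLS = {"parse_heartbeat"}

def reachable_vulnerable_calls(source: str) -> set:
    """Return which known-vulnerable function names this source calls."""
    called = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            func = node.func
            if isinstance(func, ast.Name):        # e.g. parse_heartbeat(...)
                called.add(func.id)
            elif isinstance(func, ast.Attribute):  # e.g. lib.parse_heartbeat(...)
                called.add(func.attr)
    return called & VULNERABLE_CALLS

# Two apps using the same library: only one touches the vulnerable part.
app_a = "import examplelib\nexamplelib.parse_heartbeat(data)\n"
app_b = "import examplelib\nexamplelib.parse_json(data)\n"
print(reachable_vulnerable_calls(app_a))  # flags the vulnerable usage
print(reachable_vulnerable_calls(app_b))  # nothing to flag
```

Since ast.parse never executes the code, this kind of check is safe to run on untrusted or unfinished source in a pipeline.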

Natalia Godyla: So stepping back for a bit, what is modern, secure software development? What does that entail? What does it look like? 

Chris Wysopal: It's interesting because modern, secure software development has changed because development practices have changed. Modern software development is much more iterative now and much more agile, where things aren't planned out in advance. And you have people writing software where they are going from new feature idea to delivering that software in maybe a week or two or even in days. And so modern software security has had to adapt to that. 

Chris Wysopal: What you need to do is, you need to do things in small chunks and in an iterative way so that, say, you're threat-modeling just that one new feature at a time, sort of on-demand, or you're doing manual penetration testing on just one feature at a time. And your automation has to run at the speed of your development pipeline so that if your development pipeline - you know, building and running your automated tests - takes a few minutes, your security testing has to fit into that few minutes, too. So this is where we're seeing automated tests get faster and faster but also operate on smaller and smaller chunks and shifted further and further left. So you're not - you're working on the little pieces of the application, and it makes it much quicker to get to discovering that vulnerability. 

Nic Fillingham: So let's talk about shifting left. What does shifting left mean, and how do you do it? 

Chris Wysopal: Shifting left means that security really becomes part of the development process. The traditional approach was either not being part of the process at all, or saying, before we deploy the software, let's run all our security testing at once, right before we deploy it. The problem with both of those is you're about ready to deploy your software when you get this big list of things to fix. So obviously, that's going to slow things down. 

Chris Wysopal: So shifting left is trying to discover the vulnerabilities as close to when those vulnerabilities are created as possible. So you would want to be doing automated testing, you know, on a feature branch as a developer or some developers are working on a feature, before it's in that mainline branch, when they're deciding, hey, are we done here? Are we done with this feature? Are we going to commit this code? Shifting left would be like, let's do our security testing there, in that part of the pipeline where I'm doing my unit testing. You can even shift further left and scan code in the IDE right as the developer is writing it. 

Chris Wysopal: There are some limitations in that you don't have the full context of the application, because you might just be looking at, you know, one method of code, one class of code. But if you can find a certain percentage of vulnerabilities there, that's great. The other thing is, as we were talking about earlier, third-party code - when you bring an open-source library into your application, you want to understand if that version has vulnerabilities right then, when you first start using it or when you're building in the pipeline. You don't want to find this after the fact. So lots of different techniques of testing software can be shifted left. And even penetration testing can be shifted left. 

Natalia Godyla: So I think you just touched on that - some of the how elements. So how do you actually incorporate security into the process? What tools do you use if you're - if a company's trying to transition to this process, what should they be thinking about as, like, foundational elements? 

Chris Wysopal: Yeah. So foundational elements for tools would be things like static analysis, which basically grew out of code review, right? So you can actually inspect all the code just like you would in a manual code review, except you're doing it extremely fast. And you can look for a huge number of different coding patterns - control flow, data flow and usage of risky functions - that will highlight something that's exploitable. 
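A toy version of the simplest of those patterns - flagging usage of risky functions - fits in a few lines with Python's ast module. This only does the risky-function part Chris mentions; the control-flow and data-flow analysis that commercial static analyzers layer on top is far more involved.

```python
import ast

# Classic injection-prone builtins to flag; a real ruleset is much larger.
RISKY_FUNCTIONS = {"eval", "exec"}

def find_risky_calls(source: str):
    """Return (line_number, function_name) pairs for risky builtin calls."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in RISKY_FUNCTIONS):
            findings.append((node.lineno, node.func.id))
    return findings

sample = "user_input = input()\nresult = eval(user_input)\n"
print(find_risky_calls(sample))  # [(2, 'eval')]
```

Because the scan is purely syntactic, it runs in milliseconds per file, which is exactly the property that lets this kind of check live inside a fast development pipeline.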

Chris Wysopal: We talked a little bit about software composition analysis, which tells you what open source you're using, but there's a real benefit to doing runtime testing, which actually models how the code is running in a particular environment. So tools like dynamic application security testing, which really focus on exercising a web interface or an API interface, a RESTful API. There's interactive application security testing, which inspects code as it's running, driven by unit tests or other kinds of testing. And those types of tests can be built into the pipeline also, and tell you things that you might not be able to see otherwise, because they can see how the code is interacting with its environment - other microservices, other APIs that are outside of your system, the actual container stack it might be running in. Those types of testing can see more of how the environment is affecting the security of the program. 

Nic Fillingham: So how should companies assign ownership of secure code across the software development lifecycle? 

Chris Wysopal: This is one that I think makes a huge difference, and it's assigning the ownership of the security of the code to the people who are writing the code. When you have a finding, whether that was found by an external pen tester, some static analysis tool, it should get routed to the people who own that piece of code. Ideally, they're the ones doing the testing, right? They're running the testing. And so it's close to the time when that code was produced. 

Chris Wysopal: You know, I see it all the time where, you know, you wait till the end of a many-month project, and you find all these vulnerabilities. And then you go to fix them, and the people who wrote the code aren't even on the team anymore. Or I've seen places where they don't want to take up the time of their prize developers who wrote the code, and so they outsource the fixing. The problem I see with that is it takes longer to fix. It might be cheaper, actually. But the real downfall of that kind of model is the people who created the vulnerabilities in the first place don't get to learn from that feedback loop. 

Natalia Godyla: How can somebody who's in maybe the security organization work with that group to raise the level of priority and get that part of their remit? 

Chris Wysopal: The way I've seen it work well - and this is something we actually do at Veracode, 'cause of course, we're a software company - is to have a small, core, you know, application security team that has a lot of expertise. Maybe a ratio of 1 to 100 developers is something that I've seen work. And that team helps each individual scrum team get up to speed, ideally educates the team with training, and even goes above and beyond that and creates a security champion within that team to try to scale out the expertise. And they can help out - you know, when there are vulnerabilities found, they can pair up with the developer that's fixing them. 

Nic Fillingham: Do you have guidance for someone that's at the very beginning of that journey? 

Chris Wysopal: I definitely recommend not using C and C++ if you can help it. But let's just say you've picked - you know, you've picked Node.js or you've picked Python or whatever you've picked - educate yourself on the classes and frameworks out there that will help you do things like input validation and output encoding and session management and secure authorization and secure two-factor authentication. And a lot of these might be libraries you're going to be selecting. And the other thing that's really important is to build security tooling right into your pipeline as you're building your code, so you're not sort of inadvertently building up all this security debt, technical debt. Security debt is a business decision, and it can help you get to market faster. But it should be something that you make a conscious decision about, which means get your security tooling implemented in your SDLC early and fix things early. You'll find that they're easy to fix as it becomes part of your process. It really needs to just be part of your normal development process. And then you choose to say, well, what's the policy of the risk I'm willing to tolerate in production? What's the risk I'm willing to tolerate shipping to my customers? Because you're not going to fix every low-severity flaw. 
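Two of the defenses Chris lists - input validation and output encoding - can be sketched with only the Python standard library. The username format below is a made-up example policy; in practice you'd lean on your web framework's validators and its templating engine's auto-escaping rather than hand-rolling these.

```python
import html
import re

# Allow-list validation: accept only the shape you expect (here, a
# hypothetical 3-20 character username of letters, digits, underscore).
USERNAME_RE = re.compile(r"[A-Za-z0-9_]{3,20}")

def validate_username(name: str) -> bool:
    """Reject anything that doesn't match the allowed pattern exactly."""
    return USERNAME_RE.fullmatch(name) is not None

def render_comment(comment: str) -> str:
    """Output encoding: escape untrusted text before it reaches HTML."""
    return f"<p>{html.escape(comment)}</p>"

print(validate_username("alice_99"))                   # True
print(validate_username("<script>alert(1)</script>"))  # False
print(render_comment("<script>alert(1)</script>"))
```

The pairing matters: validation narrows what gets in, and encoding makes whatever still gets through inert in the context where it's rendered.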

Natalia Godyla: For when something does happen, how can companies prepare themselves to detect and respond to anything that - any incidents related to insecure software? 

Chris Wysopal: You really need to do vendor risk management. You need to think about, what am I using the software for? How risky would vulnerabilities in this software be to my business? And then if it's considered something that would be risky, you really need to push back on your vendor and say, you know, did you build this securely? What kind of encryption are you using? Do you have two-factor authentication? Will it integrate with my single sign-on software? Can I operate it securely? So there's those basic questions about the features - that it's being delivered securely. But then I think you need to really get beyond that if it's critical software and say, are your developers background-checked? Do you do security testing on your software? What third-party libraries are you using that are open-source, and how do you manage the risks in those? How do you deliver me patches when the next Heartbleed happens? A lot of this today is still questionnaires, where you're asking the vendor to talk about their process for creating the software. But I'm starting to see, you know, a need for evidence of either software testing that you're performing yourself or third-party software testing. Manual pen tests are very popular for this. 

Nic Fillingham: So what are some of the more recent innovations in application security that you would love to educate people on? And how can companies up-level their tools and processes in this way? 

Chris Wysopal: So we're starting to see, you know, security become code just like infrastructure became code, and it's really, truly become part of the development process. The cool thing about that is you can clone a development repo and get all of that security testing and configuration as part of your pipeline just by cloning a repo. And so I really think that that's the future, that security testing just becomes, you know, a first-class part of development tooling. The other thing that I've seen that's really cool is just shifting things that are traditionally operational security and vulnerability management to the left. 

Natalia Godyla: Well, thank you so much, Chris, for joining us today on the show. 

Chris Wysopal: Oh, thanks for having me. It's been a great conversation. 

Natalia Godyla: Well, we had a great time unlocking insights into security, from research to artificial intelligence. Keep an eye out for our next episode. 

Nic Fillingham: And don't forget to tweet us @msftsecurity or email us with topics you'd like to hear on a future episode. Until then, stay safe. 

Natalia Godyla: Stay secure.