The BlueHat Podcast
Ep 41 | 11.13.24

BlueHat 2024 Day 1 Keynote: Chris Wysopal AKA Weld Pond

Transcript

Nic Fillingham: Since 2005 BlueHat has been where the security research community and Microsoft come together as peers.

Wendy Zenone: To debate and discuss, share and challenge, celebrate and learn.

Nic Fillingham: On the BlueHat podcast join me, Nic Fillingham.

Wendy Zenone: And me, Wendy Zenone, for conversations with researchers, responders, and industry leaders both inside and outside of Microsoft.

Nic Fillingham: Working to secure the planet's technology and create a safer world for all.

Wendy Zenone: And now on with the "BlueHat Podcast."

Nic Fillingham: Hello, and welcome to a special episode of the "BlueHat Podcast." Hello, Wendy.

Wendy Zenone: Hi, Nic. How are you?

Nic Fillingham: I'm a little tired. I'm a little mentally hungover. You know, in the sense that BlueHat 2024 finished yesterday. So we've just come off three days of incredible conversations, sessions, keynotes, villages, hallway chat, collecting of enamel pins, swapping of stickers. There was probably some karaoke and interpretive dance somewhere in there as well. But yeah. I feel bad, Wendy, because you couldn't make it and I, you know -- it would have been great to have you there obviously as part of the team and part of the "BlueHat Podcast" crew. But yeah. You weren't there. You were missed. I'm sorry you weren't there.

Wendy Zenone: I would have loved to be there, but I loved working with everyone to help plan and I'm excited to get to revisit some of these presentations after the fact, especially the keynotes, because we didn't stream anything so I wasn't able to see it. So this is going to be really great for me.

Nic Fillingham: Yeah. And so today's episode is the audio from the BlueHat 2024 day one keynote, which was given by none other than Chris Wysopal, AKA Weld Pond, who is one of the co-founders of Veracode and also one of the co-founders of the L0pht, spelled L-0-P-H-T, who were incredibly significant and influential in the very, very early days of this whole idea of vulnerability discovery and hacking and disclosure and responsible sharing of information with vendors and impacted people. And presented to the Senate, and I won't give it away. Oh, my gosh. Just knocked my water bottle over.

Wendy Zenone: You're so excited. So excited.

Nic Fillingham: I'm not going to give it away, Wendy. I think we should just roll audio and let people listen to this fantastic keynote where Chris walks us through all that history and ties it all together magnificently with sort of the current state of where we are and what's coming for the future of this space. So maybe, Wendy, without further ado, shall we listen to the day one BlueHat 2024 keynote audio?

Wendy Zenone: Yes. Let's do it. I'm excited. [ Applause ]

Chris Wysopal: It's great to be here. This is my third BlueHat. I was actually at the first BlueHat in 2005. I ran into Michael Howard out there and he reminded me that he has a picture of me with H.D. Moore and Dan Kaminsky from the first BlueHat, and it sort of made me redouble my reason for wanting to do this topic, which is: before we lose the history, and more of our colleagues, I want to talk about how we got to this point where hackers are collaborating with product companies like Microsoft. You know, a lot of us work for Microsoft or work for other companies that build software. A lot of us are independent researchers. That collaboration is really here today, but it wasn't a foregone conclusion that this was going to happen. This collaboration actually came from somewhere, and in the '90s I would say it was pretty divisive, as Tom said. You know, there were some big problems with Microsoft products around 2000, when they were starting to get popular, and Microsoft frankly needed help, but they wouldn't have had anyone to call that they could trust unless the communities had come together over time.

And I like to start off with this picture because it's just a point in time. I'm going to go back further than this, but this was in 1998, when me and my colleagues from the L0pht testified before the U.S. Senate -- just this moment in time where hackers were trusted to have a voice and an opinion about government computer security. This was the first hearing on government computer security. They had representatives from the General Accounting Office, who were auditing departments. They had representatives from think tanks, like Peter Neumann from SRI. And then they had a bunch of hackers who found vulnerabilities and were doing full disclosure at the time. That's me there, if you were trying to figure it out. A few hair changes have happened to the group since. I think I have the third-longest hair there. Not quite sure.

But, you know, I always like to ask: how did I get to this point? How did I get into this group that was invited? And I think the simple answer is we made trouble. Right? We made trouble by doing full disclosure, by talking about problems, by taking a consumer advocacy approach and saying, you know, "We don't think we should live in this world with vulnerable software and something has to change." Right? We're still living in the world of vulnerable software. It's gotten a lot better, but it was really bad back in the '90s. And we made enough trouble that the media paid attention to us and listened to us, such that we got into the press like the "New York Times" and the "Washington Post," and when it came time to discuss computer security at one of the highest government levels they said, "We need to hear from these guys. We need to hear their opinion on this." And this really showed that hackers had a voice that could help. And that was a big moment in starting to build that trust, where companies and the government could start to trust what we were saying -- that we weren't just doing this to wreak havoc and cause problems for people and give information to worm writers. Right? That wasn't the intention.

So now I'm going to go way back to the beginning, when I started on my computer security journey, which was really in the early '90s. This was what we had: the orange book from the DOD, talking about how you assess a system's security design and look for all the security features.
This book doesn't talk about bugs at all. There's no such thing in it. The buffer overflow wasn't even really public knowledge at this time -- it's an open question whether the NSA knew about buffer overflows and exploiting them -- but publicly that didn't exist. That was something that hackers figured out. Aleph One, Elias Levy, published a paper, I think it was '95, on how to exploit buffer overflows. So hackers actually contributed hugely to the idea of how to secure systems by saying, "Well, this is something that you're not considering. And this is something that bypasses all these security features and design."

And then the other big entity we had back then was CERT, which was formed after the Morris worm. The idea for CERT was that it would help coordinate when a vulnerability was found in Unix or in some internet protocol, and then they started doing products too. And if you found a vulnerability in a product in the early '90s, you'd send it to CERT and it would just disappear. Right? It was one-way, sort of like when you're dealing with someone in the government who has clearance and you don't. You tell them stuff and they don't tell you anything back. That's how vulnerability disclosure was in the beginning. They're like, "This is great. Does anyone else know about this bug? No? Okay. We'll make sure the vendor knows that no one else knows about this bug except you." What do you think that meant? It meant that there was no coordination. The bug didn't necessarily get fixed. If it did get fixed, it could have been silently fixed. How would anyone even know they needed to patch? So this is how vulnerability disclosure was before full disclosure became a thing and we said, "Hey, wait a minute. People should know about this problem, because that's the way to actually get it fixed."

And then another formative paper. Everyone should read this paper, because it was really formative to me. This was the first time that I saw adversarial, offensive security really documented: improving the security of your site by breaking into it. Dan Farmer, who was at Sun at the time, and Wietse Venema, who was an academic and later worked at IBM, basically collected all the ways they saw hackers breaching systems. They said, "Let's collect all of these things." A misconfiguration here because the default configuration wasn't changed, misused trust there, something that wasn't patched -- let's collect all those things and then try them. Right? The beginning of network penetration testing and host penetration testing. And they documented this approach back in, I think, 1991.

And then came hackers writing tools, sort of in the early and mid '90s. And these tools could be used for offensive reasons. Right? You could break into a system with them. But we all know that you can break into a system in order to secure it. Right? And so Alec Muffett wrote Crack because he wanted to see if there were weak passwords people were using. And people started using it to secure their systems. Right? If you were a sys administrator you said, "Yeah, I don't want my users using guessable passwords. I want to use Crack to check that." There were no password quality mechanisms back then. I want to see if they're using guessable passwords, known passwords.
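To make that concrete, here is a minimal sketch of the guess-and-compare auditing that Crack pioneered. It's an illustration only, under assumptions: real Crack targeted crypt(3) DES hashes from /etc/passwd, while this sketch uses a toy salted SHA-256 scheme and made-up entries so it runs anywhere.

    # A sketch of Crack-style password auditing: hash each candidate the way
    # the system does and compare against the stored hash. Real Crack targeted
    # crypt(3) DES hashes; SHA-256 stands in here so the sketch runs anywhere.
    import hashlib

    def hash_password(password: str, salt: str) -> str:
        # Stand-in for crypt(3): a salted one-way hash of the candidate.
        return hashlib.sha256((salt + password).encode()).hexdigest()

    def audit(entries: dict, wordlist: list) -> dict:
        # entries maps username -> (salt, stored_hash); returns users whose
        # passwords appear in the wordlist, with the guess that matched.
        weak = {}
        for user, (salt, stored) in entries.items():
            for guess in wordlist:
                if hash_password(guess, salt) == stored:
                    weak[user] = guess
                    break
        return weak

    if __name__ == "__main__":
        entries = {"alice": ("x1", hash_password("password", "x1")),
                   "bob": ("y2", hash_password("tr0ub4dor&3", "y2"))}
        print(audit(entries, ["123456", "password", "letmein"]))  # flags alice only

The design point is that the auditor never reverses the hash; it simply hashes each candidate the same way the system does and compares, which is exactly why guessable passwords were, and are, the weakness.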
And people who actually did this, like Randal Schwartz at Intel -- he got fired and charged with a felony for doing this on a system that he was the administrator of. He was the administrator. And he thought this was a way to secure the system, and they said, "Oh, you broke the policy." Right? You can't discover someone else's password. And so we had a clash there. The policy prevented you from doing the thing that would secure the system. Today, not allowing guessable passwords or known passwords is part of a standard. Right? That's something that we try to do. And we try to do it with password quality tests instead of cracking, but back then you had to crack. That was the only way to do it.

Dan Farmer and Wietse Venema took their paper and all these different attacks that they were seeing, scripted them up, and built SATAN. They released it as a tool to hack into systems in order to secure them. Well, this didn't go over very well at SGI, Silicon Graphics, where Dan Farmer worked at the time. And he got fired. He got fired for releasing a tool that is essentially a multi-billion-dollar industry today. Right? We all use attack surface management and vulnerability scanning. It's a multi-billion-dollar industry. But when it was first introduced, it was seen as something that was bad. Right? By these major companies like SGI.

And then there's a tool which has actually been flagged a bit as malware: netcat, the network Swiss army knife, which really lets you set up connections. Hobbit wrote it in '96. I ported it to Windows in '97 and I found three vulnerabilities just because I had a new tool and could look at sockets and connections. And one of them was pretty bad. On Windows NT, as a user process, you could bind to a port that the admin had bound, like port 80 or port 25, and bind in front of it -- if you bound later, it let you see the traffic afterwards. You couldn't really find that unless you had a tool that let you explore these things. (There's a small sketch of this rebinding pattern below.) So, you know, I look at these tools as things that were absolutely necessary to get security to where it is today, but at the time they were pretty scary.

And then hackers started writing commercial software. Right? They took the ideas of this freeware software and said, "Let's write commercial software." ISS was founded by Chris Klaus. He hung out on #hack, went to all the different forums, read Bugtraq, and started collecting all these attacks and making a commercial product to do the same thing that Dan Farmer got fired for. At the L0pht, we took our research into weak passwords and weak password storage, like the LANMAN hashes in Windows NT, and wrote a product so people could actually audit the passwords on their systems. And that's our little banner ad down there, with the "sniff, crack faster." We actually wrote a sniffer that could sniff the SMB transactions, and I think it was NTLMv1 that was very weak -- if you sniffed it, you could actually crack the challenge-response.

And then of course we had hacker information resources, and I think this is where the government security people and the tech vendor security people first started to come together with us. Bugtraq was the place you disclosed vulnerabilities. You talked about these vulnerabilities. And it lasted from, I think, '94 to '99 or so. And I started to see some of the Microsoft people show up on this list.
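To make that netcat-era rebinding issue concrete: a minimal sketch of the class of bind behavior being described. This is an assumption-laden illustration, not the original finding -- the port number is arbitrary, and modern Windows stacks mitigate the attack, so expect this to fail or behave differently today.

    # A sketch of the rebinding pattern: a second, unprivileged socket binds
    # the same port an admin service already holds. SO_REUSEADDR is the
    # permissive option that made this possible on old NT stacks; modern
    # Windows mitigates it (services can request an exclusive bind), so
    # this is illustrative only.
    import socket

    PORT = 8080  # arbitrary stand-in for a service port like 80 or 25

    def bind_in_front(port: int) -> socket.socket:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind(("0.0.0.0", port))  # second bind on a port already in use
        s.listen(1)                # new connections may now land here
        return s

    # Defensive counterpart on Windows: the legitimate owner binds exclusively,
    # so nothing can bind in front of it:
    #     s.setsockopt(socket.SOL_SOCKET, socket.SO_EXCLUSIVEADDRUSE, 1)

The defensive lesson stuck: owners of well-known ports now bind exclusively so nothing can bind in front of them.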
Paul Leach, who was one of the architects of NT and of the SMB protocol on the internet, was there. Michael Howard and others started to communicate there. And the MSRC, when it showed up in '98, started to actually interact with the people posting things on Bugtraq. And then of course DEF CON -- we started to see it wasn't just a hacker conference. By the time '97 rolled around, it was starting to become a professional conference. So we had the professionals coming to the places that hackers had set up, like Bugtraq and DEF CON. The professionals were coming to learn from what hackers were doing. So much so that Black Hat was spun off from DEF CON in 1997 and became a separate conference, because I think Jeff Moss decided that he could get more commercial people to come and pay, you know, $1,500 instead of $150 for a professional conference. It was a very good business idea. But it also brought these communities much more together, because you had speakers from both communities attending.

And at the very first Black Hat we actually had a sit-down between the L0pht and Microsoft. And this is a little bit of history that I don't want to get lost. This picture was actually in a magazine called "EE Times," and the writer did an interview with Mudge and Hobbit afterwards. Mudge and Hobbit were part of the L0pht. Paul Leach from Microsoft was there. Yobie Benjamin was there -- he kind of brokered the deal. He was like, "Hey, I think we can get together," because he was a hacker but also worked at Cambridge Technology Partners as a consultant. And one of the Windows NT marketing executives, Carl [inaudible 00:16:34], was there. And they had a discussion of Hobbit's paper and Hobbit's presentation, where he basically talked about all the weaknesses in the protocol, which was called CIFS at the time -- basically SMB that would work over the internet, with TCP/IP addressing -- things like poor session management, a lot of information disclosure, and things that could be improved.

Paul Leach wanted to find out how we were doing this and why we were doing this. Right? Because we didn't work for Microsoft. We were just doing this because we thought, hey, if people are using this protocol that has these weaknesses, people should know about it. Right? People should know about the weaknesses in it. And that was a new concept. Right? This was a new concept for these Microsoft people -- to come and sort of understand why we were doing what we were doing, and that we didn't have malicious intent.

We actually coined the term gray hat at the L0pht, because we didn't want to be seen as consultants. Right? People who worked for an IBM and went and did consulting and helped you secure your system. We also didn't want to be seen as black hats -- someone who is just writing exploits and tools to break into systems. We were writing tools that could be used for offense or defense. And we thought that was necessary in order to get the information out there to everybody. Right? Like open source. ISS, as a commercial tool, can be used for defense, but not everyone can afford it. Not everyone has access to it. A tool like SATAN, everyone can look at, everyone can build upon. So the open source aspect, we thought, really helped with getting things to be secure.

And then we got this beautiful cover on the "EE Times," a picture of us in our loft, which was a physical space.
That was one of the ways the L0pht was different from a lot of the other hacker groups at the time. We actually had a physical space where we set up a networking lab, where we could load software and sniff software, and we had debuggers all ready to go. And people could come and actually do vulnerability research on our systems, because back then having a setup of computers -- having a domain controller, having these different things available -- was not something that the average person could have at home. Computing is a lot cheaper now than it was back in 1997. And so we talked with "EE Times" about how we were doing this, how we were setting this up, and publicizing how you do vulnerability research at scale. And I decided to take a riff off of Wietse and Dan Farmer and say, you know, "We're not securing our site by breaking into it. We're securing our product by breaking into it." And that was the way that we started to think about how you secure software. Right? You break into it.

And so we released a lot of advisories, and it wasn't until 2000 that we actually did coordinated disclosure. Before that we released full disclosure, and vendors like Microsoft found out on Bugtraq just like everyone else. But as time went on we didn't ship exploit code, because we started to think that exploit code was going a little too far. Right? It made it too easy for people to exploit these problems. And, you know, this is Mudge, one of my colleagues from the L0pht, and at the time his quote was, "We felt that users had the right to know about these vulnerabilities so they could protect themselves, especially when vendors were not taking action." And there was a time when a lot of vendors didn't take action. Microsoft was one of the first to take action, but there were times that other vendors -- I don't want to mention any names -- wouldn't take action until enough of their customers got upset. Right? The first question was, "Do any of our customers know about this?" And you're like, "No. No one else knows about it." And when that question gets asked, you know nothing's going to happen. Right? And when a lot of customers knew about something, it did get fixed rather quickly.

So I had my own personal journey along with the L0pht, where I went from full disclosure including exploit code to realizing -- after I started going to conferences, talking to customers, really engaging with the people who were trying to defend their networks and their systems -- that there had to be a compromise. Right? We want to get the word out there that these problems exist and that this is a systemic problem. We don't want everything to be a silent patch that no one knows about, or even knows to fix. And that was really the beginning of the thinking: we need a process that can benefit the community at large of computer users and defenders, and also researchers, and also the vendors.

And this really changed in 1998. I mentioned, you know, people were on Bugtraq discussing these issues. Vendors started to realize that they were going to have to fix things. And then, I think it was in '98, Scott Culp -- who I think was the first director of the MSRC.
He sent a note to me at the L0pht and said, "You know, if you send us the information first, we will fix it and let you know when we've fixed it, and then you can release your advisory and your proof of concept, your exploit code. But let us have a chance to do this." And it took us a while mulling this over, but Microsoft actually reached out to us and said, "You should do this. You should think about changing the way you're behaving."

And Microsoft started to acknowledge researchers. I actually looked this up. I said, "Who was the first person acknowledged by the MSRC for reporting a vulnerability?" And it was Georgi Guninski. Some of you old-timers might recognize the name. He was pretty prolific in the late '90s. In September '98 he actually got an acknowledgment in the security bulletin that he found this problem and reported it. And I have to tell you, that went a long way. If you worked at a consulting company and you wanted to put out an advisory talking about how, say, Microsoft's products were wrong -- I can remember later, when I worked at @stake, convincing our CEO: we should do this; Microsoft will acknowledge us; we'll be in the security bulletin as the people who found this problem. It made a huge difference to have that acknowledgment happen. And I think the first time it happened was in 1998. So that was a big step forward that the vendors took.

But there was a need for standardization. It really was kind of the wild west out there. So Jeff Forristal, who also went by Rain Forest Puppy -- he built his own SATAN called Whisker, which actually scanned web applications for vulnerabilities -- was the first one to codify this and write it down, in June of 2000. And it was really one-sided, because he didn't collaborate with vendors. He just thought from a researcher's perspective: "This is what I'm going to send the company before I send them the information. I'm going to send them my policy and say this is how I am willing to interact. These are my expectations. These are the things I will do if you do certain things." And it was the first time that someone thought through how this could actually work, because he was disclosing a lot of things to vendors and he just wanted it to go more smoothly. And then he asked for some collaboration. I helped him with version 2.0, which came out in October 2000. And this really was the thing that people started to follow, and started to send with their disclosures to vendors.

And then in October 2001, Scott Culp wrote this -- I'm not going to call it a diatribe, but he seems like he was upset. It starts off: Code Red. Lion. sadmind. Ramen. Nimda. This was a bad year for Microsoft. Right? IIS 5. I mean, these things were really, really bad news, and you were patching your IIS on a monthly basis. And this was before Patch Tuesday. Right? This was at random times, because of the worms out there. And I think he was kind of fed up that exploit code was being released and it made it easy for people to write worms. And, you know, he was right about that. But one thing this article does is use the word responsible. He says, "We can and should discuss security vulnerabilities, but we should be smart, prudent, and responsible in the way we do it."
And if you read more in there, the responsibility was really that people finding these bugs need to be responsible with the information. And at the time it rubbed the researcher community the wrong way, because we're like, "We have to be responsible? You guys actually shipped this software and people paid for it." Right? Where's the responsibility on the vendor side to make sure this doesn't happen? All the responsibility had to be on the researcher, on the finder side. And that rubbed people the wrong way, and this word responsible became loaded. That's why we don't use it today -- we use coordinated.

But I used "responsible" back then. When I worked with Steve Christey in 2002, we said, "Let's take what Rain Forest Puppy had done and let's talk to vendors. Let's get the vendor side of the equation and write a disclosure policy and process that is balanced between researchers and vendors." And we didn't want to just write it like RFP did and publish it ourselves. We wanted it to have more impact by being sanctioned by some sort of body. And we looked, and there really wasn't a neutral standards body we could access at the time besides the IETF. So we said, "Let's consult with some different vendors." We consulted with Mary Ann Davidson at Oracle. We consulted with CERT. We asked, "How do you process this? How do you handle this with your vendors?" And we took some of the things from RFPolicy and we wrote this IETF RFC and submitted it.

And the IETF just didn't want to touch this at all. They were like, "What are you doing? This is too contentious, and it's business-process oriented. This isn't really a technical process." And our argument was: this is the safety of the internet. What is the IETF, the Internet Engineering Task Force, doing if they can't come up with a process that will help with the safety of the internet? And they just punted it. Right? They would not approve it. And you can still go look it up. It's out there. But it didn't get approved. And because it didn't get approved, it was sort of like, "Hey, maybe this isn't so good." Right? So we actually kind of failed on this one. If we had just published it on our own, it might have been better.

And we called it a responsible vulnerability disclosure process, which I think was a big mistake, because then the researcher community was like, "Responsible? Isn't that the thing that Microsoft wants finders to be? Responsible? I don't like your policy either." So that's why today we use the word coordinated instead of responsible -- that's a legacy of that. But something good came out of this, and that was the Organization for Internet Safety, which only lasted a couple of years. This was something where Microsoft and Scott Culp actually reached out to me and said, "You know, you came up with that IETF draft. What if we create an organization, and we get more security consulting companies and security product companies teaming up with more vendors, and we get behind what you did in that responsible disclosure process?" And so the Organization for Internet Safety was formed.
It was made up of security companies like @stake, BindView, Foundstone, Guardent, and ISS, and then software companies like Caldera International, which was a Linux distribution and a Unix distribution, and Microsoft, Oracle, and SGI. Symantec and Network Associates, I guess, are both kind of software companies and security companies. And we basically came up with a standard and we all agreed to use it. So the companies listed there all agreed to use this vulnerability disclosure process -- people discovering vulnerabilities and people fixing problems, all using this one standard. And this one kind of stuck, because when the ISO standard was finally developed -- that work started in 2008, I think -- Katie Moussouris, a former @stake employee who came over to Microsoft, was the representative for Microsoft on that ISO standard. And today we actually have an ISO standard around vulnerability disclosure. But you can see it took a good ten years to get there, and there were some bumps and bruises along the way.

Now, I think this process of coming together around disclosure was one of the most important processes bringing hackers and vendors together, so that we could really work together and hackers could be trusted by vendors. And this led to the professionalization of hacking. Those security boutiques -- @stake, Guardent, and Foundstone -- were basically full of people from the vulnerability research community. They were full of hackers; maybe some in this room worked at those companies. And having a standard process that everyone agreed on was an important part of legitimizing what we were doing. Hacking wasn't just an independent thing anymore. It wasn't an academic thing. It wasn't a hobby. It was a profession. And you could actually earn money doing the things that people used to do independently. You could earn money doing vulnerability research. You could earn money doing penetration testing.

And this journey is documented in this paper by Matt Goerzen and Gabriella Coleman. You might recognize Gabriella Coleman's name. She wrote sort of the definitive book on Anonymous; it came out in 2015. She's an anthropologist. She's a professor at Harvard. And I was interviewed for this paper. It's very good. It talks a lot about this journey that hackers took to become professionals.

Now, we had our own flavor of this at the L0pht, where we said, "Hey, we can become professionals. We can make this our day job." And, you know, we did sell some software. We sold L0phtCrack. We did do some penetration testing as consultants. We did some work for the SEC. We did some work for some consulting companies, trying to hack their networks. But it just wasn't paying all the bills. Right? We couldn't all be full time, and we struggled with our business model. Frankly, we were all techno nerds who didn't have any business experience. No one had founded a company before. And we just didn't know how to build the sales and marketing, the go-to-market, the business structure that could actually turn this into a business. So what did we do? Because we couldn't start a company ourselves, we decided to join one. We decided to join up with this company that was still in stealth mode, called @stake, which some security consultants had started.
It was kind of a spin-out from Cambridge Technology Partners, which was a big consulting company that had a tiny little security practice. And we joined up, and this of course was the big news when we launched the company in January of 2000: these guys from the L0pht who testified before Congress and used hacker names -- we were still using our hacker names at the time, working at the company. And so this is how the press took it. Right? We're using good hackers to battle bad hackers. Which isn't necessarily wrong. I don't know if we were a scraggly band. We did have long hair. But this was one of the first shots across the bow of the professionalization of hackers.

And, you know, in the hacker community there were a lot of people who didn't like this. They didn't like that we were now available for hire by the very companies that we had been exposing weaknesses in. We were called sellouts. Right? It was like when that small band signs the big record deal. You're a sellout now. You're under the control of the big record label. You can't do all the things you would have done before. And to some degree that's true. Our dream was to do what we loved as a full-time job, and I think we all know that that dream isn't perfect. There are limitations and constraints when you start doing things and charging people a couple hundred dollars an hour. They have certain expectations.

So we launched as a security consultancy -- and a different kind of consultancy. Up until this point in time, if you were a security consultant, say in 2000, you probably worked at one of the big accounting firms and you were doing checklists against compliance criteria. Right? You weren't doing pen testing. You weren't doing code reviews. You weren't doing threat modeling. Or you were a security consultant who worked for a product company, and you figured out how to sell more of your yellow boxes and configure those yellow boxes. We all know that there are security people who work at product companies. This was something different. We didn't sell accounting or compliance. We didn't sell products. We were going to look at your network or your products the way an attacker would, and tell you how to fix them. And so we did our own vulnerability research, so we could keep publishing and figure out new vulnerabilities and the new types of technology that were coming along, so we knew how to test them. Things like appliances, things like WiFi -- all this is pre-cloud, so obviously when cloud came along we had a lot of things to look at. We built our own attack and testing tools, so we had capabilities that no one else had. And of course we secured applications by breaking into them. And then Guardent and Foundstone soon followed us, doing something very similar, hiring the same exact kind of vulnerability research people.

And then in 2002, Bill Gates penned the Trustworthy Computing memo. You know, the worms kept coming. The vulnerabilities didn't get fixed fast enough. There were too many vulnerabilities to fix fast enough. Something had to drastically change, and Microsoft was really the first company that said, "We need to make a drastic change. We need to stop everyone from coding. Everyone's got to read 'Writing Secure Code' by David LeBlanc and Michael Howard," which I think was published the year before this.
And then they're going to write secure code. Right? Well, it's definitely not that simple, because we still have these problems today. You can't just read a book on secure code and know how to do this. And you needed more than just knowing what code not to write. That's good to know, but really you need a process. Right? You need a process to secure the application as it's being built. And I think soon after stopping all the development and having everyone read the book, they realized this wasn't enough. They needed outside help, like Tom was saying. They needed people who had figured out how to make a process around building secure software. And they needed people from these consultancies to come in and work alongside the developers -- to teach the developers how to do this by doing it, and to secure the product at the same time. Securing the product is actually the most important part, and then of course people learn by doing along the way.

A funny story is that Foundstone actually got in first, before @stake. Foundstone was here. They were helping Microsoft. But they got fired. And they got fired because they wouldn't stop wearing their Foundstone polo shirts that said "Foundstone" across them, which made it very clear that there were all these Foundstone people securing the software, and Microsoft wanted a more collaborative look. They didn't want "Foundstone is coming in and doing this." And luckily for @stake, Foundstone got fired and Microsoft said, "We need one of these other boutiques to come in." And @stake then got the deal to help secure IIS 6, which is really the first product that went through this process. It was completely rebuilt from scratch -- a complete new code base. Andrew Cushman was the general manager on that product, and I think he later worked at the MSRC after that.

And so we pulled together our best team to come here and help assess IIS 6. It was myself and Christien Rioux, who was DilDog -- he wrote Back Orifice, he was also from the L0pht, and he's the co-founder of Veracode with me. Window Snyder and Frank Swiderski, who would later be hired by Microsoft after this engagement; they wrote "Threat Modeling," the Microsoft threat modeling book, based on the work that we had done -- we had refined a lot of the processes during this engagement. And Dave Aitel, who founded Immunity after leaving @stake.

And so these are some of the things we did that were process-oriented. I think the threat modeling was a huge one -- showing how to do data flow diagrams and threat modeling with the architects and the designers of the software. We had a learning lunch where we showed how to exploit heap overflows, because people just didn't really understand how that was possible, and I think that helped developers see that these mistakes actually lead to this. Another really important thing: I think this is the first project where the software was fuzzed. At the time, Dave Aitel was writing SPIKE, which was the first openly available open source fuzzer. I'm sure the NSA had something like this. Fault injection was something that QA people had been doing for a long time, but a fuzzer specifically tries things that are going to cause security problems -- long strings, delimiters, things like that. Dave put that together and we found bugs in IIS 6 with the fuzzer. So that was the first time that was used.
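As a rough illustration of what he's describing -- not SPIKE itself, which was a block-based network-protocol fuzzer written in C -- here is a minimal sketch of the core fuzzing idea: build inputs from security-relevant patterns and watch the target for crashes. The parse_request target and the patterns are hypothetical stand-ins.

    # A sketch of the core fuzzing idea: feed a target inputs built from
    # security-relevant patterns (long strings, delimiters, format tokens)
    # and record which ones make it fail. parse_request is a hypothetical
    # stand-in; a real harness would drive a server or parser and monitor it.
    import itertools

    PATTERNS = [b"A" * 5000, b"%n%s%x" * 100, b"\x00" * 64,
                b"../" * 200, b";|&`$(){}" * 50, b"\r\n" * 1000]

    def parse_request(data: bytes) -> None:
        # Toy target that "crashes" on oversized input.
        if len(data) > 4096:
            raise MemoryError("simulated overflow on oversized input")

    def fuzz(target, seeds):
        failures = []
        for combo in itertools.combinations(seeds, 2):
            case = b"GET /" + b"".join(combo) + b" HTTP/1.0\r\n\r\n"
            try:
                target(case)
            except Exception as exc:   # each crash is a case worth triaging
                failures.append((case[:32], exc))
        return failures

    if __name__ == "__main__":
        for prefix, exc in fuzz(parse_request, PATTERNS):
            print(prefix, "->", exc)

The design choice that distinguishes a fuzzer from generic fault injection is visible in PATTERNS: the inputs aren't random, they are shaped to trigger memory corruption, format-string, and parsing bugs.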
I was using the Sysinternals Process Explorer to show how you could find runtime attack surface that sometimes the developers didn't even know about, because they were just linking in a library that ran in their process, and it was opening a named pipe, and they didn't even know that that named pipe needed to be secured. So, from a hacker's viewpoint, you look at the runtime environment. You don't just think about the code or the design. And you find things like that. And maybe we helped with Mark Russinovich's company actually getting bought by Microsoft later, because they liked the tool so much.

But we really helped come up with that SDLC, which was later codified in a book by, I think, Steve Lipner and others, around how you secure software while you're developing it. So I feel like we definitely contributed to that. And I think Michael Howard gave us a credit in the second edition of "Writing Secure Code" around the threat modeling process that he talked about in the book -- he said he learned some stuff from @stake. And so, around 2003, after these engagements happened -- we did other products too after that, so we must have done a good job -- it really kind of changed the landscape at Microsoft. Security testing became a requirement as part of building. Companies started to have product security response teams. People started to look at the Microsoft SDLC book as a model. It later became an ISO standard. And then bug bounties started to crop up. It's a little-known fact that Netscape -- Mozilla, back then -- had the very first bug bounty, back in 1995, but they were about the only one until the early 2000s, when other commercial companies got on board with bug bounties, which I think was also a huge step.

So I just want to close out here with how we are learning to secure products from hackers today. We learn from hackers through our vulnerability disclosure programs. Right? They send us information about a bug, and if we ask how they found the problem, we learn how they found it. We learn from academics. It used to be people like Hobbit writing papers about securing an internet protocol; now the academic community is doing this. We see a lot of papers around how to secure LLMs and transformers now, and how to trick machine learning, things like that. There's very active publishing around these problems. Academics weren't doing it back then, but they are doing it now, which is great. And there are a bazillion conferences now. It isn't just DEF CON and Black Hat. There's BlueHat, where we can all come together and work together.

So the world has really changed from what it was before, and I just wanted to let everyone know how we got here, before people start to forget the history and before more of us who were at the first BlueHat aren't here anymore. So thank you for listening to me, and I hope you have a great BlueHat. [ Applause ]

Wendy Zenone: Thank you for joining us for the "BlueHat Podcast."

Nic Fillingham: If you have feedback, topic requests, or questions about this episode --

Wendy Zenone: Please email us at bluehat@microsoft.com or message us on Twitter at msftbluehat.

Nic Fillingham: Be sure to subscribe for more conversations and insights from security researchers and responders across the industry.

Wendy Zenone: By visiting bluehatpodcast.com or wherever you get your favorite podcasts. [ Music ]