The BlueHat Podcast 8.21.24
Ep 35 | 8.21.24

Michael Howard on Secure by Design vs Secure by Default

Transcript

Nic Fillingham: Since 2005, BlueHat has been where the security research community and Microsoft come together as peers.

Wendy Zenone: To debate and discuss, share and challenge, celebrate and learn.

Nic Fillingham: On "The BlueHat Podcast", join me, Nic Fillingham.

Wendy Zenone: And me, Wendy Zenone, for conversations with researchers, responders, and industry leaders, both inside and outside of Microsoft.

Nic Fillingham: Working to secure the planet's technology and create a safer world for all.

Wendy Zenone: And now on with "The BlueHat Podcast". [ Music ]

Nic Fillingham: Welcome to "The BlueHat Podcast", Michael Howard. Michael Howard, thank you for joining us.

Michael Howard: Thank you, you're very welcome. I'm very happy to be here, thank you so much.

Nic Fillingham: We've been trying to get you on the podcast for a little while because you, and we'll touch on this a bit later, but you were at the very first BlueHat, you were a part of that very first BlueHat. This is "The BlueHat Podcast", and so naturally this is a great place to have you on and share some stories, and we will do that. It's just me today. Wendy Zenone, my co-host, is unable to join for today's episode, so you'll just hear my voice. But Michael, gosh, where do we start? How about a quick intro or reintroduction to folks that maybe don't know who you are? Who are you and what do you do here?

Michael Howard: Yeah, man. Okay, so I've been at Microsoft now for 32 years.

Nic Fillingham: 763 years.

Michael Howard: Yeah, it feels like it. So it's only 32 years, but here's the crazy thing, and I tell this to people all the time. I still feel as fresh and as excited today and energized as I did 32 years ago. Because when I first started, I was actually at Microsoft New Zealand doing support for Windows 3.X and the Microsoft C++ compiler. And I loved it back then, and I love this stuff now. So, yeah, 32 years.

Nic Fillingham: How big was the Microsoft team in New Zealand back then? There must have been a handful of people?

Michael Howard: I like the way you said that, New Zealand. Such a New Zealand way of saying things. It was, like, 10 people. It was crazy.

Nic Fillingham: 10 people?

Michael Howard: Yeah, I don't know, I was like employee number nine or something back in the day, yeah. I'm actually the longest-serving Microsoft New Zealand employee.

Nic Fillingham: Oh, well congratulations.

Michael Howard: Thank you.

Nic Fillingham: Do you get a golden Kiwi? Or what do they give you for being the longest-serving New Zealand employee?

Michael Howard: No, you get a big bottle of L&P.

Nic Fillingham: Oh, beautiful. That sounds wonderful.

Michael Howard: Yeah. And, no, actually, no, in all seriousness, I think it's interesting looking at the number of people who are in cybersecurity across Microsoft who have been around for a long time. Don't get me wrong, there's some newbies too, but for the most part, there's a lot of -- it's a very deep bench, let me put it that way.

Nic Fillingham: Got it. And you have just recently -- you're obviously still at Microsoft, but you've recently changed roles. Can you tell us a bit about where you are now and maybe a little bit of where you've come from?

Michael Howard: Yeah, that was a really hard move, I'm going to be totally honest with you. I moved from Azure Data, working on security in that team. So I was doing things like Azure SQL Database, SQL Server, Cosmos DB, PostgreSQL, and MySQL, mainly on the security engineering side of things, you know, threat modeling, coding, static analysis, dynamic analysis, root cause analysis, you know, blah, blah, blah, right? Education, the whole kind of nine yards. And, yeah, as of about three weeks ago, I moved over to John Lambert's team working in MSTIC, although what I'm doing is not really MSTIC-ish stuff, although I'll be taking a lot of stuff that I learned from the MSTIC team and sort of folding that back into Azure. But yeah, actually a fun little fact. So, oh God, this must have been about six, seven months ago, I got a message on a Saturday evening from John Lambert on Microsoft Teams, and it was literally, quite literally a one-liner: "When can you start working for me?" That was it. That was the introduction. That was a job offer, apparently.

Nic Fillingham: John does things a little differently, and he's a friend of the pod. So that's awesome.

Michael Howard: John and I have actually known each other for a long time. We worked on Windows XP Service Pack 2 together back in the day. So yeah, we go a long way back. We know each other very well.

Nic Fillingham: Got it. And then over the last 30-plus years, you've obviously done a lot of things. Very accomplished author. I think folks, a lot of folks probably know you from your books. Feel free to sort of touch on that if you would like. And then for me, really what I want to talk about is sort of your history with BlueHat and the very, very first BlueHat. But I guess before we jump down that rabbit hole, so books, you've written lots of books. You had Bill Gates write the foreword for one of them. You've got a new book out. What's happening in the world of, what do you call these? They're not instruction manuals. What are these books that you write, Michael?

Michael Howard: Well, they're books, aren't they? I mean, that's what they are.

Nic Fillingham: They are books, but I guess, like, what's the category? If I go into a fictional Barnes and Noble, I'm going to find them under computer science?

Michael Howard: I don't know. You know, I've seen them in security sections and I've seen them in computers sort of programming sections. There is a really strong focus on secure, you know, design, development, testing, all that sort of good stuff, right? That's just always been my -- my thing. So yeah, if you were to look, I don't think there would be a single section, like "app dev security", because I don't think that exists in most bookshops. But yeah, it'd be like computer science or something. But yes, I mean, those -- probably the book that sort of put me on the map is "Writing Secure Code". And there's a lot of really interesting stories around "Writing Secure Code". So, you know, David LeBlanc and I, so David was in Office and I was in Windows, and back then both teams didn't particularly like each other. And we -- we came at things from a different perspective, right? Because the joke back then was Office would basically do whatever Windows wasn't doing and vice versa. They did things a little bit differently, which is actually good from a security standpoint because we could document things from a different perspective, like, the Windows perspective and the Office perspective, but also just the general industry perspective, right? So we wrote "Writing Secure Code". And part of the reason why we wrote it was kind of funny. David and I were having a coffee together and he said, "Have you ever noticed we're getting asked the same questions time and time again?" I'm like, "Yeah." He said, "Why don't we write a book?" I'm like, "Well, what do you mean?" He said, "So let's just write a book. And that way we can just tell people to read the book. And then that way we just focus on the really hard problems." I liked that idea. I thought it was a really good rationale for writing a book. So that came out right around the time of the Windows Security Push and the SQL Server Security Push. DevDiv had already had theirs. That's a whole other story. That was the year before. That was, like, November 2001. And that's when "Writing Secure Code" had just come out. In fact, I had a meeting with Bill in December that year, and I gave Bill a copy of "Writing Secure Code". And then we'd learned so much from the various security pushes. Because it's really interesting, if you take a development team and you infuse them with more security expertise, they take their domain expertise and infuse that with security. So we ended up writing a second edition, based on a lot that we'd learned from the Windows security push and the other pushes. One of the chapters is on, like, canonicalization, internationalization, localization issues. That chapter was written almost exclusively by the internationalization team in Windows. They'd written a white paper, and I ended up taking that white paper, reading it, you know, embellishing it, adding more examples, making it, you know, human readable. You know, when engineers write white papers, they tend to be a little bit, you know, obtuse. So yeah, so we ended up writing a second edition of "Writing Secure Code". Before it came out, I was talking to David, I said, "You know what? So Bill read 'Writing Secure Code', so why don't we get Bill to do the foreword for the second edition?" And David was a little hesitant about it. I said, "Look, dude, I mean, seriously, if we don't ask him, that's like getting an answer of no back, right? Just exactly the same result." So I said, "Why don't we just ask the question anyway?" So I sent him an email.
I said, "Hey Bill, you know, 'Writing Secure Code', blah, blah, blah, I've got the second edition coming out. Would you mind, you know, please, honest, you know, we'd love it if you did the forward for the second edition." And he said yeah. So yeah, so we ended up getting the forward written by Bill for the book, which apparently is the first time he's done a forward for a book since -- and I could be wrong here, but Gordon Letwin's "Inside OS2" is the -- I believe, is the last time he'd written a forward for a book. So yeah, so "Writing Secure Code" I think was, you know, an important book. It was also, with "Writing Secure Code", it came out roughly the same time as McGraw and Viega's "Building Secure Software" came out. And ours was, like, a Windows-centric kind of book, even though there's a lot of, you know, generic stuff in there, and a lot of general stuff. And Gary and John's book was more Linux-y, open-source-y, with a strong emphasis on crypto, which was John Viega's stomping ground. So, yeah, I've written a few books, but that's probably the most well-known. Steve Lippner and I wrote "The Security Development Lifecycle" book together, the SGL book. Yeah, the latest book to come out is "Designing and Developing Secure Azure Solutions", which I did with a couple of colleagues at the time, Simone Curzi in Italy and Heinrich Gantenbein, who's in Chicago. So yeah, actually fun fact about "Designing and Developing Secure Azure Solutions". When my last book came out, whatever that one was before the Azure one, I'd actually promised my wife I would never write another book. I'm like, "Oh no, I really want to write this other book. I really, really do." So I decided to tell my wife that I was going to write this other book in front of the kids, so that way she couldn't get mad at me [laughter]. But she got mad at me anyway.

Nic Fillingham: I was going to say, that doesn't feel like a good gamble.

Michael Howard: No, it's not a good gamble. But I'm really proud of the book. It's a good book. It's a fun read. And it's really, really practical.

Nic Fillingham: So thinking back to those -- those initial books, so "Writing Secure Code" and "The Security Development Lifecycle", looking back on them now, what percentage, if any, has been superseded or is sort of no longer relevant? Is there, you know, a ton of addendums now required in 2024? I would assume not much, right?

Michael Howard: It's interesting. A lot of the principles are still true today. Like, people say to me, "Hey, I'm a developer, what should I learn?" And they think they should learn Rust, or they should learn some library or something, or they should learn crypto. My comment is always the same, and that is you should never trust input. That comment is in "Writing Secure Code" all the way back in the day, right? Twenty-something years ago. And that is still valid today. Input validation, input trust issues are still, you know, a huge cause of many kinds of issues -- I mean, many kinds of vulnerabilities. I mean, not all. I mean, if you're going to email passwords around, I mean, that's got nothing to do with, you know, with memory corruption or anything like that. But that's still valid today. One of the big sections that's of no use anymore in "Writing Secure Code", second edition, is some of the .NET security stuff. Because the code access security stuff is essentially -- I say essentially, I believe it's gone completely. It's fully deprecated. So none of that stuff, like, link demands and all that sort of thing from back in the day, it's just not valid at all. But a lot of the principles are still completely valid. I mean, there's no real reference, in fact there's no reference whatsoever, to cloud-based systems in any book except "Designing and Developing Secure Azure Solutions". The rest really don't talk about it at all because it wasn't around when -- when those books were written.
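
To make the "never trust input" principle concrete, here is a minimal sketch in C++ (an illustration added for this transcript, not code from the books; the function name and record format are made up): a parser for an untrusted, length-prefixed record that checks the attacker-controlled length field against the bytes actually received before copying anything.

```
#include <cstdint>
#include <cstring>
#include <stdexcept>
#include <vector>

// Hypothetical parser for an untrusted record:
// [4-byte length][payload bytes ...] -- byte order ignored for brevity.
std::vector<uint8_t> parseRecord(const uint8_t* data, size_t dataLen) {
    if (dataLen < sizeof(uint32_t)) {
        throw std::runtime_error("truncated header");
    }
    uint32_t claimedLen = 0;
    std::memcpy(&claimedLen, data, sizeof(claimedLen));

    // Never trust input: the sender's claimed length is validated
    // against the bytes we actually have before any copy happens.
    if (claimedLen > dataLen - sizeof(uint32_t)) {
        throw std::runtime_error("length field exceeds received data");
    }
    return std::vector<uint8_t>(data + sizeof(uint32_t),
                                data + sizeof(uint32_t) + claimedLen);
}
```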

Nic Fillingham: And I could see that as both a -- sort of a double-edged sword, right? It's good in one sense that those principles remain and, you know, a book that's 10, 15, 20 years old obviously still has a lot of relevancy. But then it also must be frustrating in some ways that those core concepts continue to need to be learned. We aren't able to, I'm using air quotes here, sort of "solve" some of them and move forward so that they're no longer relevant. Feel free to disagree with that if you don't think that's a good statement. I don't even know where to start with that one [laughter].

Michael Howard: Okay, let me start. Look, I'm going to be honest with you, and this is a horrible thing to say but I'm not going to sugarcoat it. And that is that we're hiring really smart people out of school, and in many cases out of industry, and they don't know the fundamentals. They just don't. And that -- that needs to be fixed. Look, don't get me wrong. There is obviously an important place for industry, right, to, you know, to educate its workforce on security fundamentals. And by security fundamentals, I don't mean, Hey, tell me how RSA works. I mean, you know, how would you apply RSA to mitigate specific threats, right? That's a scalable skill, not how it works. But we're just not teaching the fundamentals in school at all. And that is absolutely terrifying. So what happens is when, you know, kids come out of school, for example, and they move into big tech or move into Microsoft or whatever, we've got to go on the assumption that they essentially know nothing about what it means to, you know, design and build products that are going to be, you know, massively exposed on the internet. And that's a huge problem. So, you know, is, you know, input trust still as important today as it was 20-something years ago? Yeah, absolutely. You know, people are just not validating input for the most part, and that leads to a whole grab bag of vulnerabilities and just other stuff as well. I remember looking at a feature, this is a while ago now, and the process was running on Linux, and it was running as root. I'm like, Why is it running as root? They said, Well, what's wrong with running as root? What do you mean, what's wrong with running as root? I mean, you know, least privilege, in my book. Now, obviously I'm kind of blinded here, because it's all I work on every single day, but that's kind of important, you know? Ignore the books. It even goes back to, like, Saltzer and Schroeder, right, from the 1970s. And one of the principles they have in there is exactly that, least privilege. This is something that's been known about for a long, long, long time. And here we have people who still don't know. So, that's actually part of what I'm doing, what I will be doing, in John Lambert's team, is working on that kind of culture stuff and education, and a lot more stuff as well. But that's going to be a big part of what I do.
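
As a concrete illustration of the least-privilege point, here is a minimal sketch of the classic POSIX privilege-drop pattern (added for this transcript, assuming Linux; the helper function and its parameters are hypothetical): do the one thing that actually needs root, then permanently drop to an unprivileged account before touching any untrusted input.

```
#include <stdexcept>
#include <unistd.h>

// Hypothetical helper: permanently drop root after any privileged
// setup (e.g., binding a port below 1024) and before handling input.
void dropPrivileges(uid_t unprivUid, gid_t unprivGid) {
    if (setgid(unprivGid) != 0)  // drop the group first...
        throw std::runtime_error("setgid failed");
    if (setuid(unprivUid) != 0)  // ...then drop the user
        throw std::runtime_error("setuid failed");
    // Verify the drop is irreversible: if we can become root again,
    // the process is still effectively running as root.
    if (setuid(0) == 0)
        throw std::runtime_error("privilege drop was reversible");
}
```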

Nic Fillingham: I would love to jump here, you mentioned the culture word, and so I would love to jump to your history with BlueHat. "The BlueHat Podcast" has been going for a couple of years. This is episode 35. Folks listening to it right now will hopefully be aware that the next BlueHat has already been announced. Hopefully the call for papers is open by the time you're listening to this, and perhaps even applications to register. It'll be back in Redmond October 29th and 30th, 2024. So we're very excited for that. This will be the 23rd edition, the 23rd instance of the BlueHat conference happening in Redmond. And Michael, you were there for the first one. And I think that, you know, there's a lot of stories that we've heard on the podcast already about the kind of content that was presented in those early BlueHats, the kind of people that were there, the ethos. And I'd love to hear from you about that. I'd also really love to hear about culture. I'd really love to hear about what it was like in that time just before BlueHat and during that first BlueHat, and perhaps immediately after. Because there was this sort of reality check that, I wasn't there, but I understand was a big part of what was happening. Like, we need these outside perspectives to tell us what's actually going on in the industry and give us that sort of reality check. You just mentioned culture, and to me that's a question about culture, or that's part of the culture of Microsoft and of the industry. What do you remember from that time that you can share with us?

Michael Howard: Yeah, so the very first BlueHat, I mean, Window Snyder was heavily involved. What was interesting is I did a talk, and I talked about essentially mitigations. So if you look at the Secure Future Initiative, which is what's underway right now, and it has, you know, echoes of Trustworthy Computing back in the day, they are different, and there's good reasons why they're different, which I'm not going to go into right now. But, by the way, I am going somewhere with this. So, you know, the Secure Future Initiative and, back in the day, Trustworthy Computing had this thing called Secure by Design, Secure by Default. And there's also, you know, Secure Operations in the case of SFI. Secure by Design and Secure by Default are kind of interesting. Secure by Design is all about getting things right. Get the code right, get the designs right, and make no mistakes. Secure by Default is recognizing that you never will. You never will get everything correct. And there's good -- there's actually very good reasons for that. Ignoring the human equation just for a moment. When you ship a product or you deploy a product, that product at best is a subset of the security best practice of the day. It's kind of frozen in time, right? And then new, you know, new exploit types evolve, new ways of attacking systems, new weaknesses that, you know, were never thought about in the past. And so your product, you know, essentially, kind of from a security perspective, degrades. Perhaps not the correct word, but regresses. Not because it's got less secure, but because new issues are found. So my focus, even though I was a big fan of working on Secure by Design, like, Hey, don't use strcpy, don't use strcat, don't use strncpy, don't use strncat, don't use sprintf and all the evil brethren. You know, don't use those, which is obviously the right thing to do if you're writing C code. And today, if you're writing modern, you know, C++ code, you should write modern C++, which removes you from all of that goop completely, but that's another discussion. But I also spent a lot of time on Secure by Default, which was recognizing that people will make mistakes in their code. Therefore, we must put mitigations in the operating system. We must put mitigations in the compiler to generate, you know, more secure code. I could talk about that stuff until the cows come home. Perhaps better static analysis tools, perhaps better libraries. Here's a really good example. So I used to do all the root cause analysis back in the day of MSRC issues. It really wasn't my job, but I figured someone's got to do it. Now it's actually a science, you know, something that's actually done, you know, by people whose job it is to do root cause analysis. But the reason why I did it was to see if there were patterns of vulnerability, and if there were, how do we mitigate that? Like, so if you were like, you know, I don't know, let's just make something up. Well, actually, I'm not making it up at all. This actually did happen. Let's say you've got a whole bunch of calls to memcpy, if you're familiar with C, where the buffer sizes are wrong, or incorrectly calculated, whatever. So the problem is, how do you fix that? We wanted to ban memcpy, but we couldn't actually ban it, because there was no good replacement.
So we worked closely with the standardization people and we worked closely with the Visual C++ team, and they came up with a thing called memcpy_s, which is a safer version of memcpy. And don't get me wrong, you can still get it wrong, but at least it makes you think about it in a way that memcpy doesn't. And so we couldn't actually ban memcpy until we had a good replacement. You can't just tell somebody, Hey, that code is really, really bad, you need to, like, just make it better. Well, if there is no way of making it better, you can't really just say the code sucks, right? You just can't. You've got to have a good -- security people are really good at telling people, Hey, your stuff sucks. But that's not helpful, right? That's just not helpful at all. You need to say, Hey, don't do it that way. You need to do it this way instead. So once we released memcpy_s, we could then start saying, Hey, here's a better library function you need to call. If you see a call to memcpy, you need to replace it with memcpy_s, because from this point forward, memcpy is banned. And so, by the way, the way I got here was saying that there's different ways of mitigating things, like compiler settings, operating system settings, better libraries. That was the memcpy story. Better static analysis, better dynamic analysis. The static analysis tool of choice right now at Microsoft is CodeQL. I would love to see the research community really start to build queries using CodeQL to help people. Like, if they have access to some code, say some open source code or some Microsoft open source code or Linux, whatever, think about, if they find an issue, can they build a CodeQL query to actually find that issue? Because that way anyone in the world who uses CodeQL can just take that query and query their own code. So I would love to see the research community creating more CodeQL queries, because that would just help absolutely everybody. Yeah, so my talk was basically on the Secure by Default stuff. Mainly two things. Windows defenses, so address space layout randomization, no-execute, and a bunch of others that have come along since then, like secure exception handling. And then also the compiler, right? So things like /GS stack-based memory corruption detection, support for ASLR, support for NX, and a lot more. One of my favorites actually is when the C++ compiler is creating code around operator new, it actually checks for overflow automatically in the compiled code, which is something that you would normally have to do manually. And the check the compiler emits is only, like, 17 instructions or something, it's really tiny. And yeah, so things like that, you know, just essentially generating more secure code automatically, as opposed to expecting developers to do the right things all the time. Because, you know, people make mistakes. So that was my first talk.
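
To make the memcpy_s story concrete, here is a minimal sketch (memcpy_s ships in the Microsoft CRT and, optionally, in C11's Annex K; the surrounding function is made up for the example). The design point is the one he makes: the safer call can still be misused, but it forces the destination size into the conversation.

```
#include <string.h>

void copyPacket(const char* payload, size_t payloadLen) {
    char buffer[64];

    // The banned pattern: the destination size is never consulted, so
    // any payloadLen > 64 silently smashes the stack:
    //   memcpy(buffer, payload, payloadLen);

    // memcpy_s forces the caller to state the destination size and
    // fails, instead of overflowing, when the count doesn't fit.
    if (memcpy_s(buffer, sizeof(buffer), payload, payloadLen) != 0) {
        return;  // reject the packet rather than corrupt memory
    }
    // ... use buffer ...
}
```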

Nic Fillingham: What do you remember from the audience? I mean, these were -- you were presenting to internal Microsoft employees. There were no external attendees at that point, although there were external presenters. What was the response? Was it mixed? Was it dismissive? Was it celebratory? And again, the reason I'm asking is I'm just really trying to wrap my head around what the culture was like at that time and how it's changed, but then also how did it change and what can we do to continue to push it further down the road of positive cultural evolution.

Michael Howard: So David LeBlanc said something to me years ago, which is as true today as I think it was, you know, back then. Which is, developers want to do the right thing. Sometimes you've just got to tell them what the right thing is. You know, you can't really argue with that point. It's pretty good. And so really, that talk was me saying, Hey, if you're writing code, you need to make sure that you're using these libraries. You need to make sure that you're using these coding constructs. You've got to make sure you have these compiler and linker flags. You know, we'll take care of things on Windows because you're going to deploy on Windows, you know, back then. So, you know, in general, the commentary was very positive. And part of the reason is because we tried to make it as frictionless as possible for developers. You can't just say, Hey guys, you've got to do this, which by the way means you're rewriting your code. That's just not going to happen, right? It's just not. If you've got 15 million lines of code that's been around for 27 years, you're probably not going to rewrite it over a weekend. You're just not, right? So what can you do to incrementally improve the security posture of the code for the minimal amount of effort? And I'm a big fan of that. In fact, to this day -- I wrote a paper the other day for some team within Azure, and I made the comment of saying, Look, whatever we do has to be really frictionless. You know, you don't want to be an impediment to someone doing the right thing. So in general, the commentary that I got from people was really positive, because we'd actually done all the right work. The other thing that I think was important is we didn't just say, Hey, don't use strcpy, use strcpy_s, back then, right? It wasn't just that. It was also, By the way, when you compile your code, there is a flag where we will issue what's called a C4996, which is a deprecated-function warning. I think it was just a warning back then. Basically, if you compile your code and you see a C4996, if you want to get rid of it, replace, you know, for example, strcpy with strcpy_s, strncat with strncat_s, sprintf with sprintf_s, and a bunch of others. So we made it really easy for you to find where you had problems in the code through compiler warnings, but we actually went one step further than that. The compiler -- by the way, just before I carry on, I do realize I'm down in the weeds here. Are we okay with that?

Nic Fillingham: Oh, this is great. Oh, fantastic.

Michael Howard: Okay, okay.

Nic Fillingham: Yes, no, no, no, keep going, please.

Michael Howard: Sorry. Sorry. So one thing we also did, we tried to make things even easier for people. Let's say you've got a call to strcpy, right? So strcpy copies a string. Well, it copies a series of bytes up until a trailing null. And let's say the buffer is called buff. Let's call it buff. And let's say that at compile time, the compiler knows the size of buff. If it does, because it's a constant, then the compiler will actually -- and this is actually -- I loved it. I thought it was magnificent. Some people thought it was sacrilegious what we did. But we would actually automatically change your code for you. Because we knew the buffer size, we would actually change the strcpy to a strcpy_s and put the buffer size in there automatically. And the C4996 warning would go away as well. Some people didn't like that. I loved it, because basically the compiler is doing the right thing and making the code safer. But some people said, Well, you're actually changing what my code does. In my book, compilers do that all the time, you know, optimizing compilers rewrite your code all the time, right? So you might write 20 lines of code and the compiler may actually emit 30, but it runs 17% faster, right? Because the compiler knows how to lay the code out. So we were changing people's code for them under the covers, but I liked it. And you could actually turn it off as well if you wanted to, but we don't want people doing that.
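
The mechanism he's describing lives on in the Microsoft CRT as the "secure template overloads". A minimal sketch, assuming MSVC (the define and the overloads are Microsoft-specific, and they only apply in C++ where the destination is a true array whose size the compiler can see):

```
// Opt in before including the headers.
#define _CRT_SECURE_CPP_OVERLOAD_STANDARD_NAMES 1
#include <string.h>

void greet(const char* name) {
    char buff[32];
    // Because buff is a char[32] known at compile time, this call binds
    // to a template overload equivalent to
    //   strcpy_s(char (&dest)[32], const char* src),
    // so the destination size is supplied automatically and no C4996
    // deprecation warning is emitted.
    strcpy(buff, name);
}
```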

Nic Fillingham: So I think, if I could summarize here, the big takeaway was, you know, developers and engineers want to do the right thing. And so when you present essentially research findings, you know, let's call them research findings, to them, you don't simply just say, Here's where you're doing it wrong, but you say, Here's where it's being done wrong, and here is a somewhat frictionless change that you can make in order to do it better. So instead of doing strcpy, it's strcpy_s. Then you almost immediately are getting support from your audience, because you're not just telling them that they're wrong. You're acknowledging that you understand what they do and saying, Here, we worked out a better way to do it that really isn't much different from what you're doing right now.

Michael Howard: Yeah, 100%. And you touched on researchers there. And by the way, I will touch on culture as well. I haven't ignored that. If you look at a lot of the things that we added to the compiled code and also to the OS, they were a direct outcome of things that had been found by researchers. Let me give you a really good example. So Code Red -- fun fact, I was actually the security PM for IIS at the time Code Red hit. It was a bug in Index Server code that was actually in IIS. It was shipped as part of the default product. That's a whole 'nother discussion about attack surface of products.

Nic Fillingham: Can you quickly remind us what Code Red is slash was?

Michael Howard: Yeah, Code Red was a worm. It went around the internet. It took advantage of an interesting bug. It was a memory corruption bug in Index Server code that was actually in the web server. So you could actually do index lookups of, like, files in the, you know, in the operating system through a web UI. And so the Index Server code, or part of it, was actually an ISAPI extension, which is some IIS code. It was written in C, and there was a memory corruption problem, a buffer overflow problem. It was essentially a size mismatch between a count of characters and a size in bytes. It was actually a Unicode string, and rather than copying the count of characters, it copied the size in bytes, which was twice as big, so it ended up overflowing a buffer. So I was actually the IIS security PM at the time, so that bug, that worm, actually happened on my watch while I was in IIS. Sometimes you learn by baptism of fire, right? So one of the changes that we made -- the attack was actually quite elegant, and it was essentially a bounce attack. It would actually bounce off somewhere else in memory, and it would take advantage of corrupting an exception handler on the stack. So we ended up adding secure exception handling to the compiled code and then into the OS as well. You had to do it in two places. Yeah, so that was a direct outcome of Code Red. If you look at things like ASLR, you know, that was because researchers were finding bugs where there was, you know, data at specific locations in memory, which means you could exploit it, you know, directly. And so by randomizing it, it raised the bar on that. No-execute, which was called DEP at the time, Data Execution Prevention -- that, again, was just a direct outcome of researchers telling us, you know, Hey, you shouldn't be executing data pages. And when you add the two together, NX and ASLR, they're actually stronger than the sum of the parts, or than them individually, I should say. So, yeah, that was direct feedback from researchers.
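
Here is a reconstruction of the bug pattern he's describing (a sketch for this transcript, not the actual IIS source; names and sizes are invented): the guard compares a character count against a byte size, so a Unicode payload can be up to twice what the buffer holds.

```
#include <cstring>
#include <cwchar>

void process(const wchar_t* input, size_t inputChars) {
    wchar_t decoded[240];  // 240 elements, but sizeof(decoded) is 480 bytes

    // Buggy pattern: the check mixes units. Inputs of up to 480
    // characters pass a guard meant for 240, and the copy then writes
    // up to twice the buffer's capacity:
    //
    //   if (inputChars < sizeof(decoded))                         // bytes!
    //       memcpy(decoded, input, inputChars * sizeof(wchar_t)); // 2x overflow
    //
    // Fixed: compare like units -- element count against element count.
    const size_t maxChars = sizeof(decoded) / sizeof(decoded[0]);
    if (inputChars <= maxChars) {
        std::wmemcpy(decoded, input, inputChars);  // counts elements, not bytes
    }
}
```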

Nic Fillingham: And when you say feedback from researchers, are we thinking like in today's structure where a security researcher is submitting findings in some way to MSRC, it gets accepted, it gets turned into a case, there's some sort of root cause analysis, and then learnings are, you know, essentially derived and then they go out to the engineering teams to go fix it?

Michael Howard: Yeah. I mean, in a nutshell. Don't get me wrong, there are multiple incoming sources, right? And then we see these patterns. Some of them are from researchers, some may be a little more nefarious. But for the most part, yeah, it's a combination of things. We say, Hey, you know, this particular kind of anti-pattern takes advantage of this API, we need to fix that, we've seen it five times this year, for example. Or this particular sort of anti-pattern or this issue takes advantage of predictable memory locations, you know? And after seeing those a few times, it's like, Okay, we need to fix this. And I'll be honest with you, you can't just make massive changes on the fly. Because, you know, when you've got a, you know, a billion, 2 billion, whatever the number is, devices running that product, you know, even if you upset 1%, it's the law of large numbers, right? A small percentage of a very large number is still a large number of affected customers. So we had to be very, very careful about how we deployed some of these things. And that's why -- one of the 27 reasons why, you know, Vista was late was because we did actually make a lot of investments in Vista, especially around ASLR and tighter use of NX, and, you know, least privilege, non-admin accounts and that sort of stuff.

Nic Fillingham: I thought Vista was late because they were trying to make the start button a circle and they just couldn't.

Michael Howard: Well, okay, 28 reasons.

Nic Fillingham: That was funnier in my head. Let's come back to culture again. I feel like I keep bringing you on little -- I've asked the culture question, and then I keep pushing you off in another direction. Maybe this is a good story time. You know, tell us about some of the folks that were there, the people that were not Microsoft employees that were brought in to speak at that first BlueHat, and just, you know, the amazing things they were telling us, but then also the reaction of the folks in the room, both sort of up and down the leadership chain.

Michael Howard: Yeah, not necessarily the first BlueHat, but certainly BlueHats, you know, in general. To me, bringing in external researchers was an example of cultural change within the company.

Nic Fillingham: Oh, you've got a point.

Michael Howard: Right? The -- you could argue that in some cases, some of the communications, you know, between a company, not just Microsoft, any company, and people who are finding bugs in your product can sometimes get a little adversarial. Let's be brutally honest, right? You got someone telling you your baby's ugly. Well, you know what? Your baby's ugly. And so sometimes, you know, there can be pressure to fix a bug. The problem with that is, you know, if you've got a product that's being used in, you know, n number of languages and n number of supported versions, you know, it can be hard to make sure those fixes are done and done correctly. And sometimes that may lead to patches going out, you know, perhaps a little bit later than the finder expected. You know? That's just life. So to me, bringing in these finders, these researchers, was a pivotal moment for Microsoft, because it meant, Hey, you know what, we need to listen to these people. They know what they're doing, and we need to learn from them. And every single session I ever went to and listened to an external researcher, the audience was in awe. They were aghast at what they saw. So let me give you a really good example. So I remember, so David Litchfield -- back in the day, there were really sort of, like, three people finding security bugs in databases. There was David Litchfield in the UK, Alex Kornbrust in Germany, and Cesar Cerrudo in Argentina. And they were finding the bulk of bugs in databases. And he, David, had found the bug that led to Slammer. He put a proof of concept out. That was the last time he actually put a proof of concept out, because it did lead to Slammer. The bug is very simple. I actually used the bug, the code example, to show how that code, if it existed today, would be squished by, you know, a dozen different SDL requirements, and how it would just -- it wouldn't get through. I've publicly shared that code many, many times. Basically, if anyone cares, it's just a call to sprintf. And so David released a proof of concept that turned into Slammer. We knew Slammer was coming a couple of hours before it actually really hit, because we saw a lot of activity on UDP 1434, which is the SQL Server management port. So when people say SQL Server was affected, it actually really wasn't. It wasn't the core engine that was affected. It wasn't TCP 1433. It was UDP 1434. It's still a SQL Server product, but it's not the core engine. It's sort of the management fluff around it. So David gave a talk the day before BlueHat. He spoke to a whole bunch of execs, and in the audience was Paul Flessner, who was heading up SQL Server at the time, and a bunch of other execs, you know, Allchin, Valentine, you know, the usual rogues' gallery. They were all there, and they were completely eating out of the palm of David's hand. They were listening with absolute bated breath and had so many questions for David about all sorts of things. Then the next day was the real BlueHat session, when David was going to talk to, you know, Microsoft employees. What amazed me is SQL Server was one of the big teams at Microsoft. About 75 to 80% of the entire SQL Server team was in the audience to listen to David. And to me, that was a huge indication of a big cultural shift within SQL. It wasn't a case, and I'm not saying this ever happened, but it wasn't a case of them sticking their head in the sand and hoping the problem goes away. I'm not saying that happened; I don't believe it did.
And the fact that 80% of the people turned up to listen to David means they wanted to learn what the problem was. And what happened since then, the SQL Server team did a lot of work. And don't get me wrong, Slammer was a real wake-up, but they did a lot of work, not just in the code. They rewrote big chunks of the code, but they also, whenever they installed, they would install with just, like, a minimal set of features. You've got to realize, if you don't need UDP 1434, why install the functionality behind that, right? There's no need to if you don't need it. But if it is there, then it can be whacked, right? Because the code is running. But if it's not there, it can't get whacked. So they ended up turning that sort of functionality, and a lot of other functionality, off by default. That same mantra was true in IIS, right? So again, you know, on my watch, I was on IIS 3, 4, 5 and the beginning of 6. IIS 3, 4, and 5, you know, had a lot of security features: Kerberos, certificate services, the whole nine yards. TLS, client-side certificates, smart cards, the whole works, right? AD integration. But a lot of that functionality was enabled by default. And there was a big cultural change based on what we'd seen. It wasn't just a case of getting the code right. It was all about being secure by default as well. So we turned a whole bunch of features off in IIS 6. And in fact, IIS 6 by default, when it runs, is just a boring old static web server. It does nothing. You've got to opt in for everything. And so that to me is a really good example of the whole Secure by Default mantra, which is now in SFI and was in Trustworthy Computing back in the day. So I think there's a few things that have to happen, right? So first of all, there were issues, right? There were many, many issues, and we had to do something. And it wasn't just a case of the teams having to do something, it was also management. In other words, Bill saying this is a priority. And when the Trustworthy Computing memo came out, that really was the rallying cry for myself and many, many others at Microsoft to roll our sleeves up and get stuck in. And we followed those mantras of Secure by Design, Secure by Default -- not just code quality and design issues, but also shipping with low attack surface and with, you know, functionality disabled by default if it wasn't required. In fact, I'm a big fan of that today. I think everyone should do that: look at what 90% of the people use and just ship with that functionality. And, you know, the other 10% can turn on the features that they need. You know, we're doing that in Windows, right? So SMB is finally going away in Windows, Windows Server 2025, I think. I'm sure that you can turn it back on or deploy something if you need it, but it's not there by default. And, you know, that'll cause some headaches for some people, but it can't be attacked by default because it's not there.

Nic Fillingham: Thinking back to that session, David Litchfield's session, and then thinking about the guidance you gave a little bit earlier in our conversation about, you know, engineers, developers want to do the right thing. They don't want to be just told their baby's ugly or where there's bad code. They want to be told, Okay, so here's how you can make it better, and here's the least-friction, the most frictionless way to do it. I'm just fascinated. Did David essentially do the same thing? Did he say, Here's my research, here are the issues that I've found, here's how I think you should at least maybe think about mitigating this? Or did David present some ideas for how to make it better or how to mitigate these issues he discovered?

Michael Howard: Yeah, I think all the researchers do. And that's one thing I like. I do like this about David, right? He's not just a case of a person saying, Hey, your baby's ugly, and then stepping off stage. You know, he says, Your baby's ugly, but, you know, here's where to put the lipstick [laughter]. That's a terrible analogy. That's lipstick on a pig, right?

Nic Fillingham: Lipstick on a baby.

Michael Howard: Yeah.

Nic Fillingham: Does that make it better?

Michael Howard: No, I don't think it does. That's a bad analogy. But the point is that he would also say, Look, you know, here are the issues you're seeing. You know, this kind of thing needs to stop. Here's a better way of doing things. And here are some mitigations. And I think for the most part, a good researcher will do the same, right? But it's the same with me, you know, inside of Microsoft, right? I'm not just going to tell you, You can't do that. I'm going to say you can't do that, and then follow it up with how to do it. And if for some reason the "how you need to do it" can't be done, which I have come across, here's something that's an okay, you know, silver medal, right? Here's the second-least sucky thing to do. We see this right now in Azure, right? So there's a big push to get rid of credentials, like, just going away. The solution there for processes, for example, is to use managed identities. But some features in Azure, because of where they reside in a stack, can't use managed identities. So there's an alternate solution for that. Is it as good as managed identities? No. Is it way better than managing the credential yourself? Oh, heck yeah. Right? Because that way the infrastructure takes care of rotation of the credentials, auditing access, the whole nine yards, right? Including from a compliance perspective. Whereas if you're rolling your own, you probably got it wrong anyway. But yeah, I think for the most part, good researchers will say, Don't do ABC, do D, E, and F instead. And if you can't do D, E, and F, then here's a, you know, G, H, I that is reasonable enough.
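
For readers who want to see what the credential-free pattern looks like in code, here is a hedged sketch using the Azure SDK for C++; the headers, class names, and paging calls below are per that SDK as best recalled and should be treated as assumptions, and "<account>" is a placeholder.

```
// Sketch: authenticate with a managed identity instead of a stored secret.
#include <azure/identity/managed_identity_credential.hpp>
#include <azure/storage/blobs.hpp>

#include <memory>

int main() {
    // No connection string, no key, no secret in code or config: the
    // platform issues and rotates the identity's credential.
    auto credential =
        std::make_shared<Azure::Identity::ManagedIdentityCredential>();

    // "<account>" is a placeholder for a storage account name.
    Azure::Storage::Blobs::BlobServiceClient client(
        "https://<account>.blob.core.windows.net", credential);

    // Access is granted via Azure RBAC on the identity, so rotation,
    // auditing, and revocation live in the platform, not in this code.
    for (auto page = client.ListBlobContainers(); page.HasPage();
         page.MoveToNextPage()) {
        // ... enumerate page.BlobContainers ...
    }
    return 0;
}
```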

Nic Fillingham: Yeah, I mean, I set you up to reiterate that point, because I think anyone listening to this podcast, you know, if you want to attend the BlueHat conference, we'd love for you to come, but we'd also love for all researchers out there and researcher-adjacent folks to submit papers to present at the conference. And I think that's a really great tip for anyone writing a paper or submitting something to the call for papers: obviously summarize what you've found and call out the issues that you've discovered, but where possible, try to provide guidance and solutions and ideas for how to go and tackle it and make it better. It feels like that is the key. I don't want to use the ugly baby analogy again, but the key to acceptance from the folks that have to go and fix the product, or move the entire industry forward, is that they want to be pointed in the right direction. They want to do the right thing, as you say. So as you're constructing a paper or an abstract for a call for papers: what have you found, what's the issue with it, and then obviously what are some ideas that you would recommend to go and implement a fix or completely mitigate it in full? Would you agree with that?

Michael Howard: Yeah, I do. And in fact, when I'm reviewing papers for security conferences, I'm not going to say I'm going to ignore a paper that's just, like, Hey, your baby's ugly. Because the thing is, that often grabs people's attention, right? It's like the click-baity headline. Whenever I present at conferences, I may spend 1/4 of it explaining the problem, but the other 3/4 is how to fix it. All the different ways you can possibly fix this. Some are practical, some are not. Some might be practical for some people, but not for everyone else. That's fine, but a big thing to me is actually understanding the issue and the root cause of the issue, so that you can actually know why you're fixing things. So I would definitely say, yeah, if you're presenting an abstract or a paper, you know, do say, Hey, we found an awful-baffle gloop, and here's how, you know, we've come up with, you know, novel ways of mitigating awful-baffle gloops in the wild. Because again, you want to progress the industry, right? And mitigations and how to solve things, and frankly, rethinking how things are done, you know, is at the root of everything that we do in security, right? Because it's just an ongoing cat-and-mouse game of chess. You know, that's what it is, that's exactly what this is.

Nic Fillingham: Cat and mouse -- cat and mouse game of chess. I love that.

Michael Howard: You like that? I'll roll those two together.

Nic Fillingham: And I really think we need a better analogy than just calling your baby ugly and putting lipstick on it. So, you know what, if you're listening to this episode and you've got a better analogy for us, let us know. I don't even know if we have comments. You can email us, bluehat@Microsoft.com. Michael, we are coming up on time here, so I was going to open the floor. You've got a new book coming out, if you want to talk about that, and obviously you co-host "The Azure Security Podcast". I believe you guys are still going strong there.

Michael Howard: We are. Actually, in August, we will record episode 100.

Nic Fillingham: Oh, congratulations.

Michael Howard: And it's a special episode. We will not have a guest. We have a very interesting set of topics to talk about. Very interesting, so yeah.

Nic Fillingham: So that'll come out in August? So it'll be recorded in August?

Michael Howard: It'll be recorded the first week of August. We've still got episode 99 to do; that's being done next week. But yeah, we expect to have that out sort of mid-ish August. To be honest with you, we actually had the thoughts for episode 100, like, what we should cover, probably at the beginning of the year, because we knew we'd hit episode 100 midway through the year. It's a lot of fun. Like, I'm going to be honest with you, it's a lot of fun doing it. We have so much fun. We come at things from different perspectives. You know, Gladys has her own viewpoints on things. Sarah has her own. Well, everyone knows Sarah has her own viewpoints on things. You know, Mark brings the sort of adult-in-the-room perspective. I come from a mainly app dev, cryptographic perspective. We all have different viewpoints on things. And we've had an amazing set of guests over the years. And even that started as, you know, literally Mark and I were in a pub in Seattle at some tech event, I don't know which one. And over a beer -- actually, I think mine was a gin and tonic -- I said to Mark, "Hey, how do you fancy doing a podcast?" And he said sure, and that was -- that was basically how it started.

Nic Fillingham: The rest is history.

Michael Howard: Yeah, the rest is --

Nic Fillingham: Awesome. So is it Azure podcast dot...? What's the URL, sorry?

Michael Howard: No, you go to aka.ms whack azsecpod.

Nic Fillingham: aka.ms/azsecpod, perfect. And do you think we might see you at the next BlueHat? You might get up to Redmond for that, or could we rope you into something BlueHat-ish?

Michael Howard: You can rope me in, yeah, if you're paying for dinner.

Nic Fillingham: Oh, I reckon we can shout dinner.

Michael Howard: Shout dinner, oh my God, spot the Australian. You can shout dinner. Yes, you and I are the only two that know what that even means. You're going to shout?

Nic Fillingham: Really?

Michael Howard: You're going to shout?

Nic Fillingham: Oh, maybe.

Michael Howard: For those listening, it's a very Australian, New Zealand way of saying someone else is going to pay for it. It's like, who's shouting, you know, who's shouting the beers, you know.

Nic Fillingham: Yeah. I could have said, I'll shout you tucker, but that would be even more esoteric. What do you like to eat when you're up in Seattle? What do you -- what can't you get in Austin that you like when you're up here in the Pacific Northwest?

Michael Howard: Fish. I love salmon. You can't beat the seafood in Seattle, right? You just can't. Actually, my wife and I were up there not long ago for a friend's wedding, and I ended up spending a week working in Redmond. And I went to a Mexican restaurant, and they had this habanero sauce, and I was like, I'm really excited for this habanero sauce, you know, being in Texas. And basically the habanero sauce was obviously Northwest habanero sauce, and it was basically ketchup. You know, I was really disappointed.

Nic Fillingham: They waved a habanero pepper over the top of the pork.

Michael Howard: I think so, I think so, and that's about it. Yeah, it was pretty bad.

Nic Fillingham: Well, Michael Howard, thank you so much for being a guest on BlueHat Podcast. We'd love to have you back. We'd love to have you at the conference. Where can we follow your shenanigans online apart from the podcast? Are you on the Twixts? Are you on the LinkedIns? Where do people go to find you?

Michael Howard: I'm on the LinkedIn and the X or Twitter, whatever it's called this week. So yeah, it's michael_howard on Twitter or X. Actually, to be honest with you, I really haven't done that much on Twitter or X. I'm going to ramp that up now that I'm in a different position in John's organization. So yeah, I'll take this as a reminder to spend more time on that. But yeah, I'm on LinkedIn as well.

Nic Fillingham: I have one last question for you. Do you have any old OG swag from BlueHat? Did you keep anything? Did you keep anything from --

Michael Howard: No. I don't, no. I've got a whole bunch of Windows stuff, like, the original Windows NT DVDs from before it was released, back in 1991. But no, no OG swag from BlueHat.

Nic Fillingham: Alright, well I keep asking that of folks that have been to some of the early ones, what OG swag do they have? But anyway, Michael Howard, thanks so much for being on the podcast. We'll talk to you another time.

Michael Howard: Absolutely, thanks Nic.

Wendy Zenone: Thank you for joining us for The BlueHat Podcast.

Nic Fillingham: If you have feedback, topic requests, or questions about this episode --

Wendy Zenone: Please email us at bluehat@Microsoft.com or message us on Twitter @MSFTBlueHat.

Nic Fillingham: Be sure to subscribe for more conversations and insights from security researchers and responders across the industry --

Wendy Zenone: By visiting bluehatpodcast.com or wherever you get your favorite podcasts. [ Music ]