CyberWire Live - Q4 2020 Cybersecurity Analyst Call
There is so much cyber news that, once in a while, all cybersecurity leaders and network defenders should stop, take a deep breath and consider exactly which developments were the most important. Join Rick Howard, the CyberWire’s Chief Analyst, and our team of experts for an insightful discussion about the events of the last 90 days that will materially impact your career, the organizations you’re responsible for, and the daily lives of people all over the world.
Rick: Welcome to the CyberWire's quarterly analyst call. My name is Rick Howard, I am the CyberWire's chief security officer and chief analyst. I also host two of our podcasts, CSO Perspectives on the pro side and Word Notes on the ad-supported side but, more importantly, I am also the discussion leader for this program. I'm joined by my good friend, Merritt Baer, she is a principal in the Office of the CISO for Amazon Web Services. Joining her is Ben Yelin, the program director for public policy and external affairs at the University of Maryland Center for Health and Homeland Security. I promise I won't make a joke about that again for the fourth time, Ben. He's also Dave's co-host for the Caveat podcast. Merritt, Ben, welcome to the show. You can say hello, you can talk. Wave at everybody.
Ben Yelin: Good to see you again, Rick, and good to meet you for the first time on this call, Merritt.
Merritt Baer: Yes. Thanks for having me, both of you.
Rick: So this is the fourth show in the series where we try to pick out the most interesting and important stories of the last 90 days and try to make sense of them. Topics we considered for this show, but didn't choose, for various reasons: one of them was the Sandworm indictments. If you want any details about that particular thing, I'd highly recommend Andy Greenberg's book on the subject. It's excellent. We were gonna talk about Microsoft and Cyber Command's takedown of Trickbot. Two interesting bedfellows there that you don't normally associate with each other but we decided to skip that one. And of course, we would be remiss if we didn't mention the FireEye/SolarWinds hack that I imagine will pop up at our next quarterly analyst call. So, in the meantime, we'll start with Merritt's topic. We're talking about algorithms and security and how practitioners get involved with making sure algorithms are doing what they're supposed to do. So, Merritt, what should we know about this topic?
Merritt Baer: Yeah, so thanks for having me, everyone. In the Office of the CISO at AWS I look at how we are connecting the dots to do security, AWS runs on AWS, and then I also talk to customers about how they're securing themselves. And I think, as Eric Brandwine, one of our VP distinguished engineers, sometimes says, "Security now is just a big data problem." And I think that we feel tempted to expect that if we could just get the right data, if we could just get the right signal to noise, that would be the panacea of security. But I think that one of the underlying rivers that flows through the ways that we get our information these days is that so much of what we are doing, when we look at these vast data sets (and, of course, cloud allows you to do virtually unlimited analytics and compute on top of those vast data sets that you can now hold onto), is really about the ways that those feed into machine learning algorithms.
Merritt Baer: And I think one of the interesting ways that I've been thinking about how we could be better about our security is to think through where those algorithms come from and how dangerous, or at least imprecise, it can be that we don't have access to that underlying data and, therefore, we can't validate that it's the right data. In fact, to the contrary, we know from a lot of research out there that algorithms are biased. So, you look at whose resumes get reviewed at a greater rate and it's people who do not have ethnic-sounding names, for example. Or we look at who credit bureaus assign greater weight to, and there are ways that this ricochets through our ecosystem. But I think this week I was reading a little more about adversarial AI and thinking, in particular, about the ways that this can be considered not only from whether your data set is genuinely the right one, but also the ways that adversaries could actually try to poison it.
Merritt Baer: And there's one example: there was a DerbyCon talk last year where a few guys talked about how they had gamed a Proofpoint algorithm that would score your email on its spam likelihood. These guys were able to basically inject their own data. They created a copycat model based on what they had found in the outputs and then they were able to basically steer or poison that. So, I think, this is, in my view, a next corner to look around: not just, for your algorithms, looking at the right data sets, but also, how are potential adversaries deliberately tampering with the security of your security?
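The poisoning attack Merritt describes can be sketched in miniature. The following is a toy token-frequency spam scorer, a hypothetical illustration and nothing like the Proofpoint system from the talk: it rates a message by how often its tokens appeared in spam versus ham training mail, and mislabeled samples injected by an attacker drag a spammy message's score down.

```python
from collections import Counter

def train(samples):
    """Count token occurrences per label ('spam' or 'ham')."""
    counts = {"spam": Counter(), "ham": Counter()}
    for label, text in samples:
        counts[label].update(text.lower().split())
    return counts

def spam_score(counts, text):
    """Naive per-token spam probability, averaged over the message."""
    scores = []
    for tok in text.lower().split():
        s = counts["spam"][tok]
        h = counts["ham"][tok]
        if s + h:
            scores.append(s / (s + h))
    return sum(scores) / len(scores) if scores else 0.5

training = [
    ("spam", "free prize winner click now"),
    ("spam", "claim your free prize today"),
    ("ham", "meeting notes attached for review"),
    ("ham", "lunch tomorrow at noon"),
]
clean = train(training)
msg = "free prize click"
before = spam_score(clean, msg)

# Attacker poisons the training data: spammy tokens mislabeled as ham.
poison = [("ham", "free prize click now winner")] * 10
after = spam_score(train(training + poison), msg)

print(f"score before poisoning: {before:.2f}")
print(f"score after poisoning:  {after:.2f}")  # drops sharply
```

The real attack was subtler (it inferred the model's behavior from score outputs), but the core idea is the same: if an adversary can influence what your model trains on, they can steer what it believes.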
Rick: So, Merritt, this is a new development, 'cause it was just five years ago that we were still having trouble just collecting the telemetry of our network devices on prem. We didn't have enough hard drive space to collect all that and, like you said, with Amazon and other cloud providers, we can essentially put all of our information up in the cloud for a relatively cheap price, I mean, it's almost free at this point, right? So, the ability to have all that data, and then to run machine learning algorithms on it, is a new thing, and security teams are just now coming around to that whole idea.
Merritt Baer: Yeah, exactly. I mean, this is, in some sense, our homework: the fact that we just have access, we have more data, we have the ability to run more analytics, and I think that's just a new normal. And, don't get me wrong, I'm not urging security teams to stop looking at data. This isn't something where we avoid it, you don't cut off your nose to spite your face, but it's just something to be aware of and something to be more conscious and more critical of when you're looking at what data you are accepting as your baseline. And, by the same token, even as just a citizen walking around in the world, being conscious of the way that this can play into the algorithms you interact with. And I think there's a lot of ways that we can feel the impacts of this, and very few ways that we can control the input.
Rick: So, the security researchers now, like the ones you mentioned at DerbyCon, they're just now getting their hands around what we might do about this in very niche use cases, right? But do you envision a place where this is gonna be a network defender's purview, where it's part of our skillset that we know how to do this?
Merritt Baer: Yeah, it's interesting, because there are a couple of academic papers, we can give the links to CyberWire listeners if they're interested, but one of them refers to these as no-box adversarial attacks, which is a reference to black box and white box, and the general feeling right now is that these are academic papers. They're identifying the possibility of doing this rather than identifying real-world scenarios where it has happened. But I think that we should just expect this these days because of the nature of cloud computing, abstracting away the code. Now you interact with your cloud provider, or you are interacting on a console, so you've got infrastructure as code, and by that I mean you take your Terraform and you spin that up and, of course, there are real-world assets that are spinning up in response to your code commands.
Merritt Baer: But you, as the user, interact with it as JSON files and whatnot. So, this is where it is infrastructure as code, and from that you can abstract away to security as code, until that line between the academic and the real world starts to blur into the operational. It becomes almost unimportant. I mean, I think you can assume that if it's possible, then it's happening.
Rick: So, while we're talking about this, Jen, can you put up the first poll question for the audience? And, by the way, all you audience members, if you have a question about anything we're talking about today, just put 'em over in the chat channel and we'll try to answer at least some of them. Alright? So, the first question to everybody here is: do you trust your anomaly detection software, if you're running it? So you guys go ahead and please check in on that, whatever you think you're doing in your own organization. So, Ben, let me turn the question over to you as our resident legal expert. Is there some legal angle here that we need to think about, or is it all purely technical, learn how to understand machine learning algorithms?
Ben Yelin: So, good question. I always like to find the legal angle to all these topics and I think this one is more on the technical side. AI, in general, is making its way into the legal world because it's been so useful for criminal prosecutions: county police departments and state police departments using machine learning to try and profile the person that you think is gonna be a likely criminal, and to make actionable decisions based on that algorithmic learning. I have concerns about it. I mean, one thing that I think this discussion helps illustrate for us is that we're still, kind of, in the infancy of the field of algorithms and algorithmic learning, and all the other systems that we use for law enforcement purposes are the subject of inherent biases, things that have built up over hundreds of years, potentially thousands of years.
Ben Yelin: I think, and I wonder what Merritt's perspective is on this: is it already too late to change that in the world of machine learning and algorithms? Has the train gone too far down the tracks on that? Or, in other words, would this be the one domain left where we might be able to get in on the ground floor and say, "Let's stop this before some of these biases start to become institutionalized"?
Merritt Baer: Yeah, it's interesting, and I have a law background as well. So, the point where I became anti-death penalty was in law school, when I looked at the numbers of who gets convicted and how they get sentenced and just the disproportionality across race and other factors, but race as a primary one. And then you look at policies like the war on drugs that were deliberately crafted to disproportionately impact swathes of the population. So it's hard for me to look at the use of algorithms in sentencing and think, "Oh great, we're gonna take what's already come before and train a model to recreate that," to your point. So I think that that's a fair point, but I also feel like there's a little bit of that optimism, or that sweetness, that drives it.
Merritt Baer: I think the reason that folks want to use algorithmic software is the same reason that they want to have sentencing guidelines. The idea is underpinned by some fundamental desire to standardize and make more fair the process itself. So, I can empathize with that, but I think it just speaks to the need for us to be inquisitive, to know what data goes into those models, to be deliberate about the ways that we track what outcomes result. You know, I would love it if we saw more about what actual outcomes result from the ways that we choose to do sentencing. I have not seen anything that speaks to the fact that we are doing criminal justice better in America, and yet we disproportionately imprison our own population.
Merritt Baer: So I would love data, but I also think that, on another level, even outside of the criminal justice system, there's the idea of recourse for an individual who's walking through the world and interacting with these algorithms, whether it's through their likelihood of getting approved for a mortgage and the ways that redlining has worked historically. There are all these ways in which access to the data set that underpins the decisions that you're now experiencing, as a human in the world, is, to me, a legal repercussion of this same underlying problem. But what I've found is a new lens on it, where today I was really thinking about it in the context of the insights that your security team is trusting, and the vendors, and the kinds of inaccessibility to that underlying data set, and what does that mean for how much and when we should trust it?
Rick: I read Malcolm Gladwell's Talking to Strangers last year and there's a section in it about how judges make decisions about whether or not they should give bail to potential crime doers, and they almost always get it wrong, right? And they tried algorithms, and the algorithms get it right way more often than the judges do. But the point he was making was that the algorithms are gonna do what we train them to do, and if there's nobody watching, then we can get into trouble, like you were talking about, Merritt, with some of those bad things that are going on right now. So, what I hear from you, yeah go ahead.
Merritt Baer: Smarter folks than I have written books about algorithmic bias and I think it's really worth doing homework in that area. But until recently I hadn't put the lens on it of how this really impacts operational security and the extent to which we depend now on the insights we can extract from that raw data and how, to your point, the fact that we are human means that we are imperfect, right? So we should, on some level, expect biases. Daniel Kahneman would say that biases are just shortcuts and we have them for a reason. So, in a sense, it's not a dirty word, it's something to sit with and be aware of and, by that same token, to feel galvanized by and to feel a call to action. We should be questioning our own insights.
Rick: We got a question from the audience, this is from the fast and the curious 17, fantastic screen name by the way, and it goes right along with what you were saying, Merritt. This is their question: how are these algorithms the same as, or different from, what we already know about our network? So, that implies that this is a security function, that we'd be looking for this.
Merritt Baer: Yeah, so actually, this draws on my previous mental thread here about how the distinction between the academic and the operational is blurring away now. So, one of the things that I like to talk about is our automated reasoning group. This is a team at Amazon, like math PhDs, and they use formal reasoning, or formal methods, which is higher-level maths, to describe what we can know about a subject. So, let's say an IAM permission, identity and access management. These policies can get really hard to parse and, if you were to play out every possible outcome, it would be really difficult, or impossible, for a human to analyze those possible escalation paths and outcomes, and I think that it would almost be a fool's errand, because those permissions can be changed at any time and so on and so forth.
Merritt Baer: Well, machines can do this, and so at Amazon, for example, we have a tool called Zelkova that is under the hood for our IAM access analyzer and VPC access analyzer, and it allows you to know, definitively, which IAM policy is more permissive than another. And it's doing that by parsing the computer code of what those permissions say. Similarly, we have a tool called Tiros that will analyze network reachability: do you have an internet-facing endpoint? Well, you can have a really long set of resources and role policies, but a computer can parse that, totally definitively. And these are interesting because they're in contrast with algorithms where we have abstracted away from the raw data a set of things that we think to be true and, therefore, we grab a higher-level analysis from there.
Merritt Baer: The automated reasoning group is describing the world as it is. These are insights that are based on fact. I mean, if you believe in math, then it is true. [LAUGHS] I'm not joking, 'cause I do sometimes have conversations with customers who don't understand logical separation in contrast with physical separation, and the fact that, if you believe in math, then it is as effective, or more. And this goes back at least as far as Euclid: you don't go out and try to count every prime number, we just know the infinitude of primes to be true. And so, by that same logic, we can know certain things, and I think that that ability to have real knowledge in real time is another benefit of cloud computing, in part because you've abstracted to that, you know, security as code and, in part, because you're using math.
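The question Zelkova answers, which of two policies is more permissive, can be sketched in a few lines. Zelkova itself reasons symbolically with SMT solvers over the full IAM semantics; the toy below instead enumerates a tiny, made-up universe of actions and compares the sets each wildcard pattern allows. It illustrates the question, not the real implementation.

```python
from fnmatch import fnmatch

# Tiny illustrative universe of actions. Real IAM has thousands,
# which is why a real analyzer reasons symbolically, not by listing.
ACTIONS = [
    "s3:GetObject", "s3:PutObject", "s3:DeleteObject",
    "s3:ListBucket", "ec2:StartInstances", "ec2:StopInstances",
]

def allowed(policy, universe=ACTIONS):
    """Set of actions permitted by a policy's wildcard patterns."""
    return {a for a in universe for pat in policy if fnmatch(a, pat)}

def more_permissive(p, q):
    """True if p allows a strict superset of what q allows."""
    return allowed(p) > allowed(q)

broad = ["s3:*"]
narrow = ["s3:GetObject", "s3:ListBucket"]

print(more_permissive(broad, narrow))  # True
print(more_permissive(narrow, broad))  # False
```

The payoff of the formal-methods version is exactly what Merritt says: the answer is definitive for every possible request, not just the ones you happened to enumerate.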
Merritt Baer: You know, in a sense, machine learning is like a mix of stats and math and probability, and they always say machine learning is the anti-Fight Club: you do talk about it. The first rule is you talk about it. So, I don't mean to say that machine learning is not useful at all, but I do mean to say that we should apply a lens of skepticism, we should be looking at the underlying data sets, we should be analyzing what biases went into our selection of those or how they could be corrupted, polluted, or drifted. And that is different from pure statements about the reachability of our networks, or about the permissiveness of our policies.
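The network reachability question Tiros answers can likewise be sketched as graph search. The edges below are a made-up example; a real analyzer derives them from route tables, security groups, and network ACLs, and proves reachability over all possible packets rather than one toy graph.

```python
from collections import deque

# Hypothetical network: an edge means "traffic can flow this way".
edges = {
    "internet": ["igw"],
    "igw": ["public-subnet"],
    "public-subnet": ["web-server"],
    "web-server": ["private-subnet"],
    "private-subnet": ["database"],
}

def reachable(src, dst):
    """Breadth-first search: does any path exist from src to dst?"""
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(reachable("internet", "database"))  # True: an exposure path exists
print(reachable("database", "internet"))  # False: no such flow is modeled
```

This is the sense in which "do you have an internet-facing endpoint?" is a question with a definitive yes-or-no answer, in contrast to the probabilistic insights a machine learning model produces.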
Rick: Well, that's all good stuff, and I think we could talk about this for the next 25 hours. But let's move on to the second topic. Good stuff, Merritt. Ben, you have a constitutional case that we need to think about here and how it might affect us in the future. So, give us that rundown.
Ben Yelin: So, I assume that on the first day of my employment with the University of Maryland, I signed some sort of acceptable use policy for technology. I don't remember signing it, but it probably said, "Here's what authorized access is, as it relates to your employment." So, perhaps it allowed minimal incidental computer use, whatever that means. I'm gonna go ahead and admit, on this call, that my computer use on some days has not been incidental as it comes to watching sporting events or playing around on social media. So, when I first saw the case that I'm about to discuss, Van Buren v. United States, it piqued my interest because, at issue here, is whether, as a society, or as a federal government, we're willing to criminalize unauthorized access in a way that would be, frankly, very overbroad. So this is a case that actually went in front of the Supreme Court two weeks ago for oral arguments. It concerns an individual in Georgia, a Mr Van Buren, he's a Georgia state employee.
Ben Yelin: As part of his employment he had access to the state driver's license database. He had a friend who worked for the FBI, and that friend basically paid him off to say, I need you to search, you know, so-and-so for me, as part of my employment. So this exceeded his authorization to that network, or to that particular database. So Mr Van Buren was indicted and prosecuted, they actually went through a criminal proceeding. And he appealed his conviction, saying that it relied on an overbroad interpretation of the Computer Fraud and Abuse Act. So this is an act that dates back to the 1980s, and it was intended to protect against hacking. It criminalizes unauthorized access to networks or devices that are not your own, and it has this vague language about exceeding authorized use on devices or networks that you do have access to.
Ben Yelin: It is something that was brought up at oral arguments. So, what the justices said in their questioning is, they actually referred to all of these as the parade of horribles, so, what would happen if we had this over broad interpretation under the computer fraud and abuse act of what counted as unauthorized access. What the government was saying is, "If we don't have a very broad definition, then a lot of behavior that should be criminal will not be criminal. So, if there's somebody for example who works for the office of personnel management and they have access to somebody's social security information or address or other private information, if they exceeded their authorization on their network and searched for the ex-girlfriend or ex-boyfriend, if you had a more relaxed definition of unauthorized access, then that potentially could be allowed." So, it was deciding which slippery slope we don't want to go down, was the tenet of this case. [COUGH] Excuse me.
Ben Yelin: So in terms of what I think we can expect: to me, a majority of justices seemed more inclined to be sympathetic to Van Buren's position, or his attorney's position, that the government's definition really would be overbroad, that it would turn the Computer Fraud and Abuse Act into something that was not intended when it was drafted in the 1980s, and that's a prohibition on a lot of, frankly, very normal behavior that we all engage in. And Justice Gorsuch, he is a conservative justice, said, "This is part of a broader pattern of the federal government trying to pass [COUGH] excuse me, federal statutes to criminalize additional behavior." So, it's not just this statute, it's part of a broader pattern. Justice Sotomayor, one of the more liberal justices, seemed to agree with this line of reasoning as well. Justice Alito seemed a little bit more inclined to be sympathetic to the government's position.
Ben Yelin: He was talking about how the attorneys for Van Buren are saying, "We have other ways to criminalize people searching their ex-girlfriend's records when they're at a government place of employment. We don't need to make the computer fraud and abuse act over broad." What justice Alito was saying is, "I don't know what those statutes are and I would like to have some way of criminalizing that type of behavior." So, the bottom line is, if I had to guess and prognostication is always dangerous in this field...
Rick: That's what we do on this show, Ben, okay? That's what we do.
Ben Yelin: Even though we've been warned against it, I would guess that the majority of the court is gonna side with Van Buren. [COUGH] This is a very important decision because there's been a circuit split on this issue. So, depending on where you are in the United States, different rules apply as to how broad the Computer Fraud and Abuse Act is. So, it's good that eventually, in 2021, we'll get some finality on this. So, I just thought that was a really interesting case, something that could potentially have very profound impacts on all of us, people who are part of this field, or not part of this field.
Rick: So Jen, let's throw up Ben's poll question.
Merritt Baer: This is a perfect example of authentication versus authorization.
Ben Yelin: Right.
Rick: That's very good. [LAUGHS] Ben, we got a question from the chat room, this one's from, silk road rider, okay? When do you expect the Van Buren case to be decided?
Ben Yelin: That's a very good question. And it's another fool's errand to try and guess the timeline of the Supreme Court. The term ends at the end of June in 2021. Generally, decisions that are controversial take a little bit longer: you have to have the justices write the majority opinion, then you have to have dissenting justices weigh in, then the majority has to respond to the dissent, so in that sense, it could take a little longer. I would guess, by March or April, we get a decision on this, just because that's usually around the time frame for these types of high-profile cases. So, I don't know what you're gonna get first, a decision on Van Buren or the COVID vaccine. I'm frankly craving both, so, two great case [UNSURE OF WORD] together.
Merritt Baer: Ben, any personal guesses on how the new Justice Barrett will come out?
Ben Yelin: So, as somebody who's not as ideologically aligned with Justice Barrett, I thought she asked really intelligent questions during oral arguments but didn't really tip her hand as much. I mean, you'd think she'd be a little more sympathetic to the government's argument, just based on her political ideology on the court, but she was asking very skeptical questions of the government's attorney. So, I would not be surprised if she sided with Van Buren as well. It's always refreshing to see these cases where it's not strictly ideological. You have some vagueness in this federal statute and this is what the courts are for. They have to make a call on how to interpret this law, and the call is just gonna be incredibly important because it's gonna change the way we interact with our devices and our networks.
Ben Yelin: And if they did rule for the government, I think it could potentially have a chilling effect on a lot of the activities we do. We'd be much more afraid of being criminally prosecuted at the federal level for exceeding our authorization, for using our brother-in-law's Netflix password when we're not supposed to. And I think that could have a pretty profound impact. So, I realize I strayed a little bit from your question there.
Merritt Baer: Well, no, I mean, Ben, if we're leaning more towards prosecution, then we come back to the same systematic biases that play out, right?
Ben Yelin: You're absolutely right. So this was a great question that was raised at the oral argument. The government basically made the case that, well, we're not actually prosecuting that many CFAA cases: "It's not like we're going around the country looking for these. We're not gonna bust your grandma for going on Facebook on a device which she was not supposed to access. That's not what we're here to do." But what Van Buren's attorney said, and what the justices, I think, were very amenable to, is: you never know what prosecutorial enforcement is gonna look like in the future. You could have a justice department that prioritizes criminal prosecutions under the Computer Fraud and Abuse Act as a policy priority. And so, if you have this loaded gun, and you have this overbroad interpretation of unauthorized access, then whether these cases are being prosecuted now is immaterial to how they might be prosecuted in the future.
Merritt Baer: Well, and put differently, also, we already know that the law gets disproportionately applied to those who don't have resources for an expensive attorney, for example.
Ben Yelin: Absolutely, and that's why it's so interesting to hear Justice Gorsuch talk about the over-criminalization of federal statutes, that we have this broad pattern of using federal law to criminalize a lot of different things, in a lot of different domains. Traditionally that has not been the role of the federal government. I mean, I think Justice Gorsuch's ideological view is that criminal cases, in the vast majority of circumstances, unless they're concerning things that are inherently interstate, should be done at the state and local level. And I think that what you're getting at is part of the reason that he shares this view, which is that, if you have these tools that are overbroad, you don't know what justice department is gonna get a hold of them and whether they're gonna use these tools disproportionately against people who don't have the resources to defend themselves. And I think that's a major concern about the government's argument in this case.
Ben Yelin: That they're not gonna be prosecuting powerful people at Wall Street first who exceed their authorized access, they're gonna find people like Mr Van Buren, who's a state employee in Georgia, and prosecute him in federal court. So I think that point's very well taken.
Rick: So, Ben, from the chat channel, a couple of people have asked versions of this, but this is from cuddly goblin: do I need to change my behavior in anticipation of this decision? Are we gonna start seeing more work from this, do you think?
Ben Yelin: So, that's a great question. Right now, you don't have to change your behavior and, in all likelihood, no matter how the court decides, you are probably not gonna have to change your behavior, unless you think that your employer, or somebody else, is gonna be a complaining witness against you. This is one of those things where, even if it's a law on the books, it's still more likely than not that you will not get prosecuted. For the time being, you certainly don't have to change your behavior: this case is not gonna be decided for a while and, even in the jurisdictions where there is a broader definition of the CFAA, we are not seeing, right now, a parade of federal law enforcement going in and arresting people for exceeding authorized access.
Merritt Baer: Well, I would say, put it back to the question asker; what is your behavior on your device? [LAUGHS] Because there are probably some you should change but--
Ben Yelin: Yeah, I guess I should include the caveat, so to speak, which is not a plug for our podcast, that there are some illegal things that you should not be doing, that might get you prosecuted under federal law, some of the obvious things that we discussed in the parade of horribles from the government's perspective. If you are a state employee and you have access to HIPAA-protected public health information, you probably don't wanna look at somebody's blood test results and share those publicly, even though you have access to that particular database. But in terms of the other side of this, which is criminalizing things that most of us would not consider illegal, or most of us would never think would be illegal, I think that's the side where, at least for now, you probably don't have to be changing your behavior significantly.
Rick: I find that I'm of two minds on this, right? I am, let's say, a privacy advocate. Any time we wanna get government more out of our business, I think that's the right thing to do, okay? But, when I put my CSO hat on, I wanna see everything, okay? I want to see it all so I can help defend the enterprise. So, I am split on this, and Merritt, I wonder if you could help me try to bring those two ideas together.
Merritt Baer: I think, from what I understand of the court's reasoning here, it's interesting how criminal computer constitutional issues often bring together the far left and the far right. And you see this in child pornography cases, for example, which is where I started my career in security, one of those margins-define-the-center areas. You often see slippery slope arguments, and I'm just over the slippery slope: no, we can also just stop rolling when we want to. And I think that, in some sense, this is what we're supposed to do, right? We're supposed to have the conversation, as a community, about the values we want to live by and what constructs we wanna put around the use of the Internet. And I think justice [UNSURE OF NAME] gave a talk at a graduation, I don't know, ten years ago, look it up on the Internet, where they talk about how all of the Bill of Rights cannot possibly exist, all at once, in perpetuity; they are in tension with one another. So, for example, in the context of child pornography, your right to free speech is then outweighed by other interests, right?
Merritt Baer: So, what seems like an unwavering declaration, "no law shall abridge," then becomes tempered, because we're, like, whoa, we're human, we don't like this thing that has come up, and we have a compelling interest, and the government is probably the right entity to pursue this, all of which are questions, not foregone conclusions. So I think that this is the right process. We should have these conversations about the values by which we want to live and the ways that humans are using the Internet and the ways that we see promise there. And I think, you know, one of the dangers here is that we, sort of, submit to the "going dark," quote unquote, debate, where we say, "Oh, well what about the fact that bad people do it?" It's like the Oliver Wendell Holmes "bad man" foregone conclusion thing. And I just think that we should actually be looking at the ways that we want to create the world that we live in, because technology is human-made. And so, in a sense, by participating in this landscape, and, by the way, this is the part where I urge folks who are listening who are artists and architects, who are not yet a security person but want to be, or who feel like they don't fit the mold, that's a good reason to get in the mix.
Merritt Baer: Because we are creating this landscape, and I think the more of the stuff that life is made of that we bring in, as we look to the values we want to live by and the notions of security we value, the healthier and more robust it will be.
Ben Yelin: I think that's a great point. I mean, the last thing I'll say on it is, we run into this problem where the wheels of technology move so much quicker than the wheels of lawmaking. So congress passed this statute in the 1980s, and it's gone through very few revisions since then, for something that covers a topic that's changing constantly. In an ideal world, in my view, it would not be the supreme court that shapes the parameters of what unauthorized access means. That should be something that congress does. What I think the court recognizes, and what I recognize, is that congress has trouble doing things; it takes them a long time to revisit these statutes, especially when they have other priorities, and I just think it's an institution that moves at the speed of molasses. But ideally, the process would be that we do this democratically: as a society, we should decide, as you say, what these parameters should be, and our elected representatives should represent those views.
Rick: Alright, we're gonna have to leave it there, Ben, good stuff as always. Let's move on to the third topic, which is my topic. I've chosen, for my particular subject, the most prepared-for non-event of the last two decades: the cyber election security topic. And you all know the 2016 presidential election, when we all learned that Fancy Bear, or APT28, or Unit 26165, the Russian GRU outfit, penetrated the democratic congressional campaign committee and the democratic national committee in order to find embarrassing documents to support a massive influence operation. So we were all concerned that that kind of thing was gonna escalate in this 2020 presidential election, not just from Russia, but from all of our competitors on the international stage.
Rick: So, after the elections this past month, it looks like nothing happened, okay? It is the biggest non-event since Y2K back in the 1990s, when we all saw a potential disaster on the horizon and spent bucketloads of resources to prevent it, which we did, and it looks like that's what's happened here. So, first, I just wanna pause, just for a second, okay, and give a tip of the hat to all the city, county and state officials who worked their backsides off, right? With no resources, okay? In the middle of a pandemic, to make sure that the apparatus was resilient. I am really awed. I really had second thoughts about this after the 2016 election. I thought we were going down, but they did a phenomenal job. And then, second, another tip of the hat for Chris Krebs and his Cybersecurity and Infrastructure Security Agency, CISA, for coordinating that effort. Okay, I've been involved in coordinating efforts across peer groups before, on a much smaller scale, and that's a really hard job to do.
Rick: I can't even imagine doing it at the national level, right? So, the bottom line is, as far as we know today, there have been no significant successful cyber attacks aimed at the US election apparatus for the 2020 election. Nor has there been electronic fraud, and the reason we know that is, 95% of the states' systems collect paper records and they all match the electronic tabulation. So, for doing his job, Chris Krebs gets fired, okay? Which I guess is normal for this administration. But there have been influence operations. Not much from our international adversaries, but from within, we have been tearing ourselves apart. You know, the bottom line is that our nation-state enemies didn't have to really attack us in cyberspace to do damage; the best course for them was to wait for us to do it to ourselves, which we have, right? So, my question to you all then, since we're talking about cyber security and it looks like we did a great job shoring up those systems: are we done?
Rick: Did we do such a great job on protecting our national election infrastructure that we don't have to do anything else going forward? Merritt, what do you think about that?
Merritt Baer: Oh gosh. So, I mean, I should give the disclaimer that most of my conversation today has been in my personal capacity and this is not any kind of formal Amazon opinion. I mean, I think that election security is just like other security challenges: it's a process, right? And one of the challenges is dealing with infrastructure, which we all know to be a challenge across a lot of things, from other elements of infrastructure. One of the challenges is coordination among distributed, ugly systems that are hard to update and coordinate and automate. One of the challenges is the perception and the marketing and conversations that we're having in living rooms and, like you say, the kind of ways that we saw influence campaigns; they didn't have to be saying, "Go Russia," they could be sowing seeds of racial distrust.
Merritt Baer: In a sense, though, I see these as lucky problems to have, because no-one in China gets to discuss whether or not they like the president, and no-one in Russia or North Korea gets to have a conversation about what job they do. You just get conscripted, and there's very little line between government and civilian, and very little line between civilian government and the military, or none at all. And so, I think that, yes, these elements of our society, the fact that we have free speech and tension between companies and the free market, they were weaponized against us, but what's the alternative? I think that it will be an ongoing process, and I'm really happy that we are feeling more optimistic.
Merritt Baer: I think the collective consciousness that came out of this was a sense of renewed faith in legitimacy, and I personally feel optimistic about a more smooth-sailing future. But I'm still a security person, so I picture all the next increments and the ways that those will ricochet around our infrastructure and also create important work that we need to do to protect more vulnerable populations. I always wanna take the next step to ask: how will this disproportionately impact communities that are already vulnerable? And I think that's a real challenge in security. So, I see it as a process.
Rick: So Jen, let's put up the third poll question, and while we're doing that, Ben, let me ask you this question, right? Since we were so good this time, do you think there's any impetus now to build a national online voting system? I know we've all been talking about it and how hard it would be but, just the will to do it, do you think there's anything there?
Ben Yelin: So, I actually think one of the advantages of our current system is that it is federalized, meaning we have 50 separate state election systems, 50 separate databases, and while coordination and integration can be good, it also means it's not as easy for our foreign adversaries to take down our entire electoral system. They have to do it, if not in 50 states, in a handful of important swing states that could sway the election. So, I actually think that's one of the advantages of having the elections largely carried out at the state level. We saw, particularly in 2016, where some foreign adversaries got into state voter registration records, etcetera, that it's much easier to patch that up at the state level, with coordination from the Chris Krebses of the world, than it is if we had a national system that were breached and it cast doubt on all 538 electoral votes.
Ben Yelin: So I think one of the blessings, perhaps, and maybe this is not something that was intended is that we've kind of put our eggs in a bunch of different baskets because this is a system that's so federalized.
Rick: So, the very definition of resilience, okay? But others would say, Ben, that that is security through obscurity, right? That we've buried all these voting systems all around so that nobody can understand how it all works, let alone us, right? So I'm not sure that's the best way. But Merritt, let me come back to you: there are all kinds of arguments against allowing people to vote online, because of bad guys. Do you think there's any compelling reason for us to try to push that for the next presidential election?
Merritt Baer: Sure, yeah. So, I've talked with a lot of smart election security experts and they say, right now, we are not there. So, I believe them. But I think at some point we will be, and then there will be a lot of things to consider, like: how do we identify ourselves? And how do we verify that we are who we know each other to be? And how do we verify the integrity of the vote as cast? None of these are new questions, they're just in a new form, and I think that we bank online, we interact with our doctors online. The pandemic has really forced us to be more tactical about the ways that we can do things remotely, but I don't think this is a new problem. We have known for a while that we want to have the capability for remote work and for remote social interaction, big brother thing aside, and this is a good thing. It's good for folks with disabilities, it's good for folks who have accessibility issues because of caretaking, which, by the way, are disproportionately women.
Merritt Baer: It's good for access, and I have faith that this will bring about maybe a more healthy democracy if we can make it easier for folks to participate. We saw these lines folks had to wait in during a pandemic to go vote in person, and I just envision a world where we can have, as you know, Rick, the kind of defense in depth and really intelligent use of canaries and other sophisticated tools to have a really good estimation of our confidence in the security of our voting. And I think, yes, it's just a matter of time before we will be there.
Rick: So Ben, there have been a number of court cases working their way through, and they've all lost momentum. Is there another court case on the horizon that is potentially something we should worry about?
Ben Yelin: I think the Kraken has been released all over the country in terms of court cases. There are a couple of outstanding cases, because it's like whack-a-mole: they'll lose, then they'll keep filing additional cases. They filed a case in New Mexico, a state President Trump lost by something like 300,000 to 400,000 votes. What we know is that the supreme court has been extremely hesitant to weigh in on any of these cases so far. They've weighed in on two, or declined to weigh in on two. There was the case brought by a couple of Pennsylvania legislators, which would have voided the votes of millions of Pennsylvanians who voted by mail, and the supreme court said, "We're not touching that." And there was a similar case, this one probably even more extreme, brought by the Texas attorney general, signed by 18 other attorneys general and a majority of the house republican conference, saying that Texas suffered a justiciable injury because other states allowed for uncontrolled mail-in balloting, thus diluting the votes of Texans.
Ben Yelin: And it seems that nine out of nine justices decided that was not a justiciable issue, and at least two justices weighed in to say that, while we would've heard the case for procedural reasons, we wouldn't have granted any of the requests for relief, which is a very diplomatic way of saying that this case is crazy. So I would predict that we're at the end of the line here for these judicial cases. There have been a lot of instances where you've had judges that are more amenable to hearing the arguments. So they'll say, "Alright, I'll give you three hours." We saw this in Wisconsin with a federal Trump-appointed judge, who said, "I'm willing to hear your evidence." And the president's team, I think with Rudy Giuliani at the head of that team, basically said, "Well, we have nothing else to say, we're just gonna stipulate the facts that are already in the record."
Ben Yelin: So, I think that this process has run its course. But I guess I should never say never.
Rick: Alright. Well, like you said, I don't think we're done talking about that particular one, so let's just open it up to the crowd for general-purpose questions. Anything that we've talked about during the conference is up for grabs, or anything on your mind, please ask it in the chat room, okay? I've got the first one here, for all panelists, okay? This is from Willy Vasquez, he's working on his PhD at the University of Texas at Austin. Here's his question: are there any new insights from recent major cyber events, or did these happen because people ignored existing recommendations? I guess he's saying, we've had all these bad attacks, is anything new going on here or is this all more of the same? What do you think, Ben?
Ben Yelin: This is a little outside my niche of expertise, and I've just read a little about this Firestick incident, but I think there might be an element of lack of leadership at the department of homeland security and CISA that might be to blame for that particular cyber incident. But, in a broader sense, I'm probably not the best person to shine light on that subject.
Rick: Any thoughts from the Amazon side Merritt? Anything you can share with us?
Merritt Baer: The FireEye, not Firestick, Ben. [LAUGHS]
Ben Yelin: Yeah, I don't know why I said Firestick.
Rick: I think they might change the name to Firestick after all this is over.
Merritt Baer: Nothing to do with us. So first of all, Willy, congratulations, and feel free to put doctor before your name when you get your PhD. But, I think that it seems silly to do this over and over again, but we do it. I personally love to look around the next corner, and Rick and I are sitting here geeking out over malicious hijacking of black boxes, which we now call no-boxes, but you're right, ultimately a lot of this is just basic hygiene. And I think that those next-gen attacks are indicators that we can look to, like harbingers, that we might want to think about and also just be aware of. One of the reasons I wanted to talk about ML security is that a lot of these are invisible to us as practitioners, and I think that that feels insecure in a sense.
Merritt Baer: There are certain things that computers do better. So, I think, yeah, there's no replacement for cyber hygiene: patch your shit and get off your old Windows machines and, yeah, sorry, I guess, parental advisory for all the seven-year-olds listening to this, but yeah, do all of it. Absolutely, there's no replacement for that, and the shops that are really well coordinated have a mechanism for that. Good intentions are not enough, and this is why we created Patch Tuesday as a community, right? No-one feels like patching, it might break things, and the longer you go, the worse it gets. It's like dieting, or something; you're like, "What's 5 lb now?" But that's not the right approach, right? You know, it's easier to lose 5 lb than 30. Start somewhere, start doing it.
Merritt Baer: And by that same token, start automating more. We have a phrase at Amazon, get the humans away from the machines. There are things that machines are really good at doing. Get your processes to a point where they can be mechanized and that will be an indication of your maturity as a security shop. And then, you can get to the higher level stuff, the data about your data, the insights about your insights. These are higher level things and you're right, Willy, at some level, a lot of what we see day to day is really just harkening back to the need to do the basics.
Rick: Well, I'll jump in here too. Willy, I will join with Merritt in congratulating you on pursuing your PhD, and this is one of my pet peeves, something the security community doesn't do very well. We know, pretty much, what all the bad guys are doing. I'll direct your attention to the MITRE ATT&CK framework; they've listed all the known tactics, techniques and procedures for every known bad guy in the world. Now, that's probably 95% complete, but 95% is really good. If you could put prevention controls in place for all of that 95%, you're not gonna get attacked, okay? Guaranteed. But we have trouble doing that, so we do everything manually, like Merritt said. I highly recommend we all figure out how to automate that process, but check out the MITRE ATT&CK framework, that's where the answers are. Even if a bad guy comes up with something new that the framework hasn't seen yet, that's just one thing, okay?
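[Editor's note: Rick's point here, that blocking the known techniques defeats the adversary even when one step is novel, can be sketched as a toy model. The technique names below are illustrative placeholders, not drawn from the actual ATT&CK data.]

```python
# Toy model of an attack chain: every step must succeed for the
# intrusion to succeed, so a prevention control anywhere in the
# chain defeats the whole sequence.
# (Step names are illustrative, not real ATT&CK technique IDs.)

attack_chain = [
    "phishing",           # initial access
    "macro_execution",    # execution
    "credential_dump",    # credential access
    "lateral_movement",
    "exfiltration",
]

# Controls we have deployed; each one blocks a known technique.
blocked_techniques = {"credential_dump"}

def chain_succeeds(chain, blocked):
    """The adversary succeeds only if no step in the chain is blocked."""
    return all(step not in blocked for step in chain)

print(chain_succeeds(attack_chain, blocked_techniques))  # False: one block breaks the chain
print(chain_succeeds(attack_chain, set()))               # True: no controls, chain completes
```

Even if "phishing" were replaced by some brand-new zero day the framework hasn't catalogued, the chain still fails at the blocked step, which is the argument for covering the known 95%.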
Rick: Adversaries have to string a bunch of things together to be successful; just because they have some new zero day, or some new piece of malicious code, that does not guarantee their success. If you have blocks in place for the other things within the string, then you're gonna be successful, alright? So, this thing is, like Merritt said, it's been going on for years and years, and it's tough to get this right. Let's move to another question. This one is from Joe O'Brien, he works for Alchemy, and his question is: how does the CISO measure security? How is risk quantified? Merritt, I'd be interested in how you talk to your customers about this when they're thinking about going to Amazon and they are worried about cloud security. How do you tell them to get their hands around that kind of number?
Merritt Baer: Well, one of the benefits of going to cloud is that you have a really holistic view of your network, right? You're no longer worried about a rogue server under someone's desk. You can actually know, and this is why, I think, I [UNSURE OF WORD] over the automated reasoning group, because it's such a luxury to be able to deal with higher level math instead of Joe from IT who walked out with a thumb drive. You can think more in terms of the permissions on a grand scale, and because of that, you can automate, and because of that, you can deploy at scale and you can do security at scale. And so I think we urge our customers to get comfortable with it. To get over the traditional security mindset, which is that every change is a risk. You know what? Staying in place is a risk. And there's that [UNSURE OF NAME] quote that says, "One day, the risk of staying in the bud was greater than the risk of flowering."
Merritt Baer: Right? Start to bloom. I think that growing up will take some effort, and you'll have to get your executives to buy in on the fact that there is a different way of approaching the way you know about your risk, but it's a clearer view of all your assets and all the things you can know about your network. And, by the way, security is a reason to move to the cloud, but it doesn't have to be the driver; the driver should be the bottom-line thing that your industry does. So, if you're a health care institution, you're in the business of health care. This will just free you up to feel confident that your security can be there, but, ultimately, entities move to the cloud because it gives them economies of scale in their core business. And I think that that, ultimately, is the driver here, and no-one would do it if they couldn't feel like it could be secured. But the security team being a part of the innovation engine is just a real boon to your organization.
Merritt Baer: And getting your executives to buy into that vision will buy you the use of metrics that reflect the business bottom line, instead of how many times you've been breached.
Rick: That is a great place to end this conversation, good answer, Merritt. So, ladies and gentlemen, we are at the end of this thing. On behalf of my two colleagues, Ben and Merritt, thank you guys for participating in this. If you all liked what you heard, you can get more of this over at the Cyberwire dot com, on the pro side and the ad-supported side, so come check that stuff out and come visit us at the next Cyberwire quarterly analyst call. Thanks everybody, we'll see you next time.