Today's show features an extended interview with Martin C. Libicki. He holds the Maryellen and Richard Keyser chair of cybersecurity studies at the U.S. Naval Academy. His most recent book is Cyberspace in Peace and War. Topics include the differences between cyber war and cyber espionage, the possibilities of a cyber Pearl Harbor or Cyber 9/11, and the risk of nations overreacting to cyber attacks.
Dave Bittner: [00:00:01:06] Just the other day my ten year old son came to me and said, "Daddy, do you think for Thanksgiving this year instead of cold turkey sandwiches we could have a real turkey?" And I said, "Son, if enough people sign up at patreon.com/thecyberwire, maybe we can have a real turkey." I'm kidding, of course, we don't have cold turkey sandwiches for Thanksgiving. We have grilled cheese. Patreon.com/thecyberwire.
Dave Bittner: [00:00:32:16] Our podcast team is taking a break this week for the upcoming Thanksgiving holiday. But don't fret, we've got a brand new extended interview for you today. And you can still get your daily dose of cybersecurity news on our website, thecyberwire.com, where you can subscribe to our daily news brief and get all the latest cybersecurity news. Stay with us.
Dave Bittner: [00:00:57:03] A few words from our sponsors at E8 Security. If you've been to any security conference over the past year, you've surely heard a lot about artificial intelligence and machine learning. We know we have. But E8 would like you to know that these aren't just buzz words. They're real technologies and they can help you derive meaning from what an overwhelmed human analyst would see as an impossible flood of data. Go to e8security.com/cyberwire and let their white paper guide you through the possibilities of these indispensable emerging technological tools. Remember, the buzz around artificial intelligence isn't about replacing humans, it's really about machine learning, a technology that's here today. So, see what E8 has to say about it. And they promise you won't get a sales call from a robot. Learn more at e8security.com/cyberwire. And we thank E8 for sponsoring our show.
Dave Bittner: [00:01:55:01] My guest today is Martin C. Libicki. He holds the Maryellen and Richard Keyser Chair of Cyber Security Studies at the US Naval Academy. His most recent book is Cyberspace in Peace and War.
Martin C. Libicki: [00:02:08:10] Cyber war is something that I define as the systematic use of cyberspace operations, notably cyber attack, for political advantage. In much the same way that war can be described as the systematic use of armed force for political advantage.
Dave Bittner: [00:02:27:12] It seems to me, particularly our leaders have been hesitant to draw lines in the sand when it comes to cyber war. They don't want to define exactly what it is.
Martin C. Libicki: [00:02:38:24] I can understand that. Because, if you start defining what it is, at least in our political tradition, you have an obligation to actually carry it out. I think there's some levels of confusion. We tend to – and by "we" I mean pretty much everybody – tend to confuse cyber espionage, which is a legitimate state activity subject to a few caveats, and cyber attack, which is not a legitimate state activity. In cyber espionage, I'm reaching into your system and I'm grabbing information, maybe once, maybe on a continuous basis. In cyber attack, I've affected your ability to use your systems. I can do so by making it difficult for your systems to connect to the Internet, by making it difficult for your systems to operate at full capacity, or operate at all, by changing the instructions or the data that your system holds.
Martin C. Libicki: [00:03:33:11] In other words, it's sort of like the difference between espionage and attack. Espionage, I'm just learning something about you. Attack, I'm making it difficult for you to do certain things. I am destroying, or at least delaying your use of things that you own. And I think cyber attack is best defined as something similar.
Dave Bittner: [00:03:51:03] One of the chapters of your book is called, "What the government can and cannot do". And it begins by asking whether the government should do anything at all. Can you take us through that argument?
Martin C. Libicki: [00:04:00:23] I was educated as an economist, and one of the tenets of economics basically says that if the private market can do something well, the government shouldn't step in, because it's not likely to improve matters. If you talk about protection, there are all sorts of risks that organizations have. And by and large, organizations tend to be competent to assess the risks, and then decide what mitigations to employ in order to make themselves better off.
Martin C. Libicki: [00:04:29:05] For instance, if one of the risks is weather, we have roofs, we have wind guards, we have all those sorts of things to protect us from weather. We have many devices, such as locks and fences to protect us from crime. So, the question is, to what respect is cyber protection going to be any different from all the other protections against risks which we rely on other folks to employ?
Martin C. Libicki: [00:04:50:19] And part of the answer is that if you're an organization and the consequences of having insufficient cyber security are consequences to you (for instance, I can't do things, or I can't do things reliably, or I can't keep secrets), then I would argue that it's pretty much up to you to protect yourself. Just as it would be up to you to protect yourself in most of the other realms that we talk about.
Martin C. Libicki: [00:05:18:11] Now, that being so, there are a number of useful things that the government can do. They could sponsor research and development; they could collect intelligence and provide it in terms of threat assessment to organizations; it can, and should, prosecute cyber crimes. If the cyber crimes are actions of states, there are certain state activities that can be carried on, such as sanctions, and the list can go on and on. But, essentially, all of these are adjuncts, all of these are aids to an organization, or an individual's responsibility to protect their own system.
Martin C. Libicki: [00:05:53:13] Let me just add one other thing. One of the reasons that I think it makes sense to focus the responsibility on the organization or the individual, is that the organization or the individual are the ones that have the best understanding of their own system and their own approach to cyber space.
Dave Bittner: [00:06:11:01] It's interesting, because we think of so many of the cyber adversaries as coming from overseas, and so I could understand some people thinking that the federal government had a responsibility to, for lack of a better description, you know, protect our virtual borders.
Martin C. Libicki: [00:06:28:22] Yes, I'm aware of that. If you try to take that in the most literal way possible, you start thinking of a gigantic federal firewall. And then you start thinking of all the problems with firewalls that they have in the first place. They can't catch things they haven't seen before; they're not very good at protecting against zero days; they're not very good at defending against attacks that come in through hardware; they're not very good at protecting against attacks that are already in your system before the firewall goes up; they're not good against the insider threat; they're not good against the class of attacks in which the inputs are legitimate, but the behaviors aren't expected, such as structured query language injection or SQL injection.
Martin C. Libicki: [00:07:08:02] And it turns out to be no panacea. The US government is spending way more than half a billion dollars a year on a huge firewall just around the .gov domain. Bad folks still get through, and it's costing us an arm and a leg. And if you extrapolate that to the entire country we're talking tens of billions of dollars, which is frankly unaffordable.
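To make the SQL injection point concrete, here is an illustrative sketch (an editorial example, not from the interview; the table and inputs are invented). The input is a perfectly legitimate string, so a firewall has nothing obviously malicious to catch, yet it changes the meaning of the query:

```python
import sqlite3

# In-memory database with a toy users table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice-secret')")
conn.execute("INSERT INTO users VALUES ('bob', 'bob-secret')")

def lookup_unsafe(name):
    # Builds the query by string concatenation -- the classic mistake.
    query = "SELECT secret FROM users WHERE name = '" + name + "'"
    return conn.execute(query).fetchall()

def lookup_safe(name):
    # Parameterized query: the input is treated as data, never as SQL.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)
    ).fetchall()

# A well-formed input that nonetheless rewrites the query's logic:
payload = "x' OR '1'='1"
print(lookup_unsafe(payload))  # leaks every secret in the table
print(lookup_safe(payload))    # returns nothing, as intended
```

The fix, parameterized queries, lives in the application, not at the network boundary, which is exactly why a perimeter firewall can't defend against this class of attack.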
Dave Bittner: [00:07:31:19] Whose responsibility do you maintain it should be to fight these cyber wars? Is it the traditional military? Or other parts of government?
Martin C. Libicki: [00:07:39:22] It's got to be both public and private. In other words, the private enterprise buys the cyber security products and services, attends to its own architecture, attends to its own authentication mechanisms, and carefully segregates the things that are high risk and more protected from the things that are low risk and less protected. And the government basically uses tools of statecraft to discourage other countries. It uses the tools of criminal law in order to target individuals who might be part of the cyber war effort. And I mention this because a lot of actions that take place by nation states are actually actions that take place by criminals who are working with a nation state. I'm thinking in particular of Russia.
Martin C. Libicki: [00:08:27:14] The federal government, as I mentioned, can use intelligence it gathers to inform. The federal government can improve the basic cyber security through indirect methods, such as research and development and standards. The federal government can encourage the growth of a talented cybersecurity labor pool. There are a lot of tools that the federal government can employ.
Martin C. Libicki: [00:08:49:17] But to actually go out and defend the electric power grid is something that the federal government does not really have the capabilities or the information to do. One of the tools of statecraft is, if there's no other way to bring the problem down to a manageable level, to threaten to do unto others as they are doing unto us. Sometimes that works well and sometimes it doesn't work well. It depends who those others are.
Martin C. Libicki: [00:09:14:15] I used to joke about this almost 20 years ago. If the North Koreans decided to take down the New York Stock Exchange, there would be no point in us threatening to take down the Pyongyang Stock Exchange because the North Koreans have forgotten to establish a stock exchange. Similarly, if your threat is from cyber terrorism, the threat to carry out hacking attacks on terrorists is probably going to be less than fully persuasive.
Dave Bittner: [00:09:40:12] You have a chapter on attribution. One of the things that caught my eye was, you touch on the subject of when can countries be blamed for what started within their borders? We know attribution is difficult. And this notion of perhaps a lone person could, for example, cause damage to our power grid, and the asymmetrical nature of that strikes me.
Martin C. Libicki: [00:10:08:23] There is a great deal of asymmetry, but I think that over the last ten years it's become relatively more difficult for lone individuals to do serious damage. Most of what we're seeing that is of serious concern is carried out either by countries, or nation states as we sometimes call them, or by well organized criminal organizations.
Martin C. Libicki: [00:10:31:16] Let us say that we've found a cyber attack coming from Peru. I like to use Peru as an example, because it never seems to bother anybody, and I have very few Peruvians in my audience when I say this. We can't say to Peru, "Okay, you did it." Just as we can't necessarily say to Mexico, "Okay, you did it, because our drug economy's affected by the Mexican cartels." What we can ask Peru to do, however, is join us in targeting those folks who have carried out the attack. Which is a combination of asking Peru to use its investigative methods to figure out what's going on, and to cooperate with the United States as the United States uses the investigative methods that it owns itself. Finally, as we continue going through this process, and continue getting more information, and we continue getting necessary cooperation from the Peruvian government, at some point we may say, "Well, Joe did it," and we'd like to bring Joe up for trial. Or we'd like to have Joe brought up for trial in Peru, depending on their laws and customs.
Martin C. Libicki: [00:11:41:10] At that point, I think it would be a good idea to expect cooperation from the Peruvians. Countries vary in how much cooperation they give to the United States on this issue. For instance, we have a great deal of cooperation with the European countries, particularly those in NATO, and the neutrals as well. We don't get any cooperation from Russia. We sort of get some cooperation from China, because we put them on notice that their state is responsible if they don't. We get zero cooperation from North Korea, and zero cooperation from Iran. In which case, you've taken a cyber crime issue and you have to ask yourself, does this rise to the level of something we can call a national security issue? Or at least something whose concern we can elevate? And that is an intensely political question.
Dave Bittner: [00:12:30:20] What about things like the Tallinn Manual?
Martin C. Libicki: [00:12:33:19] The Tallinn Manual, I think, is a very good rundown of the applicability of the existing laws of armed conflict to cyberspace. The people who put it together are competent. I think they know what they're doing. But the problem is that when you go through the Tallinn Manual, you find out that a lot of the key questions about cyberspace aren't covered by the Tallinn Manual at all.
Martin C. Libicki: [00:12:56:06] For instance, the Tallinn Manual basically says that cyber espionage is acceptable state activity. That no country has a basis within international law to object to cyber espionage. Why is this true? Because we have likened cyber espionage to traditional espionage, for which there are no treaties whatsoever. Actually, for which there is no international law whatsoever. There may be bilateral treaties. But, one of the things that the United States has been insisting on since 2010 is that cyber espionage pursued for the purpose of economic advantage is not considered legitimate state activity. We pursued this with the Chinese, and in 2015 we got an agreement with the Chinese in which they promised to cut it out. In fact, we took the agreement into the group of 20 nations, and they all said, "Yes, this is a very good idea."
Martin C. Libicki: [00:13:47:14] So here we have our first exception. And our first exception is that cyber espionage is okay unless you're using it for commercial advantage. Now, I would maintain that the United States, at least under the Obama administration, came close to a second norm. Which is to say that cyber espionage is okay, unless you use it to help criminal activity. After the OPM hack, there were a lot of concerns in the United States that the information would be sold on the black markets. And, if you recall, OPM's palliative, after they were hacked, was to offer everybody credit monitoring. As it turned out, in all likelihood, the information was stolen for espionage and counter-espionage purposes, and there is no evidence that whoever took it – which is to say the Chinese – actually used it for criminal enterprises. If they had, in an alternative world in which the Chinese had sold the information on the criminal markets, I think we would have objected very strongly to the Chinese.
Martin C. Libicki: [00:14:48:01] But, there are other countries for which their association with criminality is very troubling. One of them is Russia. If you recall, about a year or so ago, we indicted four Russians for complicity in the Yahoo! hacks. Two of them were private citizens; two of them, however, were employees of the FSB, part of the Russian intelligence community. And with North Korea, it's probably not even embarrassing to them that their country goes out and steals information from people. So that's norm number two: you can't use cyber espionage for criminal purposes.
Martin C. Libicki: [00:15:23:24] Now, norm number three, which we're sort of feeling our way towards, has to do with the relationship between cyber espionage and political activity. This is the basis under which we retaliated against the Russians for their role in the DNC hack. Because, if you remember, the basis of that was that they carried out cyber espionage on the Democratic National Committee, and then published the material. If they had simply carried out cyber espionage on the Democratic National Committee, I don't think we would have had any cause to raise the issue with the Russians, because the Russians would have said, correctly, "You do similar things." Countries examine what other countries have in their secret spaces. But when they put it out on WikiLeaks and DCLeaks, they were turning the information into unwanted political influence.
Martin C. Libicki: [00:16:12:05] The Tallinn Manual says that cyber espionage is fair game. But you can see the United States moving towards norms in which we say, "No, there are a lot of exceptions here." It is not always fair game. You have to play under certain rules. But these rules are in no way codified under international law, because what happens when you move from regular espionage to cyber espionage is in many ways several orders of magnitude difference in the effects you can produce.
Dave Bittner: [00:16:39:15] We'll have more of my conversation with Martin C. Libicki after a short break.
Dave Bittner: [00:16:48:08] Now, I'd like to tell you about some research from our sponsor Cylance. Good policy is informed by sound technical understanding. The crypto wars aren't over. Cylance would like to share some thoughts from ICIT on the surveillance state and censorship, and about the conundrum of censorship legislation. They've concluded that recent efforts by governments to weaken encryption, introduce exploitable vulnerabilities into applications, and develop nation state dragnet surveillance programs will do little to stymie the rise in terrorist attacks. These efforts will be a detriment to national security and only further exhaust law enforcement resources and obfuscate adversary communiques with a massive cloud of noise. Backdoors for the good guys means backdoors for the bad guys. And it's next to impossible to keep the lone wolves from hearing the howling of the pack. Go to cylance.com, and take a look at their blog for reflections on surveillance, censorship and security. And we thank Cylance for sponsoring our show.
Dave Bittner: [00:17:54:01] In the foreword of your book you talk about how, back in the 90s, I believe, you didn't really anticipate that industrial control systems would be hooked up to the Internet. As you look towards the horizon now, what do you suppose we're going to see in the next 20 years?
Martin C. Libicki: [00:18:12:03] People talk about the Internet of Things. People talk about the rise of Artificial Intelligence. Both of them create more scope for cyber mischief. The Internet of Things creates scope for cyber mischief, because a hacker can make things go haywire. The everyday concern is that if you haven't guarded your Internet of Things devices and they're connected to the Internet, they can be an access point to your entire network. I think there was a story several months ago about a corporation that got hacked by people who accessed its fish tank.
Martin C. Libicki: [00:18:46:16] The rise of Artificial Intelligence has an unanticipated and under appreciated effect on cyber security, because what it does is create new forms of vulnerability. Think about an element of Artificial Intelligence as taking inputs from the outside world and turning them into decisions. As a programmer, you hope that your Artificial Intelligence in fact does this correctly. But inevitably, there are going to be inputs that cause unexpected outputs, even if the odds that these inputs come up accidentally or randomly are very small. If I'm an adversary and I want to mess with your system and you're using Artificial Intelligence, I'm going to look for inputs into that system that make your own system behave in ways you didn't expect, and in ways that are harmful that you didn't expect.
Martin C. Libicki: [00:19:38:08] What Artificial Intelligence does is that it creates a new set of vulnerabilities, not necessarily that allow what's called remote code exploitation, but ways to make a system go haywire. Ways of making a system misbehave in serious ways.
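As a rough sketch of the idea Libicki describes (an editorial illustration; the model, weights, and inputs are all invented), consider a toy linear "anomaly detector." An adversary who knows, or can probe, the model's weights can nudge each input feature just enough to flip its decision, without triggering any remote code execution:

```python
# A linear detector with fixed (pretend-learned) weights.
weights = [0.9, -0.5, 0.3]
bias = -0.2

def score(x):
    # Weighted sum of features plus bias.
    return sum(w * xi for w, xi in zip(weights, x)) + bias

def is_malicious(x):
    # The model flags anything with a positive score.
    return score(x) > 0

benign = [0.1, 0.8, 0.2]  # an input the model scores as benign

# The attacker nudges each feature in the direction that raises
# the score -- a small, targeted change to an otherwise valid input.
epsilon = 0.4
adversarial = [xi + epsilon * (1 if w > 0 else -1)
               for xi, w in zip(benign, weights)]

print(is_malicious(benign))       # False
print(is_malicious(adversarial))  # True: small change, flipped decision
```

Real attacks on learned models follow the same logic at higher dimension: search the input space for points where the model's behavior diverges from what its operators expect.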
Martin C. Libicki: [00:19:55:22] The other element of Artificial Intelligence is that if machine learning becomes important, the tried and true method of taking a badly infected system and restoring it to factory conditions isn't going to be a costless one anymore. Because, if you return all the way to factory conditions, you lose all the learning that you got in the meantime. I think Artificial Intelligence is going to have a very serious effect on cyber security. Now, that's the bad news. The good news is that I hope people are getting conscious enough of cyber security to be able to make intelligent decisions about what to connect to what, and who to connect to what kind of process.
Martin C. Libicki: [00:20:34:11] Furthermore, I'm very encouraged by the iOS model of cyber security. iOS as a platform is at least an order of magnitude harder to get into than Android, and probably two orders of magnitude harder to get into than a personal computer, because it's got an architecture which does not absorb third party code very easily, except down some narrow paths. When I take a look at iOS, I'm not suggesting that we all do everything on an iPhone, but I am suggesting that there are techniques that we could use which, at a small or modest discomfort, can actually make us considerably more secure.
Dave Bittner: [00:21:13:12] You know, from time to time you hear people bring up the notion that perhaps we'll have some cyber Pearl Harbor, or a cyber 9/11. Do you have any thoughts on that?
Martin C. Libicki: [00:21:22:13] I would make a differentiation between a cyber Pearl Harbor and a cyber 9/11. Pearl Harbor was an event that took place in the military. And Japan's motive in trying to disable the Pacific fleet was to allow them to be able to conquer the countries of Southeast Asia without US interference. And it by-and-large worked. We were unable to get engaged in much battle until, basically, the Coral Sea, which was well after Japan had conquered most of the countries in Southeast Asia that they were interested in. I think something like that is quite possible, but it's going to be a function of the geo-strategic decision at the time. So, I think a cyber Pearl Harbor is one of those threats that the Department of Defense should take seriously.
Martin C. Libicki: [00:22:09:04] But, a cyber 9/11 is a different animal. A cyber 9/11 would be something like taking down the entire grid at once. Cyber 9/11 is implausible for two reasons. One, particularly when you're dealing with the United States, you're dealing with a very heterogeneous infrastructure, in which a lot of things have to go wrong at once in order to have a geo-strategic effect. And the other is, you don't have an encore. Okay, I took down the US power grid. Now what? What good did that do me? What did that allow me to do that I couldn't do before? With a cyber Pearl Harbor you can answer that question, but with a cyber 9/11 there is no good answer to that question.
Martin C. Libicki: [00:22:46:14] Now, what do I see as the two possible futures if the cyber world gets dark? One of them is what I'd call a Cyber Three Mile Island. In other words, it's a cyber event that convinces people that the way they've been going down the road in cyber security isn't going to work very well any more. In other words, as with nuclear power, we took a look at Three Mile Island, we stepped back, and we didn't build a new nuclear power plant for well over three decades. I can imagine a Cyber Three Mile Island in which we say, the architectures that we've been using for designing systems and building systems and relying on systems have a serious flaw, and we've got to step back and consider how we're going to become dependent on computers, and on what terms.
Martin C. Libicki: [00:23:32:00] The other possibility comes from the NotPetya incident, which was enormously expensive. And because the hackers were fairly clever, it didn't get nearly as much notice as WannaCry: it came right after WannaCry, and it looked like another ransomware attack, but in fact it wasn't a ransomware attack at all. It was a very disruptive and destructive attack on particular industries, almost randomly chosen. It cost several billion dollars, and that's just to the handful of companies we know about, not the ones that we don't know about.
Martin C. Libicki: [00:24:06:07] In other words, if you think of a cyber 9/11, you think about people going after the hard targets of society, the critical infrastructure. But if you think of a NotPetya, you think of people going after the office automation and data processing parts of society, which aren't nearly as hardened as our infrastructure is, and are therefore subject to a lot more damage.
Dave Bittner: [00:24:27:16] Our editor here at the CyberWire, John Petrik, likes to make the point that perhaps rather than a cyber 9/11, we could have something like a cyber Tonkin Gulf incident, which was a naval skirmish that led to the United States being more directly involved in the Vietnam War. What are your thoughts on that?
Martin C. Libicki: [00:24:44:23] If you recall the Tonkin Gulf, particularly if you're a little cynical about it, we took a small incident and made it the justification for something we wanted to do anyhow. So, the potential that people will use a cyber attack against them – one whose attribution, by the way, is anything but clear cut – to sort of wave the flag, can't be dismissed. I hope this doesn't sound like a distinction without a difference, but I'm more worried about a cyber Sarajevo. In other words, something takes place in cyberspace – maybe accidental, maybe inadvertent, maybe deliberate – in which the other side overreacts. For instance, they get it into their head that somebody might want to start a war, particularly a nuclear war, when it turns out that the only thing that happened was an accident.
Martin C. Libicki: [00:25:32:05] If you have complex systems running, your sensors and your intelligence community systems, they're going to fail from time to time. And if your first suspicion is, you know, my enemy did that to me, you can imagine that your reaction is going to go out of control. I don't know how likely that is, but I think it's something that we have to worry about. And in many ways it's the opposite of worrying about cyber, because the way we worry about cyber is, "Oh my god, I can't let somebody do cyber badness to me." But the way we get into a cyber Sarajevo is to say, "Oh my god, somebody did cyber badness to me and if I don't respond, it's only going to get worse, so I've got to do something."
Dave Bittner: [00:26:07:10] That's author Martin C. Libicki. The book is Cyberspace in Peace and War.
Dave Bittner: [00:26:15:04] And that's the CyberWire. Thanks to all of our sponsors for making the CyberWire possible, especially our sustaining sponsor Cylance. To find out how Cylance can help protect you using Artificial Intelligence, visit cylance.com.
Dave Bittner: [00:26:27:08] The CyberWire podcast is produced by Pratt Street Media. Our editor is John Petrik, social media editor is Jennifer Eiben, technical editor is Chris Russell, executive editor is Peter Kilpe, and I'm Dave Bittner. Thanks for listening.
Copyright © 2019 CyberWire, Inc. All rights reserved. Transcripts are created by the CyberWire Editorial staff. Accuracy may vary. Transcripts can be updated or revised in the future. The authoritative record of this program is the audio record.