The CyberWire Daily Podcast 8.9.19
Ep 903 | 8.9.19

Voting machine security. Airliner firmware. Attribution and deterrence in cyberwar. Monitoring social media. Broadcom buys Symantec’s enterprise security business. Policing, privacy, and an IoT OS.

Transcript

Dave Bittner: [00:00:03] Are voting machines too connected for comfort? Airliner firmware security is in dispute. Attribution, deterrence, and the problem of an adversary who doesn't have much to lose. Monitoring social media for signs of violent extremism. Broadcom will buy Symantec's enterprise business for $10.7 billion. Amazon's Ring and the police. A CISA update on VxWorks vulnerabilities. And human second-guessing of AI presents some surprising privacy issues. 

Dave Bittner: [00:00:38]  Now I'd like to share some words about our sponsor, Akamai. You're familiar with cloud security, but what about security at the edge? With the world's only Intelligent Edge Platform, Akamai stops attacks at the edge before they reach your apps, infrastructure and people. Their visibility into 178 billion attacks per day means that Akamai stays ahead of the latest threats, including responding to zero-day vulnerabilities. With 24/7/365 security operations center support around the globe and over 300 security experts in-house, Akamai surrounds and protects your users wherever they are - at the core, in the cloud or at the edge. If you're going to Black Hat USA this year, visit Akamai at Booth 1522 to take part in their Crack the Code challenge. Akamai - intelligent security starts at the edge. Learn more at akamai.com/security. And we thank Akamai for sponsoring our show. 

Dave Bittner: [00:01:43]  Funding for this CyberWire podcast is made possible in part by ExtraHop, providing cyber analytics for the hybrid enterprise. Learn more about how ExtraHop Reveal(x) enables network threat detection and response at extrahop.com. 

Dave Bittner: [00:01:58]  From the CyberWire studios at DataTribe, I'm Dave Bittner with your CyberWire summary for Friday, August 9, 2019. 

Dave Bittner: [00:02:06]  Vice reports that, contrary to various government assurances, voting machines in the U.S. made by Election Systems and Software have, in fact, sometimes been connected to the internet. County election officials who desire faster tabulation and reporting of votes establish wireless connections to SFTP servers behind a Cisco firewall. These connect with backend systems that actually count the votes. Typically, votes are recorded on a memory card and physically delivered to a tallying location, but in some areas and under some circumstances, the machines are configured to report remotely. 

Dave Bittner: [00:02:42]  Vice says that such connections are intended to be brief matters of a few minutes, but Vice's investigation concluded that in some cases, the systems remained connected for months. Thus, voting may be less air gapped than many officials had imagined. The possibility of direct manipulation of votes, of course, is a more serious matter than the influence operations Russian intelligence services have conducted during recent elections. 

Dave Bittner: [00:03:08]  Both Boeing and the U.S. Federal Aviation Administration dispute claims made this week by IOActive that the 787 Dreamliner's firmware is vulnerable to cyberattacks on flight systems. The aircraft manufacturer told PCMag that IOActive did not have full access to the 787 systems and that Boeing's extensive testing confirmed that existing defenses in the broader 787 network prevent the scenarios claimed. The FAA says it's satisfied with the assessment of the issue. 

Dave Bittner: [00:03:40]  IOActive, which presented its research at Black Hat this week, did not claim to have a proof of concept, still less that they had found any actual exploitation in the wild, but they do think there's a possibility that an attacker could pivot from in-flight entertainment systems to flight control avionics. It's important to note that this is not the same vulnerability CISA warned against last week. That warning concerned the CAN bus and small general aviation aircraft. The Dreamliner is a different kettle of fish. 

Dave Bittner: [00:04:12]  Elsewhere at Black Hat, Mikko Hypponen, chief research officer of F-Secure, shared some thoughts on the distinctive features of cyberwar. His observations, as reported by Fifth Domain, are worth some reflection. What distinguishes cyberwar from kinetic war is, he thinks, the fundamentally difficult nature of attribution in cyberspace. Hypponen said, quote, "cyber weapons are cheap, effective, and they are deniable." False flag operations are common, and attribution is usually hedged about with reservations. 

Dave Bittner: [00:04:43]  There may even be doubt as to whether a cyberattack has taken place at all. Consider - a missile launch is an unambiguous event, and the ones our fire support desk has witnessed cannot be mistaken for anything other than what they are, nor is it that difficult to tell where the missile came from. But with a cyberattack, it can be unclear whether an attack has even taken place, and even after you've determined that there has been an attack, attribution can be difficult. In most cases, the best companies in the threat intelligence business can do is present convincing circumstantial evidence. That's fine for cyber threat intelligence, but it's problematic when a responsible government is considering going to war. 

Dave Bittner: [00:05:25]  This problem is closely linked to another - the difficulty of establishing deterrence in cyberspace. For deterrence to work, the adversaries must have some relatively realistic appreciation of what the opposition can do, what its capabilities are. That's one reason for the Cold War traditions of military parades in Red Square or news footage of tests on the Pacific Missile Range. Cyber capabilities are inherently more difficult to assess. You may not even know that a particular kind of attack is possible, let alone that the opposition is capable of delivering it. 

Dave Bittner: [00:05:58]  We have no idea what offensive capabilities other nations have, Hypponen said, so what kind of deterrence do these tools build? Nothing. We note, as Dr. Strangelove put it back in the heyday of nuclear deterrence, deterrence is the art of producing fear in the mind of the enemy, but the whole point of the Doomsday Machine is lost if you keep it a secret. 

Dave Bittner: [00:06:19]  Turning to specific nation-states, Hypponen singled out North Korea for a particular mention in dispatches. Making all due allowance for the difficulties of attribution mentioned above, Pyongyang does things no other government attempts, like engaging in hacking for financial gain. Part of what explains North Korea's high level of activity and relative recklessness, Hypponen argues, is that the country has very little to lose, and that makes it a different kind of threat actor. 

Dave Bittner: [00:06:47]  With calls for increased attention to evidence of threats in social media, the FBI has issued a request for proposals that asks contractors to propose tools that could effectively monitor Facebook and other social media for signs of impending criminal or terrorist violence. Facebook, the Wall Street Journal says, isn't entirely happy with the idea. It has been under fire for the way it handled personal data, and Menlo Park has been on the defensive over privacy for a long time. The last thing Facebook needs is this sort of help from the feds. But with the White House convening a social media summit to come up with ways of controlling violent extremism online, the bureau is likely to continue leaning forward in the foxhole. 

Dave Bittner: [00:07:30]  Some significant industry news has broken. Broadcom will acquire Symantec's enterprise security unit, including, CRN says, the Symantec brand, for $10.7 billion in cash. Seeking Alpha calls this Broadcom's next move in its bid to become a major infrastructure technology provider. Symantec will retain its consumer-facing Norton LifeLock business. 

Dave Bittner: [00:07:55]  NBC News has a story out on the ways in which Amazon's products are being used by police departments in the U.S. Most of the discussion surrounds the company's smart doorbell Ring, which, in addition to the ringing, keeps an eye out for the ringer. Data captured by Ring has been fed to police departments and has arguably helped them solve burglaries. Most would regard this as a good thing, but the implications of creating an American panopticon from the bottom up trouble some observers. This is especially so at the points where familiar technologies intersect with unproven innovations. Ring's networked video security cameras are increasingly used in conjunction with controversial and possibly error-prone facial recognition software. 

Dave Bittner: [00:08:39]  One critic, University of the District of Columbia law professor Andrew Ferguson, put the objections this way to NBC News. Quote, "I am not sure Amazon has quite grappled with how their innovative technologies intersect with issues of privacy, liberty and government police power. The pushback they are getting comes from a failure to recognize that there is a fundamental difference between empowering the consumer with information and empowering the government with information. The former enhances an individual's freedom of choice. The latter limits an individual's freedom and choice," end quote. 

Dave Bittner: [00:09:13]  The U.S. Department of Homeland Security's CISA has issued an updated warning about vulnerabilities in Wind River's VxWorks, the widely used industrial IoT software. CISA says that 11 vulnerabilities could be exploited to allow remote code execution and that the level of skill such exploitation would require is relatively low. Wind River is addressing the problems in VxWorks, and users are encouraged to apply the patches and mitigations the company is offering. 

Dave Bittner: [00:09:42]  And finally, Microsoft's use of humans to perform quality control on some of its services has received the same sort of scrutiny Google, Apple and Amazon have attracted. Calls on Microsoft's Skype service and interactions with its Cortana digital assistant are listened in on from time to time, but Microsoft says its contractors listen to Skype calls and user interactions with Cortana only after receiving user permission. 

Automated Voice: [00:10:06]  I can hear you. Excellent. 

Dave Bittner: [00:10:13]  And now a message from our sponsor, ObserveIT. 

Unidentified Person: [00:10:18]  Great party, huh? 

Dave Bittner: [00:10:20]  Yeah, yeah. Great party. Could you excuse me for just a moment? Hey, you. What are you doing? Oh, no. Looks like another insider got into our systems when we weren't looking. I am going to be in so much trouble with the boss. 

Unidentified Person: [00:10:39]  Did someone say trouble? I bet I can help. 

Dave Bittner: [00:10:42]  Who are you? 

Unidentified Person: [00:10:43]  To catch insider threats, you need complete visibility into risky user activity. Here. I'll show you how ObserveIT works. 

Dave Bittner: [00:10:51]  Wow. Now I can see what happened before, during and after the incident, and I'll be able to investigate in minutes. It used to take me days to do this. 

Unidentified Person: [00:11:00]  Exactly. Now, if you'll excuse me, I think there's a cocktail over there with my name on it. 

Dave Bittner: [00:11:06]  But wait. What's your name? Oh, well. Thanks, ObserveIT, and whoever she is. ObserveIT enables security teams to detect risky user activity, investigate incidents in minutes and effectively respond. Get your free trial at observeit.com/cyberwire. 

Dave Bittner: [00:11:34]  And continuing our coverage of Black Hat, joining us is Justin Harvey. He is the global incident response leader at Accenture. Justin is joining us from the show floor at Black Hat, where he has a very spotty phone connection, so we apologize in advance for the audio quality. But Justin, what are you seeing there? What is the overall tone that you're sensing on the show floor itself? 

Justin Harvey: [00:11:59]  Well, the tone is - it's all about visibility, detection and response. In years previous, we've seen different point solutions being deployed, but this year, the theme is all about shining light under the rock in order to find the adversaries. 

Dave Bittner: [00:12:17]  And so how does that express itself? What sorts of things are people out there talking about and offering? 

Justin Harvey: [00:12:23]  The various solutions that we're seeing out here that focus on visibility are applying it to your endpoints, applying it to your networks and applying it to your identity. And it seems like all of these vendors are using the P-word, Dave - they're all talking about platforms. What can they do to increase visibility and integrate with other solutions? Those of us in the industry have been saying for many years that there is going to be an investment bubble, so we walked in the door expecting Black Hat to be smaller than it was last year. 

Justin Harvey: [00:12:56]  But this is - 19,000 people are attending Black Hat this year from over 110 countries, and this is their 23rd year. And I have to tell you that there is no slowing down in the market. It is very hot. People are very excited. And, in fact, one of the net new things that I'm seeing is a focus on training, a focus on career management, particularly being inclusive and diverse in the workforce. 

Dave Bittner: [00:13:23]  And how is that expressing itself? I mean, are you seeing more diversity out there on the show floor? Is there more representation of different types of folks out there? 

Justin Harvey: [00:13:32]  Definitely a representation of a larger swath of diverse attendees, but we're also seeing it in some of the booths here. There is a big focus on women, and there is a big focus on diversity. So what can these companies do to attract talent and enhance and shepherd their careers, if you will - get them the right training, get them the right support in order to succeed in cybersecurity today? 

Dave Bittner: [00:13:59]  What sorts of things, as you walk around on the show floor, do you have your eye on? Is there anything you're hoping to find out, anything you want to learn or get insights on? 

Justin Harvey: [00:14:07]  Well, I have two objectives attending Black Hat this year. First is I wanted to see what sort of OT, or operational technology, solutions are out in the market today. And there's not a lot of that out here, Dave. We are seeing companies like Nozomi and ForeScout that have these asset inventory solutions - passive network solutions that are mapping OT networks out. But I'm not seeing a lot of the vendors here talk about the convergence of information technology and operational technology or, in essence, the ability to marry the digital with the kinetic world. 

Justin Harvey: [00:14:41]  For the last decade or 15 years, they've been very segmented. If you are in IT, you're dealing with business systems. If you're in OT, you're an engineer. You're not a technologist. And I think the industry is just now waking up to OT and critical infrastructure and figuring out how to bond those two together. At RSA this last year, we saw OT as one of the big things. Now we're here, and we're not seeing that. I'm also not seeing a lot of emphasis on the small and medium businesses. It seems like if you bring in less than $50 million or $100 million in revenue, there's not a lot of solutions out there in the market for you. And I think that's really worrisome to both of us in the industry. 

Dave Bittner: [00:15:24]  All right. Well, Justin Harvey, thanks for joining us. And safe travels home, again. 

Justin Harvey: [00:15:28]  Thank you, Dave. We missed you, and I look forward to seeing you at one of these events again. 

Dave Bittner: [00:15:32]  All right. We'll see you soon. Take care. 

Dave Bittner: [00:15:38]  Now it's time for a few words from our sponsor BlackBerry Cylance. You probably know all about legacy antivirus protection. It's very good as far as it goes. But you know what? The bad guys know all about it, too. It will stop the skids, but to keep the savvier hoods' hands off your endpoints, BlackBerry Cylance thinks you need something better. Check out the latest version of CylanceOPTICS. It turns every endpoint into its own security operations center. CylanceOPTICS deploys algorithms formed by machine learning to offer not only immediate protection but security that's quick enough to keep up with the threat by watching, learning and acting on systems' behavior and resources. Whether you're worried about advanced malware, commodity hacking or malicious insiders, CylanceOPTICS can help. Visit cylance.com to learn more. And we thank BlackBerry Cylance for sponsoring our show. 

Dave Bittner: [00:16:38]  My guest today is Tim Tully. He's chief technology officer at Splunk. His team recently published a report titled "The State of Dark Data," which sets out to reveal the gap between AI's potential and today's data reality. I asked Tim Tully to outline their findings when it comes to how the U.S. is approaching AI versus China. 

Tim Tully: [00:16:59]  As part of the dark data report, you may have seen that, sort of, there's a higher level of general acceptance in China in terms of the role of AI in society. And I would probably tend to agree that they're a little bit ahead of the U.S. in that regard. I think a lot of that, you know, largely has to do with sort of, you know, where they focus their time and where they spend their time. And I think, you know, it shows up both as being perhaps more societally acceptable but, also, I think emphasized a bit more in school, particularly earlier on. 

Dave Bittner: [00:17:30]  Yeah, I think that's an interesting point. I mean, there's that whole situation, where I suppose, if you're a citizen in China, you may not have the same options that we have here in terms of opting out of data collection. 

Tim Tully: [00:17:43]  Yeah, that's certainly true. And, you know, a lot of that sort of is societal. And then part of it is also sort of just government law, if you will, and then, also, what is acceptable, I think, which goes back to the societal piece. What you see as being the everyday norm perhaps in China as a citizen is slightly different than sort of your level of expectations as a citizen of the U.S. or Europe. 

Dave Bittner: [00:18:03]  Can you give us some insights as to - when someone is gathering up a bunch of data or using a dataset for a project involving AI, how do they go about establishing what the standards are for that data? What's involved there? 

Tim Tully: [00:18:23]  Yeah, I mean, there's sort of like the technical piece, which is - you know, if you're doing supervised learning, you definitely want to make sure it's labeled, right? I think what you're trying to allude to is more of what goes beyond sort of the level of acceptability - of sort of violating perhaps, like, even basic human rights, not to sound too, like, hyperbolic, you know? The way I would perceive it is you try to think about privacy and PII, first of all. I sort of see data privacy as being super, super important. My background is - you know, I've been doing big data going all the way back to 2003 and then, you know, obviously, over the last decade, increasingly more in the AI space. 

Tim Tully: [00:18:56]  But to the extent that you can, ideally, you stay away from PII as much as possible, right? Whether it's, you know, full names or birthdays or Social Security numbers or what have you, you know, you try to either mask out the data or one-way hash it or what have you and try to anonymize it to the extent that you possibly can and focus more on the models and the training of the models rather than sort of, like, the actual depth of what the data represents per se and then, perhaps, come back and map in the models later. But you definitely want to try to stay clear of, you know, exposing or even having access to that kind of private data as much as you can. It's a slippery slope in my mind. 

Dave Bittner: [00:19:32]  What about biases in the data itself - knowing ahead of time that, you know, this data may be leaning in one direction or may be oversampled with one type of data rather than another? 

Tim Tully: [00:19:45]  I think that's a skill that a lot of AI practitioners have to learn over time. And I think, increasingly, the literature is doing a better job of sort of introducing a notion of biases upfront. I think it's one of those things that you sort of just - you start to figure out over time. It's not necessarily an inherent skill that people are born with, and it - you know? It's a tough problem. Otherwise, you know, you probably wouldn't be asking. Yeah, it's certainly hard. There's no tried and true way, really, to stay away from it; it's sort of an experiential kind of thing. 
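
A simple starting point on the bias question Bittner raises is an upfront look at how skewed the labels are. The sketch below is illustrative only - the labels and threshold are hypothetical, and a real bias audit goes well beyond class counts.

```python
from collections import Counter

def class_balance_report(labels, skew_threshold=0.8):
    """Print each label's share of the dataset and flag a dominant class."""
    counts = Counter(labels)
    total = sum(counts.values())
    for label, count in counts.most_common():
        share = count / total
        flag = "  <-- dominant class; consider resampling" if share > skew_threshold else ""
        print(f"{label}: {count} ({share:.1%}){flag}")

# Example: a security dataset that leans heavily toward benign samples.
class_balance_report(["benign"] * 95 + ["malicious"] * 5)
```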

Dave Bittner: [00:20:15]  You know, in terms of staying competitive and trying to get an advantage over other nations around the world, what sort of directions do you think we have to take? 

Tim Tully: [00:20:25]  Schools and colleges in particular need to do a much better job of focusing more on sort of the realities of machine learning being everyday. I think we've historically bolted on ML to what we've done. And I think ML needs to be thought of as being pervasive in everything that people do in their computer science education and in the background that they establish. I mean, that's sort of the approach that I'm taking with our products right now within Splunk. 

Tim Tully: [00:20:49]  Historically, a lot of companies, and Splunk included, have sort of thought of machine learning as being more of an afterthought or something that we sort of have bolted onto the product. And the approach that I've been taking over the last couple of years is to think of machine learning and AI as being sort of ingrained in everything that we do and almost automatic, right? It's not sort of a feature that, you know, we just want to put out there and market as saying, oh, it's AI-powered. It should just be implicit. People should just understand it - it should be completely augmentative to your experience as a user. 

Tim Tully: [00:21:17]  And so, in the same way, that has to sort of show up in the curriculum, right? Like, there has to be ML in sort of all the courses that people take in one way or another, whether it's a security course or a networking course or what have you. It has to be completely ubiquitous in the same way that, say, like, UI development is done within teams. One way I try to explain it to people on my teams is, you know, there shouldn't necessarily be just one set of people thinking about UI, in the same way that there shouldn't just be one set of people thinking about ML, right? It has to be completely pervasive. 

Dave Bittner: [00:21:47]  Could you give me an example of a situation where there would be ML applied to something where, perhaps, folks wouldn't have thought it would be there before? 

Tim Tully: [00:21:55]  One of the things we're working on - as an example - is what we call sort of automatic source type inference. And what that is is using ML to train models to sort of recognize data upfront as you put it into Splunk. And so instead of, as a user, saying, hey, this is, you know, a JSON data format that represents some firewall log, for example, Splunk should just say, hey, I see you're putting in JSON data. That's firewall log data, right? And that would be automatically done without you, you know, telling it to use ML or to have previously trained ML. 

Tim Tully: [00:22:27]  It's automatic, and, you know, it happens in a second. And you just - you move on with your day, right? In the same way that Netflix, which is sort of the canonical example of machine learning - right? - recommends movies, it's sort of analogous to that in the data space, where it's just part of the experience that you just sort of assume is there and works well out of the box. 
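
As a rough illustration of the source type inference idea, the toy sketch below guesses a source type for an incoming log line before it is indexed. The hint fields are hypothetical, and a keyword heuristic stands in for the trained models Tully describes; it is not Splunk's implementation.

```python
import json

# Assumption: a few field names that typically show up in firewall events.
FIREWALL_HINTS = {"src_ip", "dest_ip", "action", "rule"}

def infer_source_type(raw_line: str) -> str:
    """Guess a source type for an incoming log line before it is indexed."""
    try:
        fields = json.loads(raw_line)
    except ValueError:
        return "unknown:not-json"
    if isinstance(fields, dict) and FIREWALL_HINTS & fields.keys():
        return "json:firewall"
    return "json:generic"

print(infer_source_type('{"src_ip": "10.0.0.5", "dest_ip": "8.8.8.8", "action": "blocked"}'))
# -> json:firewall
```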

Dave Bittner: [00:22:45]  And then I suppose part of the notion here is that, over time, it gets better and better at making those guesses or assumptions for you so that the error rate goes down. 

Tim Tully: [00:22:55]  Yeah, sure because you - you know, you have a feedback loop. 

Dave Bittner: [00:22:58]  What are your tips for businesses that are trying to get a handle on this, who realize that this is something they want to start integrating? What's a good place for them to get started? 

Tim Tully: [00:23:07]  Yeah. You know what? That's an interesting question. I think a lot of times, businesses talk about AI because they have to, right? It's sort of like a marketing box that they check. And they talk it up, and then they hype it up. And then they spend all their time sort of planning and thinking about what they're going to do, and then they're sort of stuck in that they're not really doing anything. In my experience, what I've seen across my career is the best thing they can do is just sort of dive in headfirst. You look at a small problem, and you look at how to build models and train them and which techniques you're going to use. And then you start to roll them out in an applied way. 

Tim Tully: [00:23:39]  And then you look at the data, and you look at the feedback loop that you create, as you just asked about. And then you sort of expand from there, right? Instead of coming up with this sort of boil-the-ocean expansive AI/ML strategy, you know, start with a small problem, with a small set of full (ph) and just go for that and try to optimize the hell out of that problem and then expand from there. 

Dave Bittner: [00:23:57]  Yeah. So crawl before you walk, I guess. 

Tim Tully: [00:23:58]  Yeah, yeah. I think, oftentimes, you see a lot of companies just trying to, like, come up with this, like, grand unification theorem around ML or AI. And that just never works. 

Dave Bittner: [00:24:07]  That's Tim Tully from Splunk. The report is titled "The State of Dark Data." 

Dave Bittner: [00:24:17]  And that's the CyberWire. Thanks to all of our sponsors for making the CyberWire possible, especially our supporting sponsor, ObserveIT, the leading insider threat management platform. Learn more at observeit.com. 

Dave Bittner: [00:24:30]  The CyberWire podcast is proudly produced in Maryland out of the startup studios of DataTribe, where they're co-building the next generation of cybersecurity teams and technology. Our amazing CyberWire team is Stefan Vaziri, Tamika Smith, Kelsea Bond, Tim Nodar, Joe Carrigan, Carole Theriault, Nick Veliky, Bennett Moe, Chris Russell, John Petrik, Jennifer Eiben, Peter Kilpe. And I'm Dave Bittner. Thanks for listening. We'll see you tomorrow.