The CyberWire Daily Podcast 10.5.17
Ep 449 | 10.5.17

NSA breach announced today (occurred in 2015, discovered in 2016) may be final nail in Kaspersky Lab's coffin.


Dave Bittner: [00:00:00:13] A big thank you to all of our Patreon supporters. We've got lots of people signing up every day. The next person could be you. Go to our Patreon page and find out more.

Dave Bittner: [00:00:14:02] Sensitive NSA files appear to have been obtained by Russian intelligence services, and there are claims Kaspersky software was the gateway to compromise. The Las Vegas massacre investigation expands to consider the possibility of accomplices. A new password stealer is out in the wild. NFL Players Association data is exposed. And the FCC was mostly advised on net neutrality by bots, bots that haven't yet benefited from DeepMind's ethics class.

Dave Bittner: [00:00:45:08] As our sponsors at E8 Security can tell you, there's no topic more talked about in the security space than Artificial Intelligence, unless maybe it's Machine Learning. But it's not always easy to know what these could mean for you. Visit E8 Security's website and see what AI and Machine Learning can do for your organization's security. In brief, they offer not a panacea, not a cure-all, but rather an indispensable approach to getting the most out of your scarce, valuable and expensive human security analysts. Let the machines handle the vast amounts of data. If you need to scale your security capability, AI and Machine Learning are the technologies that can help you do it. So visit their website and see how they can help address your security challenges today. And we thank E8 for sponsoring our show.

Dave Bittner: [00:01:47:02] Major funding for the CyberWire podcast is provided by Cylance. I'm Dave Bittner in Washington DC today, with your CyberWire summary for Thursday, October 5th, 2017.

Dave Bittner: [00:01:57:13] We are at the Newseum in Washington DC, attending the 2017 RFUN conference with our partners at Recorded Future.

Dave Bittner: [00:02:05:05] Just a few hours ago, the Wall Street Journal broke the story of a major security incident at the US National Security Agency. Russian intelligence services are said to have obtained highly classified material related to both network attack and network defense from a machine belonging to a contractor on which the sensitive information had been placed.

Dave Bittner: [00:02:24:18] The most interesting aspect of the story is that the hackers targeted the contractor after "identifying the files through the contractor's use of a popular antivirus software made by Russia-based Kaspersky Lab." Remember, the story is just breaking and so details are likely to be clarified and corrected later.

Dave Bittner: [00:02:43:00] The breach is said to have occurred in 2015, but wasn't discovered until "Spring of last year," which presumably means Spring of 2016. To put this on a timeline, NSA would have discovered the problem weeks before the ShadowBrokers began leaking what the Brokers assert are Equation Group hacking tools. It's also shortly before the Summer 2016 arrest of Hal Martin, the NSA contract worker who was allegedly found to be hoarding highly classified material in a shed at his Glen Burnie, Maryland, home. The material the ShadowBrokers have leaked appears to date to 2013 or so. It's unclear whether this latest revelation is connected to either the Brokers or Mr. Martin's case.

Dave Bittner: [00:03:22:07] A few weeks ago, the US Government directed Federal agencies to remove Kaspersky security products from their networks, or at the very least to demonstrate some very good reason why they should continue to use them. Administration accounts of the ban, issued by the Department of Homeland Security, have all concentrated on Kaspersky's requirement under Russian law to cooperate with security, intelligence and law enforcement agencies, and that indeed would seem to be sufficient grounds for booting their products from Government networks. This latest development would appear to indicate that there are indeed other grounds for suspicion of Kaspersky Lab and its products.

Dave Bittner: [00:03:57:22] Kaspersky has long maintained its innocence of nefarious cooperation with the Russian organs. It's possible their products may have been subverted without their knowledge. It happened to Avast, after all. But few of the initial reactions to this latest story seem to credit that explanation. The news is still fresh and breaking, however, and we'll be following it closely. However it plays out, it's bad news indeed for the US Intelligence Community and the National Security Agency in particular.

Dave Bittner: [00:04:24:21] Zscaler has discovered a password stealer spreading through a compromised website. The malware is delivered by VBScript, which, after downloading the malicious payload, downloads a decoy document, terminates Microsoft Word processes, installs the payload through PowerShell, and removes Microsoft Word's document recovery entries.

Dave Bittner: [00:04:44:21] There's a Quaker State angle to the exploit. The decoy document represents itself as a "public service" message from the Pennsylvania Department of Public Welfare. It even helpfully contains advice on mitigating spam, and includes spam mitigation instructions. The malware steals passwords from Armory Wallet, Chrome, Firefox, CuteFTP, FileZilla, Putty, Electrum wallet, and WinSCP Passwords.

Dave Bittner: [00:05:12:09] In the US, the Department of Homeland Security decries a growing public learned helplessness over cyberattacks and data breaches. One confirmed case of data compromise occurred in the US. It was discovered by the security firm Kromtech and, like several other recent cases, comes down to an enterprise leaving an unsecured database exposed on the Internet. In this case, the enterprise in question is the National Football League Players Association. About 1,200 players and agents had their personal information compromised in an unsecured Elasticsearch database.
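The NFLPA exposure is a reminder that a default Elasticsearch node answers HTTP requests with no authentication at all. As a minimal sketch, assuming a node on a known host and port, this is roughly all it takes to see whether one is wide open:

```python
import json
from urllib import request, error

def is_open_elasticsearch(host: str, port: int = 9200) -> bool:
    """Return True if an Elasticsearch node on host:port answers an
    unauthenticated HTTP request with its cluster banner."""
    try:
        with request.urlopen(f"http://{host}:{port}/", timeout=5) as resp:
            banner = json.load(resp)
        # An unsecured node volunteers its cluster name and version to anyone.
        return "cluster_name" in banner
    except (error.URLError, OSError, ValueError):
        return False
```

If that function returns True against a database holding personal information, anyone on the Internet can read it just as easily as Kromtech's researchers did.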

Dave Bittner: [00:05:45:24] As I mentioned at the top of the show, we are on location at the RFUN Conference in Washington DC, hosted by our partners at Recorded Future. It's been a day full of interesting programs and speakers focused on threat intelligence. And we have the opportunity to speak with Joe Coleman, Cyber Threat Intelligence Analyst at PepsiCo.

Dave Bittner: [00:06:04:15] The breadth of a company the size of PepsiCo, you have shipping, you have manufacturing, you have HR, and you must have an eye on all of those things?

Joe Coleman: [00:06:15:17] We have to. There's no room for error, or no room for not being able to see. I'm very well trained in military intelligence. I've studied this back, forth, sideways, I've been to combat about it, so now explaining that concept to civilians in a corporate environment, that is challenging. So, one of the things that is perhaps the biggest challenge for veterans such as myself, is putting it into those terms that people can understand, such as, instead of talking about what enemy combatants are doing, we have to look at it from a risk perspective. What is the risk of this happening? How can we prevent or mitigate that risk? Those are big questions and that's a big thing for, I'm sure, a lot of folks in DoD and the services listening to CyberWire, like I know they do definitely down at Fort Meade. So, there's a job pro tip for them. Be able to translate your skills into civilian speak. That's probably a good big pro tip.

Dave Bittner: [00:07:20:23] That comes up a lot, and also the notion of exactly what you touched on, of being able to communicate, not in terms of threats of being, you know, red, yellow or green, particularly when you get to the board level, of dollars and cents of risk. You know, what is the actual risk to the company here, in a way that people who are used to talking about risk can understand?

Joe Coleman: [00:07:43:02] Yes. It's about having a Rosetta Stone to put it in context. Having that Rosetta Stone from being able to translate say, the military term priority intelligence requirement, which is basically what are the top risks to my company, label that something else. Label it risk assessment, or possibilities, or something along those lines. That's what we have to be able to do, is have that Rosetta Stone language that, one, we use internally within the intelligence section, or within the fusion center, and then you have to have something to translate that to the business side. And that's where I think we as intel analysts right now are not doing a great job at. We're not explaining that. And I can only speak for my own personal experience, we're not really doing a good job with that and it's because we're not really translating that well. That they're not seeing the value, or sometimes you have the issues where you don't want to dilute the term intelligence. Because if you look at what's going on now, you have intelligence as a cloud, as an AI, as whatever it may be. We want to be able to preserve what intelligence is, because it is a discipline, you know? It's been around for 6,000 plus years.

Joe Coleman: [00:09:02:10] It's like, machines do a very awesome job of correlation, at least machine correlation, and do a great job of organization, putting things into somewhat of a context. But it's the person with their experience with repeatable analytical tradecraft, which is something a lot of people go to school or they have some intuition about, they put that together and they're able to take information into intelligence. I look at it as a formula. So, information plus analysis equals intelligence. These are the things that we want to be able to translate to the business. We're not dealing with mortars and IEDs and all that, but we are dealing with people who want to take information and commoditize it. And that's probably the biggest thing that we see a lot just in cyber intelligence is, let's take something that may seem innocent and seemingly harmless, but when we combine that with other information, we get something that's worth a lot of money.

Dave Bittner: [00:10:05:09] That's Joe Coleman from PepsiCo. We'll hear more from him on an upcoming episode of the Recorded Future podcast.

Dave Bittner: [00:10:12:00] You will no doubt recall that the US Federal Communications Commission sought public comment on its proposed revisions to net neutrality regulations. So far, so good, right? And what better way to get comments than online, right? Digital democracy that Ross Perot or Arthur C. Clarke would love, right? Well, not right, unless we're extending the franchise to AI. Of the 22 million comments on net neutrality the FCC received, data analytics firm Gravwell says only 17% appear to be genuine. The other 83%? Bots.
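Gravwell hasn't published its full methodology here, but one crude, hypothetical way to surface bot activity in a comment corpus is to flag texts submitted verbatim in bulk. The threshold and the approach below are our own illustration, not Gravwell's:

```python
from collections import Counter

def flag_mass_duplicates(comments: list[str], threshold: int = 100) -> set[str]:
    """Return comment texts that appear verbatim at least `threshold` times,
    a rough proxy for automated form-letter submissions."""
    counts = Counter(c.strip().lower() for c in comments)
    return {text for text, n in counts.items() if n >= threshold}
```

A real analysis would also weigh submission timing, email-address patterns, and near-duplicate wording, which simple exact matching misses.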

Dave Bittner: [00:10:48:16] Google's DeepMind AI shop is convening a panel of experts in ethics and various allied fields to help allay fears voiced by Elon Musk, among lots of others who've also drunk deeply of the Terminator franchise's well, that Artificial Intelligence is going to be the death of us all. The idea is to design in goodness from the get-go, so the AI won't turn out evil, sort of the way Microsoft's edgy teen chatbot Tay did. We're in no particular position either to discount Musk's fears or to cry victory for DeepMind's robotic Pelagianism, but we will watch their deliberations and recommendations with interest.

Dave Bittner: [00:11:29:13] Now I'd like to tell you about a new infographic from our sponsor Delta Risk. Delta Risk is a National Cybersecurity Awareness Month Champion. As we kick off NCSAM, they've put together a handy 31-day cybersecurity calendar full of tips to help the public protect themselves and their communities online. Throughout the month of October, Delta Risk will post additional infographics and blogs that address weekly NCSAM themes, to educate and spread awareness around important cybersecurity topics. You can view the infographic on their website. Delta Risk LLC, a Chertoff Group Company, is a global provider of cybersecurity services to commercial and government clients. Learn more by visiting the Delta Risk website. And we thank Delta Risk for sponsoring our show.

Dave Bittner: [00:12:34:12] And I'm pleased to be joined once again by Justin Harvey. He's the Global Incident Response Leader at Accenture. Justin, welcome back. We talk about insider threats quite a bit here on the CyberWire, and you wanted to make the point that perhaps some businesses aren't giving them the attention they deserve.

Justin Harvey: [00:12:50:18] Yes, I think that there's a systemic problem here in the industry and the systemic problem is that many organizations, they're thinking about the bad guys that exist outside of their network. And what we've seen is a very marked spending trend to build up the walls even higher on the perimeter. Dave, you've done a lot of great work with interviews, talking about how that's actually a bad thing to continually build up the perimeter. That you actually need to go beyond the perimeter and build layered defenses so that if attackers do get in, then there's not a soft inside. But it's still, companies are thinking in terms of bad guys coming from the outside in. And they address insider threat, or employees or partners or vendors who already have access within their environment, through business processes. But there is a growing trend of more employees that are downloading tool kits, they're downloading means to circumvent these controls, the business process controls, on the existing systems they have, in order to accomplish a nefarious mission, or in order to do something against corporate policy. Really, one of the better ways to address that sort of behavior is to formulate a strong insider threat program.

Dave Bittner: [00:14:26:01] You used the term nefarious there and certainly there are people who are inside organizations up to no good. But I think you'd agree that a lot of people just want to get their job done, and if IT says no to them, like you say, they're going to find a way around that.

Justin Harvey: [00:14:41:19] The risk we see with that, and my mind immediately went to Shadow IT. The things like Dropbox, not used for corporate purposes. Or installing their own software that could, or may be against company policy. The pitfall with that approach is that those sorts of technologies have an inherent risk, or risk of data exposure that corporate IT, or the corporate security programs, may not know that you're running that. Therefore, there is the insider, meaning the employee, who just wants to get their job done, who feels like they need to install a file sharing application like that. If there's a new vulnerability, or if they perhaps take their laptop home and are using that both for personal and work, it's very easy for them to be either phished, or very easy for them to be exposed, so that adversaries could gain a foothold onto their system and then "ride in with them" when they go into the main corporate network.

Justin Harvey: [00:15:47:19] So, sometimes, like you had said, insiders may not be malicious. They may not be looking to do something nefarious. And that's why at Accenture we consider both insider threat to be direct and indirect. Meaning willful and accidental.

Dave Bittner: [00:16:05:12] So, how do you find the balance between putting appropriate restrictions on people, but not slowing them down so much that they're going to seek out ways around the restrictions that you put on them?

Justin Harvey: [00:16:16:20] Well, that's the, that's the $50,000 question that we struggle with all the time in cyberdefense and cybersecurity. I will say that the advice that I give to my clients is to really focus on drawing security in as early as possible. And what we've seen historically is a company wants to, let's say, put out a new app. They want to put out an app that accesses sensitive data, that does various things for their customers. And in the old days, I mean five to ten years ago, heck, probably people are doing this today, the Dev team would get together and build their requirements and they would build it all the way up until they were ready to go to production and then the change management process would say "Well, do you have security sign off?" And then they would have to go back to security and say "Can you please approve this? We have this business imperative" et cetera, et cetera.

Justin Harvey: [00:17:13:01] The new way, as you've heard a lot, is to use something called DevOps, or an agile approach to development. Very iterative, changing stuff on the fly. And our advice, or one of the big pieces of advice that we give customers, is to embed security within that DevOps process, so that very early on you have a security leader, or you have a security team member, that can be part of those daily scrums, that can be a part of the normal development process. So when it does get to production, and/or when they are looking at various means to secure that or to put in the proper business processes to prevent a risky situation, it's already built in or baked in to that development process.

Dave Bittner: [00:18:03:08] All right. Good advice, as always. Justin Harvey, thanks for joining us.

Dave Bittner: [00:18:10:04] And that's the CyberWire. Thanks to all of our sponsors who make the CyberWire possible, especially to our sustaining sponsor Cylance. To find out how Cylance can help protect you using artificial intelligence, visit their website.

Dave Bittner: [00:18:23:24] We've met a lot of interesting people here at RFUN. Thanks to our friends at Recorded Future. One of the people we met here was Elise, who wanted to say a special hello to her dad Antonio McCutcheon. Antonio, she bet me that you couldn't get a shout out on the CyberWire. Well guess what? There's your shout out. Thanks for listening.