The CyberWire Daily Podcast 11.6.19
Ep 965 | 11.6.19

App developers had access to more Facebook Group data than intended. Election security and disinformation. DarkUniverse described. Millions lost to business email compromise.


Dave Bittner: [00:00:03] Facebook closes a hole in group data access. U.S. authorities seek to reassure Congress and the public concerning the security of election infrastructure. Disinformation remains a challenge, however, as the U.S. prepares for the 2020 elections. Criminals catch Potomac fever as they use politicians' names and likenesses as an aid to distributing malware. Kaspersky outlines the now-shuttered DarkUniverse campaign. Nikkei America loses millions to a BEC scam. And good dogs go after bad guys' data storage devices. 

Dave Bittner: [00:00:41]  And now a word from our sponsor, ObserveIT. The greatest threat to businesses today isn't the outsider trying to get in. It's the people you trust, the ones who already have the keys - your employees, contractors and privileged users. Sixty percent of online attacks are carried out by insiders. To stop these insider threats, you need to see what users are doing before an incident occurs. ObserveIT enables security teams to detect risky user activity, investigate incidents in minutes and effectively respond. With ObserveIT, you know the whole story. Get your free trial at ObserveIT's website, and we thank ObserveIT for sponsoring our show. Funding for this CyberWire podcast is made possible in part by McAfee, security built by the power of harnessing 1 billion threat sensors from device to cloud, intelligence that enables you to respond to your environment and insights that empower you to change it. McAfee, the device-to-cloud cybersecurity company. 

Dave Bittner: [00:01:52]  From the CyberWire studios at DataTribe, I'm Dave Bittner with your CyberWire summary for Wednesday, November 6, 2019. Facebook, which has been working to rein in developers' access to data, has found that an oversight in its Groups app gave video streaming and social media management app developers access to private group member data like names and profile pictures. About a hundred developers, Facebook said in an announcement it posted yesterday, had retained access to this information. With privacy upgrades the social medium had instituted in April of 2018, a group admin should have been able to authorize an app developer to receive only such information as the group's name, the number of users in the group and the content of the posts within that group. Users - that is, group members - would have had to opt in to provide access to more personal information like profile pictures and names. 

Dave Bittner: [00:02:45]  Facebook is cleaning up this oversight. While the 2018 changes were a response to privacy concerns raised by such incidents as the Cambridge Analytica scandal, stopping all the holes is obviously more difficult than Facebook had expected. The incident is regarded as a bad look for Facebook as it prepares for future rounds of privacy scrutiny. But the company says it's convinced the relatively small number of developers who had the unintentional access didn't abuse it. 

Dave Bittner: [00:03:14]  Some of that coming scrutiny is set to occur in a California court, as the state petitions for more information on the company's privacy enhancements. At issue are internal documents which Reuters says pertained to what was called the Switcheroo Plan, under which Facebook documents divided app developers into three buckets - existing competitors, possible future competitors or developers that we have alignment with on business models. So the suspicion is that the company undertook anti-competitive steps under the guise of enhancements to privacy and user experience. California's attorney general has this morning petitioned the San Francisco Superior Court to compel Facebook to comply with subpoenas for such documents. 

Dave Bittner: [00:03:59]  Yesterday, the U.S. Departments of Justice, Defense and Homeland Security joined the director of national intelligence, the FBI, NSA and CISA to reassure Congress and the public that unprecedented security measures were in place to protect U.S. elections. They warned that Russia, China, Iran and other foreign malicious actors were expected to attempt active interference in the 2020 U.S. elections. Some of those measures were on display in yesterday's off-off-year elections some states held, as state and county election authorities tested new equipment and assessed their security. There have been no reports of effective attacks against this week's vote. 

Dave Bittner: [00:04:39]  FireEye CEO Kevin Mandia sees the cooperation U.S. federal and state agencies are talking about as likely to be successful. He's encouraged by the collaboration among voting device manufacturers and election and security authorities. But disinformation is a different matter. Mandia told CNBC's "Mad Money" that the biggest problem with election security isn't hacked voting machines but rather misinformation disseminated over social media. He said that he was confident that voting machines would be secure during the voting. I'm not worried about vote count, he said, adding - I'm more worried about those influence operations that you don't even know are happening to you. 

Dave Bittner: [00:05:19]  Cybereason conducted an exercise yesterday in which two teams - red and blue, attackers and defenders - simulated a campaign to disrupt an election in ways that would affect its outcome. The attackers did so not by attempting to manipulate vote counts or directly affect voting machinery itself. Instead, they focused on spreading disinformation designed to suppress voter turnout or confuse election officials into disallowing votes. This would seem consistent with other assessments security experts have offered. 

Dave Bittner: [00:05:50]  Vice reports that disinformation relative to the 2020 U.S. elections is already flooding social media. But a great deal of that disinformation is homegrown and seems firmly in the mainstream of scurrilous electioneering that's gone on since the first seriously contested presidential campaign, the 1800 contest between John Adams and Thomas Jefferson. Fool, hypocrite, criminal, tyrant, weakling, atheist, libertine, coward and so on - a lot of that slanging was as lurid and specific, as rich in specious detail as anything woofed or tweeted today. Jefferson even hired a specialist to supervise the slander. Of course, social media have an immediacy and powers of amplification unknown to the two principal authors of the Declaration of Independence. 

Dave Bittner: [00:06:36]  Vice quotes calls for social media to take action against domestic political disinformation the way they have against foreign influence operations - in particular, calls for fact-checking and content moderation. But the success Facebook and other social media have enjoyed against foreign influence campaigns has come through culling them for coordinated inauthenticity. Moderating the content of political speech, as popular as this idea seems to have grown, presents some obvious problems with respect to civil liberties. Not all politically themed campaigns are necessarily concerned with politics. Cisco's Talos unit describes how some of them aren't connected with politics at all, except insofar as politicians' names and likenesses serve as clickbait and phishbait. Criminals are using political themes to help distribute ransomware, screen lockers and remote access Trojans. The most popular politicians among the hoods are President Donald Trump and former Democratic presidential candidate Hillary Clinton. 

Dave Bittner: [00:07:37]  Our own Carole Theriault has been looking into the security of industrial control systems, the systems that are key components of the things that make civilized society possible. She files this report. 

Carole Theriault: [00:07:47]  So Tripwire has recently issued some cybersecurity research focused on industrial control systems. These systems are, like, vital to our life. Right? Things like power stations, electricity grids and oil plants. Now, Tripwire's findings say that more than 90% of ICS security professionals - these are the people that look after these systems - are concerned about cyberattacks causing operational shutdown or customer-impacting downtime. So I've invited Kristen Poulos, vice president and general manager of industrial cybersecurity at Tripwire, to, you know, highlight all the important factors that we need to know in this research. Kristen, thank you so much for joining us. 

Kristen Poulos: [00:08:28]  Yeah. Yeah. Thank you for having me. 

Carole Theriault: [00:08:31]  Now, did I say your last name correctly? 

Kristen Poulos: [00:08:33]  It's Poulos. You were very close. 

Carole Theriault: [00:08:34]  OK. Poulos. I'm very sorry. Tell you what, Kristen. I think - I was brought up in Canada, right? And I think that my hard Canadian living, my childhood, would prepare me to be completely cut off. But I think even in a place as temperate as the U.K., where I live now, it would be a total nightmare if some critical infrastructure went down. 

Kristen Poulos: [00:08:53]  Yeah. You know, it's incredibly scary to think about the potential out there. But what we were able to collect - and the survey that we conducted was for 263 ICS security professionals, and they spanned a number of different industries. So think manufacturing and chemical and energy. And while they certainly showed a definite concern around cyberattacks and how that could negatively impact their way of life, and safety, and quality and the productivity of their operation, we did find that a lot of them had started making investments in ICS cybersecurity. And that was really promising to see. 

Carole Theriault: [00:09:37]  So what seems to be the problem? What do you think - why are half of them feeling that their current investments aren't enough? Do they see all these holes, and it's just that the board don't care? 

Kristen Poulos: [00:09:49]  Despite there being this high level of companies that have made some kind of investment in cybersecurity, yeah, about half of them think that their investments aren't enough. You know, at first, that's kind of promising to hear, right? Because it means that these organizations are thinking that cybersecurity isn't just a project, but rather, it's a journey or a program that they need to maintain and sustain. But what maybe wasn't as good to hear was that almost 70% of those companies, those same companies, believe that it would take a significant industry event in order to convince their organizations to spend more. And so as a member of this community, that's very alarming to hear because we don't want there to be a catastrophic event in order to convince the boards to spend this money. 

Carole Theriault: [00:10:43]  So basically what I'm hearing here is they are reluctant to make the investment. They need a huge, catastrophic event to happen somewhere for them to pull up their trousers and go, OK, we really need to take this seriously. And it's these cybersecurity - those responsible for cybersecurity that you spoke to, that are actually sounding the alarm. 

Kristen Poulos: [00:11:02]  That's exactly right. 

Carole Theriault: [00:11:04]  Scary stuff. What can we, the average Joe in the street, do with this information? Can we lobby, you know, our representatives locally and provincially and statewide? Or... 

Kristen Poulos: [00:11:14]  Absolutely. And as a matter of fact, the more that we can have local legislative bodies and state governments talking about this and talking about some basic cybersecurity compliance mandates, the more it can become common and widespread. We have a good example of this, actually, in North America, where NERC CIP governs our North American utilities, and there is some level of cybersecurity that each entity needs to be able to meet. And so we found, actually, through this research that it was those individuals in the energy and utility worlds that seemed to be the most aware and concerned. And I think that is because the standard just drives awareness throughout those organizations. 

Carole Theriault: [00:12:06]  Well, hopefully, this research will help drive awareness. And, hey, who knows? Maybe Tripwire should start a conference just with ICS security professionals coming together so that they can actually share ideas and share, you know, plans on how they can actually limit their risk. 

Kristen Poulos: [00:12:20]  That's a great idea. 

Carole Theriault: [00:12:22]  (Laughter). 

Kristen Poulos: [00:12:23]  Collaboration in this space is so key. I mean, yes, sure, we're all vendors and trying to make solutions that customers can buy. But really, at the end of the day, we're trying to make the world a safer place. 

Carole Theriault: [00:12:36]  Couldn't have said it better. Kristen Poulos, thank you so much for joining us. 

Kristen Poulos: [00:12:39]  Thank you very much. 

Carole Theriault: [00:12:41]  This was Carole Theriault for the CyberWire. 

Dave Bittner: [00:12:44]  Kaspersky, yesterday, published a study of a previously unremarked APT, DarkUniverse, which operated quietly between 2009 and 2017. The researchers see links between DarkUniverse and script found in The Shadow Brokers' 2017 Lost in Translation leak. The APT's victims are located, Kaspersky says, in Syria, Iran, Afghanistan, Tanzania, Ethiopia, Sudan, Russia, Belarus, and the United Arab Emirates. Both civilian and military organizations were targeted. The researchers think DarkUniverse may have a connection to the ItaDuke campaign which targeted Tibetan and Uighur minorities in China. The researchers also think that DarkUniverse shut down when its techniques were blown by The Shadow Brokers' Lost in Translation leaks. 

Dave Bittner: [00:13:35]  Finally, here's a BEC scam pricey enough to send shivers down the back of any CFO. Nikkei America, the New York-based subsidiary of Japan's Nikkei media group, acknowledged late last week that it had acted on instructions received in a business email compromise scam to transfer $29 million to a fraudster account. 

Dave Bittner: [00:14:01]  And now a word from our sponsor, ThreatConnect. Designed by analysts but built for the entire team, ThreatConnect's intelligence-driven security operations platform is the only solution available today with intelligence, automation, analytics and workflows in a single platform. Every day, organizations worldwide use ThreatConnect as the center of their security operations to detect, respond, remediate and automate. With all of your knowledge in one place, enhanced by intelligence, enriched with analytics, driven by workflows, you'll dramatically improve the effectiveness of every member of the team. Want to learn more? Check out their newest book, "SOAR Platforms: Everything You Need to Know about Security, Orchestration, Automation, and Response." The book talks about intelligence-driven orchestration, decreasing time to response and remediation with SOAR, and ends with a checklist for a complete SOAR solution. You can download it at ThreatConnect's website, and we thank ThreatConnect for sponsoring our show.

Dave Bittner: [00:15:14]  And I'm pleased to be joined once again by Justin Harvey. He's the global incident response leader at Accenture. Justin, it's great to have you back again. I wanted to touch base with you today about incident response, but specifically about automated incident response. Can you give us some insights there? 

Justin Harvey: [00:15:32]  Well, clearly, as an incident responder, I don't want to be replaced by a robot, (laughter)... 

Dave Bittner: [00:15:36]  (Laughter). 

Justin Harvey: [00:15:36]  ...Or automation. But it's very relevant to what we're seeing in the field with commercial and government entities today. And when we look back a few years ago, where there wasn't as much automation, there were large security operations centers that were heavily dependent upon a steady stream of people. You bring them in as a level one, you teach them how to do level one in the SOC. Then they work up and they get promoted to level two, and they're using paper-based playbooks. And they know when an event comes in, oh, I need to alt-tab, and I need to go to this screen and pull this additional information up and augment the data in order to pass it up to the next level. But what we are seeing in the field - and many large corporations and entities are seeing - is that that is not sustainable. It's not sustainable because there are simply not enough people in the industry to fill all of these roles. And because of that, it creates a negative effect where you're always worried about, well, if I hire these people and I train them, are they going to leave? And then it becomes a retention problem. 

Justin Harvey: [00:16:44]  So one way to address that - and also one way to get your response times down - is to automate a lot of the rote steps that these lower-level analysts are doing. And typically, in the SOC pyramid, you've got a lot of level ones, you've got fewer level twos and you've got even fewer level threes. And what I'm talking about here, for automated incident response, is really targeting those level-one and level-two analysts. And the way that this is manifested is that - let's take a case study. We all know that cmd.exe should not be run from Internet Explorer. That's indicative of an attacker that is spawning a process from your browser. 

Justin Harvey: [00:17:29]  Well, if you haven't automated your SOC, that alert that comes through into your SIEM might say, cmd.exe has been spawned from a browser. And it would require a human to, A, look through their list of alerts, find that alert and then take action on it. And that action could be an additional investigation. It could be quarantining that endpoint. It could even be killing that Internet Explorer process. But what automation enables us to do is put together quite a few rules, if you will, so that when the automation piece of the SIEM sees that inbound alert, it will automatically trigger, based upon a set of conditions, and do the quarantining, take care of killing the process or even insert a rule into a firewall. And then the higher levels of security operations center workers can just look at the results of those automation steps. 

Dave Bittner: [00:18:24]  In that particular case, is this a matter of the automation sort of buying you time, where it reports to you and says, hey, we noticed this thing. I did these things. Now it's time for you to check it out and see what's actually going on here? 

Justin Harvey: [00:18:38]  Dave, that is a very astute point. It is critical that humans are always overseeing the automation. We can't ever assume that automation is going to take care of everything. There are still certain conditions where automation can either break or not quite do the job. So it's very important for higher-level security operations center workers to check the work of the automated incident response and verify that, in fact, A, it did what it's supposed to do and, B, why did it do that? Why did cmd.exe spawn from a browser? And so you're right. It does buy you time in order to stop an adversary. Because the adversary is going to say, well, my cmd.exe was killed. What else can I do? And they might try some other steps that might not be covered under the automation. So it's very important to be able to have experienced incident responders look over the roll-up data. 

Dave Bittner: [00:19:35]  All right. Well, Justin Harvey, thanks for joining us. 

Justin Harvey: [00:19:38]  Thank you. 

Dave Bittner: [00:19:43]  And that's the CyberWire. 

Dave Bittner: [00:19:44]  Thanks to all of our sponsors for making the CyberWire possible, especially our supporting sponsor, ObserveIT, the leading insider threat management platform. 

Dave Bittner: [00:19:56]  The CyberWire podcast is proudly produced in Maryland out of the startup studios of DataTribe, where they're co-building the next generation of cybersecurity teams and technology. Our amazing CyberWire team is Stefan Vaziri, Kelsea Bond, Tim Nodar, Joe Carrigan, Carole Theriault, Nick Veliky, Bennett Moe, Chris Russell, John Petrik, Jennifer Eiben, Peter Kilpe. And I'm Dave Bittner. Thanks for listening. We'll see you tomorrow.