The CyberWire Daily Podcast 10.19.18
Ep 708 | 10.19.18

Chinese supply-chain hack story gets vanishingly thin. Twitter downs pro-Saudi bots. SEO poisoning. OceanLotus evolves. Ransomware notes.

Transcript

Dave Bittner: [00:00:03] No one but Bloomberg seems to retain much faith in Bloomberg's story about Chinese supply chain seeding attacks. Twitter blocks bots retailing coordinated Saudi talking points about the disappearance of journalist Jamal Khashoggi. Latvia says it blocked attempts to interfere with its October elections. An SEO poisoning campaign exploits interest in keywords associated with the U.S. midterms. OceanLotus has a new trick. Virginia Tech's Mike Horning joins us to discuss social media regulation. A Connecticut town pays ransom, and ransomware hoods take pity on a grieving father.

Dave Bittner: [00:00:46] Time to take a moment to tell you about our sponsor, ThreatConnect. With ThreatConnect's in-platform analytics and automation, you'll save your team time while making informed decisions for your security operations and strategy. Find threats, evaluate risk and mitigate harm to your organization. Every day, organizations worldwide leverage the power of ThreatConnect to broaden and deepen their intelligence, validate it, prioritize it and act on it. ThreatConnect offers a suite of products designed for teams of all sizes and maturity levels. Built on the ThreatConnect platform, the products provide adaptability as your organization changes and grows. Want to learn more? Check out their newest research paper entitled "Building a Threat Intelligence Platform." ThreatConnect surveyed more than 350 cybersecurity decision-makers nationwide. Research findings include best practices, the impact of threat intelligence programs on businesses, and how organizations with fully mature programs have prevented phishing attacks, ransomware attacks and business email compromise. To check out the research paper, visit threatconnect.com/cyberwire. That's threatconnect.com/cyberwire. And we thank ThreatConnect for sponsoring our show. Major funding for the CyberWire podcast is provided by Cylance.

Dave Bittner: [00:02:11] From the CyberWire studios at DataTribe, I'm Dave Bittner with your CyberWire summary for Friday, October 19, 2018.

Dave Bittner: [00:02:20] Reports of a Chinese supply chain seeding attack continue to look increasingly thin. The U.S. director of national intelligence says that while, of course, the prospect of such attacks is worrisome, the intelligence community can't find any evidence that this one actually happened. DNI Dan Coats said at CyberScoop's CyberTalks session yesterday, quote, "we've seen no evidence of that, but we're not taking anything for granted. We haven't seen anything, but we're always watching," end quote. So the message from the intelligence community seems to be, as NSA's Rob Joyce put it earlier this month, this: looking for that Chinese spy chip on server motherboards may be chasing shadows.

Dave Bittner: [00:03:01] Former intelligence officials now retired to the private sector second the views of the incumbents. Michael Rogers, until this spring the director of NSA, told Forbes mildly, I'm not sure I agree with everything I read. One of his Israeli counterparts, Nadav Zafrir, who formerly led Israel's Unit 8200, told the same publication that he wasn't personally aware of anything like the attack Bloomberg described.

Dave Bittner: [00:03:27] One of the most striking features of the episode is the quick, clear and unambiguous denial by the companies said to have been affected by the chip. None of the purported victims have come forward, and the most prominent companies to be named in the dispatches, Apple and Amazon, would find themselves exposed to considerable reputational and legal risk if their vehement contradiction of the Bloomberg reports were false or unfounded.

Dave Bittner: [00:03:50] The company at the center of the allegations in the Bloomberg story, Super Micro, whose motherboards were said to have been salted with spy chips, has replied to an inquiry from U.S. Senators Rubio and Blumenthal with a categorical denial that it sustained this kind of supply chain attack.

Dave Bittner: [00:04:07] Earlier today, Apple CEO Tim Cook told BuzzFeed that Bloomberg needed to do the right thing and retract its account. Bloomberg hasn't done so, instead offering this statement to BuzzFeed. Quote, "Bloomberg Businessweek's investigation is the result of more than a year of reporting, during which we conducted more than 100 interviews. Seventeen individual sources, including government officials and insiders at the companies, confirmed the manipulation of hardware and other elements of the attacks. We also published three companies' full statements as well as a statement from China's Ministry of Foreign Affairs. We stand by our story and are confident in our reporting and sources," end quote. No other news organization or company that we've found has been able to confirm Bloomberg's account.

Dave Bittner: [00:04:56] Thomas Rid of the Johns Hopkins School of Advanced International Studies and author of "Rise of the Machines" engaged in an uncharacteristic Twitter rant. He tweeted, in part, "Bloomberg's big hack story is the single biggest cockup in infosec reporting that I know of. Before somebody says it again, yes, a supply chain hack is possible in theory. That is not the point. Of course it is. The point is that there is no evidence so far for an alleged operation that should, by definition, create hard evidence if it actually happened. So man up, Bloomberg. Face the facts if you think facts matter. Get to the bottom of what went wrong here. Stop wasting the time of so many people behind the scenes, and try to salvage your badly tarnished reputation in computer security reporting," end quote. That's Thomas Rid, and it would seem he speaks for many other security experts.

Dave Bittner: [00:05:48] One would think that concrete examples of this sort of malicious device would have surfaced by now if, in fact, there were a supply chain seeding campaign of this kind. So keep an open mind about the story if you wish, and of course recognize that supply chain security is a serious matter. Sorry, Professor Rid, for saying it again. But also recognize that so far, as disappointed researchers say, there's no joy. A priori possibility is a good counsel of prudence. But as evidence, it's vanishingly weak.

Dave Bittner: [00:06:18] Twitter has blocked a number of bots that were pushing what appeared to be Saudi government talking points concerning journalist Jamal Khashoggi's apparent murder. Khashoggi, who disappeared into a Saudi Consulate in Turkey on October 2, hasn't been seen since. The bots are relatively low-volume operations, which appears to be one of the reasons they've generally escaped notice, having flown below the radar, as Ben Nimmo, a senior fellow at the Atlantic Council's Digital Forensic Research Lab, puts it.

Dave Bittner: [00:06:46] The bots engage selectively, and only on matters of apparent importance to the kingdom's policy. In this case, they've been using hashtags like #WeAllTrustMohammadBinSalman or #UnfollowEnemiesOfTheNation. The goal would be, as Nimmo observed to NBC News, to push the kingdom's messaging into trending on Twitter, where the regime's talking points are likely to find new and potentially receptive viewers.

Dave Bittner: [00:07:13] Latvian sources say the country sustained but parried cyber attacks apparently directed at affecting the October 6 elections. Some of the temporarily successful attacks posted pro-Russian messages in social media.

Dave Bittner: [00:07:27] There's some newly observed election-related activity in the U.S. as well. But this seems to be of the ordinary criminal kind, quite uninterested in affecting the outcome of voting. Security firm Zscaler reports that a search engine optimization poisoning campaign, SEO poisoning for short, is in progress. The perpetrators are using keywords likely to be associated with the American mid-term elections to drive traffic to sites that advertise various scams or to watering holes that expose visitors to exploit kits or at least to potentially unwanted programs.

Dave Bittner: [00:08:02] Security firm Cylance reports that the Vietnamese cyber espionage group OceanLotus, also known as APT32 or Cobalt Kitty, has shown renewed activity and upped its game in several respects, including through the use of obfuscated Cobalt Strike Beacon payloads for command and control.

Dave Bittner: [00:08:23] The town of West Haven, Conn., suffered a ransomware attack. Unable to think of any better option, the town decided to pay the $2,000 the hackers demanded. The mayor says the criminals have restored West Haven's access to its data. An effective system of backing up data would have spared them the trouble, expense and humiliation.

Dave Bittner: [00:08:44] And finally, the hoods behind the GandCrab ransomware have released decryption keys to a Syrian man who said they'd deprived him of photos of his sons killed in that country's civil war. The extortionists also sent some ambiguous signals that they might remove Syrian targets from their hit list. We hope a grieving father got his memorabilia back, but we're not going to give the GandCrab masters much credit for honor among thieves.

Dave Bittner: [00:09:16] And now a bit about our sponsors at VMware. Their trust network for Workspace ONE can help you secure your enterprise with tested best practices. They've got eight critical capabilities to help you protect, detect and remediate. A single open-platform approach, data loss prevention policies and contextual policies get you started. They'll help you move on to protecting applications, access management and encryption. And they'll round out what they can do for you with micro-segmentation and analytics. VMware's white paper on a comprehensive approach to security across the digital workspace will take you through the details and much more. You'll find it at thecyberwire.com/vmware. See what Workspace ONE can do for your enterprise security, thecyberwire.com/vmware. We thank VMware for sponsoring our show.

Dave Bittner: [00:10:10] And I'm pleased to be joined once again by Johannes Ullrich. He is from the SANS Institute. He's also the host of the ISC "StormCast" podcast. Johannes, welcome back. You had some information to share today about DNSSEC root key rollover. What do we need to know?

Johannes Ullrich: [00:10:28] Yes, so DNSSEC is one of those great ideas that never really took off because of some of the technical difficulties in implementing it and rolling it out. Now, one of these issues that has come up recently is the DNS root key. So the way DNSSEC works, essentially, is that you verify all of the information in your DNS server by attaching signatures to it. And to verify the signatures, you publish keys. Ultimately, these keys have to be signed by the root key for the root DNS zone. And that key is sort of hardcoded in the configuration part of your DNS server as trusted.

Johannes Ullrich: [00:11:12] The problem is that this key also has to be rotated every so often. And, well, that time is coming up now. But many people appear not to be ready for this. If you don't update this key, then all data being signed by the new, to-be-issued key will be considered invalid.

Dave Bittner: [00:11:36] So what's to be done here?

Johannes Ullrich: [00:11:38] Well, first of all, verify your DNS server configuration. Make sure you either update the key or have your server configured to do so automatically; there is an option now to do that. In general, with DNSSEC, there are now a couple of options to sort of make it easier to publish your data. Many registrars now support it with really just a quick check of a box. Cloudflare is also now getting into sort of the DNSSEC business, making it easier for you to actually participate in it and publish your information using DNSSEC.
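To make that check concrete, here is a minimal illustrative sketch in Python, not something from the episode, assuming the dnspython library is installed. It queries the root zone's DNSKEY RRset and reports which key-signing keys are published; key tags 20326 (KSK-2017, the new trust anchor) and 19036 (KSK-2010, the outgoing one) are the standard identifiers for the root keys involved in the rollover.

```python
# Illustrative sketch, assuming dnspython is installed (pip install dnspython).
# Queries the root zone's DNSKEY RRset and reports which key-signing keys are
# published. A validating resolver must trust KSK-2017 (key tag 20326) before
# the root zone stops signing with the old KSK-2010 (key tag 19036).

import dns.dnssec
import dns.resolver

KSK_2017 = 20326  # post-rollover trust anchor
KSK_2010 = 19036  # outgoing trust anchor

answer = dns.resolver.resolve(".", "DNSKEY")
for key in answer:
    if key.flags & 0x0001:  # SEP bit set: this is a key-signing key
        tag = dns.dnssec.key_id(key)
        label = {KSK_2017: "KSK-2017 (new)", KSK_2010: "KSK-2010 (old)"}.get(tag, "unrecognized KSK")
        print(f"root KSK published, key tag {tag}: {label}")
```

On the resolver side, the configuration option Ullrich mentions generally amounts to enabling automated trust anchor updates per RFC 5011, for example BIND's dnssec-validation auto setting, rather than pinning the old key by hand.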

Dave Bittner: [00:12:14] So do you think we're going to see wider adoption as we go forward?

Johannes Ullrich: [00:12:18] I hope so. The Cloudflare approach looks somewhat promising. They're also trying to automate a lot of the mechanics behind DNSSEC that have been manual in the past, like, for example, publishing your information with your parent zone, your dot-com or dot-org zone, through your registrar. The way this was done in the past was very failure-prone. So maybe it'll help.

Johannes Ullrich: [00:12:43] But on the other hand, because DNSSEC was so difficult to implement, there are a couple of alternatives coming up now that do most of what DNSSEC does but at a much lower cost when it comes to implementing them. DNS cookies, for example, are one option I actually see taking off quite quickly recently.

Dave Bittner: [00:13:03] So there's some - I don't know, some other choices out there in the market?

Johannes Ullrich: [00:13:08] Yes. DNSSEC is a very secure, very nice protocol the way it's designed, but maybe a little bit over-designed. So it's kind of almost too secure. It also causes some problems, like with denial-of-service attacks and the like. The nice thing about DNS cookies is that you really don't have to configure anything on sort of your average DNS server. They sort of just work out of the box. They're not quite as secure and robust as DNSSEC but are probably good enough to solve 80 percent of the problem at a very minimal cost.
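For a sense of what that looks like on the wire, here is a small illustrative Python sketch, again assuming dnspython is available; the resolver address is just a placeholder. It attaches a random 8-byte client cookie to a query as an EDNS option, per RFC 7873, and checks whether the server echoes a cookie back.

```python
# Illustrative sketch, assuming dnspython is installed (pip install dnspython).
# Sends a query carrying a DNS COOKIE EDNS option (RFC 7873) and checks
# whether the server returns one. Cookie-aware servers append a server cookie,
# which lets them cheaply reject spoofed-source queries on later exchanges.

import os

import dns.edns
import dns.message
import dns.query

RESOLVER = "8.8.8.8"  # placeholder; substitute any resolver you want to test

client_cookie = os.urandom(8)  # initial queries carry just an 8-byte client cookie
query = dns.message.make_query(
    "example.com", "A",
    use_edns=0,
    options=[dns.edns.GenericOption(dns.edns.COOKIE, client_cookie)],
)
response = dns.query.udp(query, RESOLVER, timeout=3)

if any(opt.otype == dns.edns.COOKIE for opt in response.options):
    print("server returned a COOKIE option (client cookie plus server cookie)")
else:
    print("no cookie in the reply; this server may not support RFC 7873")
```

Because the returned server cookie is bound to the client's address, a cookie-aware server can drop or rate-limit cookieless floods, which is what makes the option useful against the denial-of-service abuse Ullrich alludes to.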

Dave Bittner: [00:13:45] Johannes Ullrich, thanks for joining us.

Johannes Ullrich: [00:13:47] Thank you.

Dave Bittner: [00:13:52] Now I'd like to share some words about our sponsor, Cylance. AI stands for artificial intelligence, of course. But nowadays, it also means all image or anthropomorphized incredibly. There's a serious reality under the hype, but it can be difficult to see through to it, as the experts at Cylance will tell you. AI isn't a self-aware Skynet ready to send in the Terminators. It's a tool that trains on data to develop useful algorithms. And like all tools, it can be used for good or evil. If you'd like to learn more about how AI is being weaponized and what you can do about it, visit threatvector.cylance.com and check out the report, "Security: Using AI for Evil." That's threatvector.cylance.com. We're happy to say that their products protect our systems here at the CyberWire. And we thank Cylance for sponsoring our show.

Dave Bittner: [00:14:50] My guest today is Mike Horning. He's an assistant professor of multimedia journalism in the Department of Communication at Virginia Tech, with expertise in the social and psychological effects of communications technologies. He and his colleagues recently conducted a study that asked Americans how they felt about government regulation of social media.

Mike Horning: [00:15:10] There's actually something in communication theory that we look at called the third-person effect. This is a theory that basically says that people have a tendency to overestimate the impact of media in terms of how it influences other people, and they have a tendency to underestimate its effect on themselves. So for example, people might, in the past, say, yeah, lots of people are affected by television, but, you know, television doesn't really affect me. We were kind of curious about this question of fake news.

Mike Horning: [00:15:45] So our questions were, were people overestimating the amount of impact that fake news would have on other people and, in turn, also underestimating the impact that it would have on themselves? So that was the start of, you know, the interest.

Dave Bittner: [00:16:01] So take us through. What did you discover from the survey?

Mike Horning: [00:16:05] Well, we found a couple things. Some surprising, and some not so surprising. You know, the first thing that we found is that, you know, similar to other media influences, we found that people did have a tendency to think that fake news had a greater impact on other people. And they tended to underestimate, you know, the impact that it had on them. So that in itself was not a terribly surprising finding. We kind of expected that.

Mike Horning: [00:16:36] But we did take the research a little bit further, and we asked people, if you were concerned with the impact that fake news had on other people, did you want to see stricter government regulation of social media to protect you from, you know, influences of fake news? We thought that people, if they were more concerned, particularly if they were more concerned about its impact on others, would probably see a greater need for, you know, more government regulation. And we found that to be actually not true. People said, yep, we are concerned, but we don't want to see a lot of government oversight on social media.

Mike Horning: [00:17:19] The other interesting finding that we did discover is we also asked people, if you were concerned with fake news, how did it influence your news-sharing habits? And when we said news-sharing, we meant all news. So, you know, it could be mainstream news. It could be, you know, nontraditional sites. And what we found is that people who were more concerned with the impact of fake news in their social feed were overall more likely to avoid sharing all news in their social feeds.

Mike Horning: [00:17:53] So we thought that was an interesting finding, you know, on a number of levels. The - you know, the indirect influence or indirect impact of fake news is that it could discourage people from sharing actually legitimate news. You know, secondary impact could be that it could affect the bottom lines of news industries, who in part are dependent on, you know, people sharing that content in their social feeds.

Dave Bittner: [00:18:17] Yeah. It's interesting that I guess news itself maybe has a bad odor on it because of the implication that it might be fake news?

Mike Horning: [00:18:29] It could be that, but it could be that people are having difficulty knowing what is fake news and what is not. And so, you know, it might be natural for people to just say, well, I'm just not going to share news at all. Rather, you know, be safe than sorry.

Dave Bittner: [00:18:46] I suppose - I mean, we hear so much about people kind of self-siloing in these environments, building bubbles for themselves.

Mike Horning: [00:18:53] Yeah. And that is another challenge, I think, that we - you know, that we are facing. You know, some of that is because of algorithms, you know, in the social feeds that do - basically, you know, it's not a conspiracy, per se. It's just that the algorithms in your social feeds are designed to give you information that you're interested in. So every time you click on a piece of news or a news site, that algorithm, you know, correlates it with other information that you might be interested in.

Mike Horning: [00:19:27] And so very quickly, you can kind of find yourself siloed in terms of, like, the information that you get. You know, and part of it is our own doing. We have a tendency to hide people that annoy us and turn off people who - especially, you know, if we're not politically inclined, or if we are, we have a tendency to gravitate towards those people who confirm our own biases, and we have a tendency to reject those people who don't.

Dave Bittner: [00:19:53] Now, how does all this inform the work that you all are doing there at Virginia Tech in terms of preparing that next generation of journalists?

Mike Horning: [00:20:01] It's something that we certainly talk about in our classes. I teach a class that's actually specifically focused on the influences of technologies on society. I spend a lot of time trying to get students as journalists to think carefully about being fair to different sources. You know, we all have our own biases that - you know, we're always going to be combating those. And I think that's just human nature. And I think that's not so much the problem. The problem is being aware of those biases, and trying to keep them in check and trying to give people the benefit of the doubt when you ask them questions, you know, rather than automatically assuming the worst in someone.

Mike Horning: [00:20:45] And I encourage my students to ask more questions and listen more thoughtfully than anything, I think. I think a good journalist needs to do that first and, you know, ask people, well, why do you think that and have you thought about, you know, this or that? And engage people in meaningful conversations rather than sort of this, you know, combative back and forth desire to prove you're right all the time.

Mike Horning: [00:21:11] We have other areas of research where we're trying to help be a little more proactive in addressing that problem. I'm working with a colleague in computer science right now where we're working on building an application in your Twitter feed that identifies news in your feed that has clearly been marked as fake news and then other news that has been considered questionable content. And our approach to it is actually not to just be sort of, like, the all-knowing seer who says, you know, this news is fake and this news is not - because, you know, we've found in our research that if some place like Facebook or Twitter tells you what to think about the news, people have a tendency to almost reject that.

Mike Horning: [00:22:01] So we try to highlight questions in the newsfeed that other people in the feed have had so we can kind of encourage more of, you know, a citizen-to-citizen kind of conversation, and then let people decide for themselves whether they agree with it or not. Our thinking is just to provide sort of these nudges that encourage people to just kind of think a little more critically about the information in their feeds.

Dave Bittner: [00:22:26] That's Mike Horning from Virginia Tech.

Dave Bittner: [00:22:33] And that's the CyberWire. A quick program note. I'll be on vacation next week, and CyberWire executive editor Peter Kilpe will be filling in while I'm gone. Go easy on him.

Dave Bittner: [00:22:44] Thanks to all of our sponsors for making the CyberWire possible, especially to our sustaining sponsor, Cylance. To find out how Cylance can help protect you using artificial intelligence, visit cylance.com. And Cylance is not just a sponsor. We actually use their products to help protect our systems here at the CyberWire. And thanks to our supporting sponsor, VMware, creators of Workspace ONE Intelligence. Learn more at VMware.com.

Dave Bittner: [00:23:10] The CyberWire podcast is proudly produced in Maryland out of the startup studios of DataTribe, where they're co-building the next generation of cybersecurity teams and technology. Our CyberWire editor is John Petrik, social media editor Jennifer Eiben, technical editor Chris Russell, executive editor Peter Kilpe. And I'm Dave Bittner. Thanks for listening.