The CyberWire Daily Podcast 8.24.18
Ep 670 | 8.24.18

More action against Iranian influence operations. Tehran's cyberespionage against universities. Counter-value targeting in cyber deterrence. Sino-Australian trade war? Law and order.

Transcript

Dave Bittner: [00:00:03] Google puts the cats out. Secureworks describes an Iranian cyberespionage campaign targeting universities. That DNC phishing campaign is confirmed to be a false alarm caused by a Michigan misstep. But almost 15 million voter records appear to have been inadvertently exposed in Texas. The U.S. tells Russia to knock off the influence operations. And some suggest a countervalue deterrence strategy to tame the bears. China warns Australia its new government will face trade retaliation for banning ZTE and Huawei. Reality Winner gets five years. And two Minnesota lawyers go away, too.

Dave Bittner: [00:00:47] Now I'd like to share some words about our sponsor Cylance. AI stands for artificial intelligence, of course. But nowadays, it also means "all image" or "anthropomorphized incredibly." There's a serious reality under the hype, but it can be difficult to see through to it. As the experts at Cylance will tell you, AI isn't a self-aware Skynet ready to send in the Terminators. It's a tool that trains on data to develop useful algorithms. And like all tools, it can be used for good or evil. If you'd like to learn more about how AI is being weaponized and what you can do about it, visit threatvector.cylance.com and check out their report "Security: Using AI for Evil." That's threatvector.cylance.com. We're happy to say that their products protect our systems here at the CyberWire, and we thank Cylance for sponsoring our show. Major funding for the CyberWire podcast is provided by Cylance.

Dave Bittner: [00:01:46] From the CyberWire studios at DataTribe, I'm Dave Bittner with your CyberWire summary for Friday, Aug. 24, 2018. FireEye said that YouTube was infested with Iranian front accounts. And yesterday, Google took action to terminate dozens of them. They were channels for the Islamic Republic of Iran Broadcasting, the state-run media outlet that's been under U.S. sanctions since 2013. The YouTube channels have been fronting for Tehran since at least January 2017. Google stopped 39 video channels on YouTube, six accounts on its Blogger platform and 13 accounts on Google Plus. All of them were connected to the Islamic Republic of Iran Broadcasting Service. The YouTube channels had 13,466 views of the inauthentic videos according to Google.

Dave Bittner: [00:02:41] Iran, like Russia, has said it didn't do nothing. It is, a member of Iran's U.N. delegation said, a nonsensical accusation to say the Islamic Republic is conducting an organized campaign of propaganda. Iranian facility with information operations predates the Islamic Revolution itself. It's worth recalling the role cassette tapes bearing the Ayatollah Khomeini's sermons played in the uprising that deposed the Shah.

Dave Bittner: [00:03:08] There have also been long-running Iranian espionage operations. The Secureworks Counter Threat Unit this morning reported its discovery of one of them by the threat actor they call Cobalt Dickens. It's an extensive Iranian credential-stealing campaign that targeted universities in 14 countries, using 16 domains hosting more than 300 spoofed pages. Australia, Canada, China, Israel, Japan, Switzerland, Turkey, the United Kingdom and the United States were among the countries whose universities were prospected. Secureworks notes two things about target selection. First, universities can generate interesting intellectual property and technology that would be attractive to an espionage service. And second, universities are relatively soft targets, much more poorly protected than most sectors that would be of comparable interest.

Dave Bittner: [00:03:59] This week's takedowns by Microsoft, Facebook, Twitter and Google suggest that screening for authenticity - that is, determining that the people who post the content are who they say they are, at least more or less - may be a more promising approach to some of the more troubling forms of influence operations than are more aggressive attempts to screen for content - trustworthiness, appropriateness and so on. These have aroused concerns about freedom of speech and about the potentially monopolistic power of big tech. Winkling out the inauthentic seems, on the face of it, less problematic.

Dave Bittner: [00:04:35] The Democratic Party confirmed that its phishing false alarm was produced by overzealous, ill-conducted red teaming by the party's Michigan wing. Again, realistic training and evaluation are good things, but they have to be properly coordinated. Don't just freelance this stuff. Another election security own goal was reported late yesterday in Texas, where nearly 15 million voter records were found on an exposed server by a New Zealand breach hunter who goes by the nom de hack Flash Gordon. It's so far unknown who mishandled the data, but misconfiguration hunters at security firm UpGuard suggest, on the basis of preliminary and circumstantial evidence, that it may have been the Republican-leaning firm Data Trust. UpGuard notes that it found a similar exposure at the company Deep Root Analytics, which sourced much of its information from Data Trust.

Dave Bittner: [00:05:31] U.S. national security adviser Bolton is calling for Russia to knock off its attempts to influence U.S. elections. Coincidentally or not, an Atlantic Council think piece reminds everyone of the Panama Papers and suggests that if you want to deter Russian cyber operations, a sound countervalue retaliatory strategy would be to go after the oligarchs' bank accounts.

Dave Bittner: [00:05:54] The Panama Papers were the take of a 2016 incident in which the hack of a law firm that specialized in offshore financial transactions revealed information about the ways in which influential, wealthy Russians were moving money around. A St. Petersburg cellist, one Sergei Roldugin, was noted to have received more than $2 billion from the Russian government and various oligarchs. Mr. Roldugin is a childhood friend of this guy Vladimir Putin, and he's widely believed to have been holding the swag for his old buddy. Offshore money - anonymous offshore money - the Atlantic Council argues, is vital to President Putin's hold on power. If it were no longer safe - even if it were merely no longer anonymous - that would be a serious matter, and one the Russian government would be more likely to take seriously than it would, say, some sanctions that cost a lot of little people their jobs and livelihoods.

Dave Bittner: [00:06:51] Not all Russian trolling is aimed at election influence. A great deal of it is devoted to inciting mistrust and fomenting misery. A good example going on now may be seen in Russian social media accounts systematically flacking anti-vaccine conspiracy theories, especially the claim that measles, mumps and rubella vaccine causes autism. Among the crueler impostures on offer from Moscow and St. Petersburg is a false story that the vaccine left three-quarters of a Mexican village's children either dead or hospitalized.

Dave Bittner: [00:07:26] China promises trade retaliation against Australia for excluding Huawei and ZTE from its coming national 5G network. Such retaliation will be a new government's problem. Malcolm Turnbull is out as Australia's prime minister, replaced by his ally Scott Morrison in a Liberal Party vote. The decision to keep the Chinese manufacturers' devices out of the new network was prompted by security concerns.

Dave Bittner: [00:07:53] NSA alumna and leaker Reality Winner was sentenced to five years in a federal prison yesterday. She had entered a guilty plea to charges related to leaking highly classified material to a news outlet. That outlet wasn't named in the charging documents, but it's widely and credibly believed to be The Intercept. The sentence of five years and change is believed to be the stiffest one a U.S. court has ever handed down in a case of leaking to journalists.

Dave Bittner: [00:08:21] At her sentencing, Ms. Winner said she accepted responsibility for, quote, "an undeniable mistake that I made," end quote. She went on to tell the judge, I would like to apologize profusely for my actions, which she characterized as a cruel betrayal of my nation's trust in me. Ms. Winner, you will recall, was identified quickly by federal investigators on the basis of dots in the printed copy of a document she passed to The Intercept, which showed it to intelligence officers in an attempt to corroborate its authenticity. The dots led to the printer, and the network logs led to Ms. Winner.

Dave Bittner: [00:08:58] And finally, two creeps in Minneapolis - both lawyers, it's sad to say - have copped guilty pleas related to adult content creation and extortion. They made adult video nasties, put them up on various BitTorrent sites, noted who downloaded them and then sent an extortion demand to those whose curiosity got the better of them: pay $3,000 or be humiliated in court. The two created shell companies to operate as copyright plaintiffs. The miscreants, may their names live in infamy and infamy only, are Paul Hansmeier and John Steele, both now disbarred and awaiting sentencing.

Dave Bittner: [00:09:42] And now a bit about our sponsors at VMware. Their Trust Network for Workspace ONE can help you secure your enterprise with tested best practices. They've got eight critical capabilities to help you protect, detect and remediate. A single open platform approach, data loss prevention policies and contextual policies get you started. They'll help you move on to protecting applications, access management and encryption. And they'll round out what they can do for you with micro-segmentation and analytics. VMware's white paper on "A Comprehensive Approach to Security Across the Digital Workspace" will take you through the details and much more. You'll find it at thecyberwire.com/vmware. See what Workspace ONE can do for your enterprise security, thecyberwire.com/vmware. And we thank VMware for sponsoring our show.

Dave Bittner: [00:10:42] And I'm pleased to be joined, once again, by Ben Yelin. He's a senior law and policy analyst at the University of Maryland Center for Health and Homeland Security. Ben, welcome back. We had a story come by from CyberScoop. The title was "The Latest Attempt by the State Department to Set Behavior Norms," written by Sean Lyngaas. What's going on here? What's the State Department trying to accomplish?

Ben Yelin: [00:11:05] So Congress, basically dating back to the last couple of years of the Obama administration, has been very concerned that we don't have an international strategy to avoid the type of cyberattacks that have plagued us in the past few years, specifically some of the high-profile ones. In the private sector, we had Equifax. In the public sector, we had the Office of Personnel Management within the federal government.

Ben Yelin: [00:11:29] So the Trump administration has proposed a broader set of consequences that the government can impose on adversaries to ward off cyberattacks. This is a document that was unclassified recently. And it calls for the U.S. to work with our allies to inflict swift, costly and transparent consequences on those governments that use significant malicious cyberactivity to harm U.S. interests. Part of that is clearly identifying the malicious activity that exists, that we're seeking to deter. And part of it is developing a separate strategy for each of our cyber-adversaries. So we're going to have to take a different approach when it comes to Chinese hackers versus Russian hackers or North Korea. And I think it'll take a specific set of objectives to deal with each of those threats.

Ben Yelin: [00:12:20] One thing that's worth noting is that, even though this directive has been written by the State Department, there's still a major leadership void in terms of our diplomatic efforts on cybersecurity. The department has been without a cybersecurity coordinator for 10 months. The deputy assistant secretary has been serving as the top diplomat. But you know, this is a confirmable position in the United States Senate. The president has been behind on State Department nominations basically since his inauguration. And so without the sort of strong leadership that comes from a department head, I think not only will the report have less teeth because it's not going to be backed by the full weight and force of the department, but it sends a signal to our allies that it's not necessarily a priority for us. And I think that's very concerning.

Dave Bittner: [00:13:10] Yeah. I was going to ask, you know, how much of this is simply putting our potential adversaries on notice of, say - rather than necessarily having teeth behind it, saying - hey, you know, we've got our eyes on you.

Ben Yelin: [00:13:23] Yeah. I mean, I think that's a huge part of it. And you know, obviously, acknowledging the problem is the first step. And that's another instance - I think we've talked about this in the past - where the Trump administration separates a little bit from Trump himself, who, even though he, under his administration, has formulated a pretty cohesive cybersecurity strategy, the president himself often undermines it by, for example, dismissing Russia's electronic interference with our 2016 presidential election. So yeah, I mean, it's a way of putting our adversaries on notice that we're focusing on the problem. And then it's an olive branch to our allies that we're willing to work with them to root out these threats. But it is just a statement of policy. And you know, without strong leadership, it's going to remain simply a document in the State Department and not something that's (inaudible) our international efforts.

Dave Bittner: [00:14:20] And a noticeably understaffed State Department at that.

Ben Yelin: [00:14:25] Absolutely. And I would note, this is not the only department within the Department of State that's understaffed. I think - certainly, diplomatic outposts have been ravaged. Since the early days of this administration, we've lost a lot of our top diplomats. And this is a particularly tough area to be without leadership because it's such an emergent threat. We've seen the consequences of malicious foreign actors instituting cyberattacks within the United States. So I think leadership is needed now more than ever.

Dave Bittner: [00:14:53] All right. Ben Yelin, thanks for joining us.

Dave Bittner: [00:15:00] And now a word from our sponsor, the upcoming Cybersecurity Conference for Executives. The Johns Hopkins University Information Security Institute and Navigant will host the event on Tuesday, Oct. 2, in Baltimore, Md., on the Johns Hopkins Homewood campus. The theme this year is "Cybersecurity Compliance and Regulatory Trends." And the conference will feature discussions with thought leaders across a variety of sectors. You can find out more and register at isi.jhu.edu. And click on the 5th Annual Cybersecurity Conference for Executives. Learn about emerging regulations and how the current cybersecurity landscape is changing as companies must adhere to these regulations and take actionable steps to become compliant. Check out all the details at isi.jhu.edu, and click on the 5th Annual Cybersecurity Conference for Executives.

Dave Bittner: [00:16:01] My guest today is Theresa Payton. She's CEO at Fortalice Solutions, a cybersecurity consulting company, and co-founder of Dark Cubed, a cybersecurity product company. She's a former White House CIO and was recently featured on the CBS television series "Hunted." She's also one of the keynote speakers at the upcoming 2018 (ISC)2 Security Congress, which is taking place October 8 through the 10th in New Orleans. The CyberWire is proud to be a media sponsor of that event.

Theresa Payton: [00:16:33] With all technology, whether it's new inventions in the internet of things, artificial intelligence, machine learning and blockchain, they're really revolutionizing how we conduct business. But they're not 100 percent fail-safe. And really, it's going to be up to businesses and the user to trust but verify. And it's going to be up to the security community to really come up with the right designs to design for the layman. And that is something really missing right now.

Dave Bittner: [00:17:04] And it strikes me that some of these things, as they grow in popularity - things like blockchain, cryptocurrency, certainly artificial intelligence - you know, they kind of become the flavor of the month. And there's a cluster of activity around them. And I can't help but wonder if we do people a disservice with all of the hype that they generate.

Theresa Payton: [00:17:21] I love the kind of saying of, is it hype or reality? And so for example, there are internet of things teakettles out there. And an internet of things teakettle was looked at by security researchers. They took it into a corporation. They set it up. And who doesn't love a really great cup of tea? The way the teakettle works is it's connected to the Wi-Fi. And you can actually, on an app in your phone, tell the teakettle you're on your way into work. And it'll boil the water so that you can brew a perfect cup of tea. And it won't burn out, and it won't over boil. So it's, like, the perfect situation.

Theresa Payton: [00:18:00] But this internet of things teakettle, it grabbed the corporate key to the network in order to authenticate. And when the security researchers set up a rogue Wi-Fi hotspot, the chatty little teakettle actually gave up the corporate access to the network. So that's an example where, from a hype perspective, we're integrating these newer technologies everywhere within corporate infrastructure, not really thinking about it. Who would think a teakettle would be the weak link in getting into your company? We're so focused on the human clicking on links, and we're forgetting about the teakettles and the thermostats and the security access to a building.

Dave Bittner: [00:18:40] So what are your recommendations for organizations to get a handle on this, to avoid the hype and to be able to get messages that are based in reality?

Theresa Payton: [00:18:50] This is a really great question. And so - for example, one of the things I would ask a company to be thinking about is, as you integrate these new technologies, how do you segregate out those assets that matter to you the most? Maybe it's your intellectual property; maybe you're doing mergers and acquisitions due diligence; could be customer data; health care data - whatever it is that's really those digital assets that are so important to you, how are you going to safeguard them or cordon them off from these newer technologies that you have to integrate but, at the same time, we all know the security is not where it needs to be? And so if you think about a design where you say, logically and physically, I'm not going to allow a teakettle to be the way that cybercriminals access these important digital assets.

Dave Bittner: [00:19:42] So as you look at the landscape of where we are today when it comes to cybersecurity, is there anything that you feel isn't getting the attention that it deserves, anything that we're missing?

Theresa Payton: [00:19:54] Well, I do believe that with artificial intelligence, machine learning and blockchain - all of these are still in their infancy as aids to security - but they really will be game changers. But artificial intelligence and machine learning are already giving us some promise. So it's moving from hype to actual reality. So one of the things that we're seeing is you can actually baseline your network traffic. You can baseline all of your user access. And then you can use artificial intelligence and machine learning to show you those anomalies because right now in security operations centers, over 70 percent of the alerts that come in are false alarms. So how do we reduce the false alarms so you don't miss the needle in the haystack? So there are some promises here where we're moving from that hype to reality. But blockchain is still incredibly complex and expensive to implement. And artificial intelligence is only as good as the engineer and the requirements that you gave to the engineer.
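
A rough sketch of the baselining idea Payton describes follows. None of it comes from the episode: the feature set (bytes, packets, duration), the use of scikit-learn's IsolationForest and the contamination rate are illustrative assumptions, and a real security operations center would baseline far richer telemetry per user and per host.

# Baseline "normal" connection features, then flag departures with an
# unsupervised model. All numbers below are made up for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Pretend baseline: a week of ordinary flows, summarized per connection as
# [bytes transferred, packet count, duration in seconds].
baseline = rng.normal(loc=[5_000, 40, 2.0], scale=[1_500, 10, 0.5], size=(10_000, 3))

# Learn what "normal" looks like from the baseline alone.
model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# Score new flows: one routine connection, one exfiltration-sized outlier.
new_flows = np.array([
    [5_200, 42, 2.1],       # close to the baseline
    [250_000, 900, 30.0],   # far outside it
])
print(model.predict(new_flows))        # 1 = looks normal, -1 = flag for an analyst
print(model.score_samples(new_flows))  # lower score = more anomalous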

Dave Bittner: [00:20:59] Do you have any advice for those folks who are on the sales side of things? To avoid the hype, to earn the respect of the people they're trying to sell to, what should their approach be?

Theresa Payton: [00:21:10] First of all, study. Really look at what's going on in the marketplace, who's really using these newer technologies. And make the use case. And so relate what you see as far as the early adopters of this technology to the person you're talking to. So for example, when you look at blockchain, the majority of blockchain implementations have been in the financial services industry as far as anything outside cryptocurrency. So the question is, if you're sitting with a manufacturer, how do you relate those use cases and the user scenarios to manufacturing? How do you translate that? How do you calculate ROI? It's still so new. How do you say - well, blockchain's just better - and that's your ROI? How do you calculate that?

Theresa Payton: [00:22:01] Same thing with artificial intelligence and machine learning - there's really a buyer beware in my mind. So one of the things that I've seen is on the customer service side. Lots of companies have moved towards customer service run by AI chatbots, and their customers love it. They don't know it's not a human being. But what they do notice is that these AI chatbots, they're never grumpy. They always have a good day, and they always have the answer.

Theresa Payton: [00:22:28] But if companies don't have a trust-but-verify process to figure out - well, how is it that these chatbots always have the answer? One company that we work with was surprised to learn, when we asked that question - so they did their inspection - that the AI chatbots were actually escalating each other's privileges because the engineer had been told, we want the chatbots to be self-learning, contextually aware, always have the answer, to not have the customer insist on talking to a human being or going to a brick-and-mortar. And so it's all about effectiveness, efficiency and customer delight. No one ever mentioned - oh, don't forget, we really need to follow our user access controls. And so what we found was these chatbots all had superuser access. Just takes one chatbot to be compromised, and the next thing you know, you're getting your customer data stolen right out from under your nose.
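
The control Payton points to, user access controls, boils down to least privilege plus a rule that no account can broaden its own rights. Below is a minimal sketch of that pattern; every name in it (the roles, the permissions, the grant_role helper) is hypothetical and illustrative, not any particular product's API.

# Service accounts for chatbots get a fixed, least-privilege role, and no
# account may change its own role, so a compromised or "self-learning" bot
# cannot quietly escalate itself to superuser.
from dataclasses import dataclass

ROLE_PERMISSIONS = {
    "chatbot": {"read_faq", "read_order_status"},
    "support_agent": {"read_faq", "read_order_status", "read_customer_record"},
    "superuser": {"read_faq", "read_order_status", "read_customer_record",
                  "export_customer_data", "grant_role"},
}

@dataclass
class Account:
    name: str
    role: str

def allowed(account: Account, permission: str) -> bool:
    """Central check every request must pass; roles are assigned out of band."""
    return permission in ROLE_PERMISSIONS.get(account.role, set())

def grant_role(actor: Account, target: Account, new_role: str) -> None:
    """Only accounts holding 'grant_role' may change roles, and never their own."""
    if actor.name == target.name:
        raise PermissionError("accounts may not change their own role")
    if not allowed(actor, "grant_role"):
        raise PermissionError(f"{actor.name} may not grant roles")
    target.role = new_role

bot = Account("faq-bot-01", "chatbot")
print(allowed(bot, "read_faq"))               # True: within its job
print(allowed(bot, "export_customer_data"))   # False: least privilege holds
try:
    grant_role(bot, bot, "superuser")         # a compromised bot trying to self-escalate
except PermissionError as err:
    print("blocked:", err)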

Dave Bittner: [00:23:28] Our thanks to Theresa Payton from Fortalice for joining us. Once again, she'll be one of the keynote speakers at the 2018 (ISC)2 Security Congress. That's taking place October 8 through the 10th in New Orleans.

Dave Bittner: [00:23:45] And that's the CyberWire. Thanks to all of our sponsors for making the CyberWire possible, especially to our sustaining sponsor Cylance. To find out how Cylance can help protect you using artificial intelligence, visit cylance.com. And Cylance is not just a sponsor. We actually use their products to help protect our systems here at the CyberWire.

Dave Bittner: [00:24:04] And thanks to our supporting sponsor VMware, creators of Workspace ONE Intelligence. Learn more at vmware.com.

Dave Bittner: [00:24:13] The CyberWire podcast is proudly produced in Maryland out of the startup studios of DataTribe, where they're co-building the next generation of cybersecurity teams and technology. Our CyberWire editor is John Petrik; social media editor, Jennifer Eiben; technical editor, Chris Russell; executive editor, Peter Kilpe. And I'm Dave Bittner. Thanks for listening.