Cyber deterrence? What grid failure looks like (and it needn’t come from a cyberattack). EU complains of Russian info ops. Twitter takes down inauthentic accounts.
Dave Bittner: [00:00:03] The New York Times reports that the U.S. has staged malware in Russia's power grid, presumably as a deterrent against Russian cyberattacks on the U.S. South America has largely recovered from a large-scale power outage that so far seems to have been accidental. An EU report claims that Russian information operations against the EU are increasing. Twitter takes down more inauthentic sites. What to make of claims of weaponization of artificial intelligence. And the Target outage over the weekend seems to have been caused by glitches, not hacking.
Dave Bittner: [00:00:42] And now a word from our sponsor, ExtraHop, the enterprise cyber analytics company delivering security from the inside out. Prevention-based tools leave you blind to any threats inside your network. By adding behavioral-based network traffic analysis to your SOC, you can find and stop attackers before they make their move. ExtraHop illuminates the dark space with complete visibility at enterprise scale, detects threats up to 95% faster with machine learning and guided investigations that help Tier 1 analysts perform like seasoned threat hunters. Visit extrahop.com/cyber to learn why the SANS Institute calls ExtraHop fast and amazingly thorough, a product with which many SOC teams could hit the ground running. That's extrahop.com/cyber. And we thank ExtraHop for sponsoring our show.
Dave Bittner: [00:01:39] From the CyberWire studios at DataTribe, I'm Dave Bittner with your CyberWire summary for Monday, June 17, 2019.
Dave Bittner: [00:01:47] The New York Times says, in a largely anonymously sourced piece, that the U.S. has staged implants in the Russian electrical grid to enable the U.S. to impose costs on widely expected Russian misbehavior during the 2020 elections. This would be battlespace preparation as opposed to an attack. It’s worth noting here that the article itself is much clearer on this than is the headline that accompanied it, which said U.S. escalates online attacks on Russia’s power grid.
Dave Bittner: [00:02:17] The operation would appear to be a deterrent move intended to dissuade Russia from cyberattacks and influence operations against the U.S. No one in the U.S. government has had anything to say publicly, and the sources the Times cites in its article are former and current officials - that’s sources on the alleged staging itself. Plenty of observers have been willing to comment on the record.
Dave Bittner: [00:02:40] Precedent for active cyber operations may be seen in U.S. response to Russian election influence operations in 2018. Lawfare had a useful summary of presumed Cyber Command action against the Internet Research Agency, which President Trump more or less confirmed in a Fox interview back in May. Others see similarities to the allegedly planned but apparently never executed NitroZeus operation prepared during the previous administration against Iran, which is said to have been a comprehensive takedown of Iran’s infrastructure in the event Iran’s nuclear program brought that country and the U.S. into open warfare.
Dave Bittner: [00:03:19] A report of U.S. staging in Russian power infrastructure comes shortly after Dragos reported signs that Xenotime, the activity group responsible for the Trisis - also called Triton - malware used against a petrochemical facility in the Middle East, had been seen in the North American power grid. This activity appeared to be reconnaissance. FireEye, which discussed renewed Triton activity in April, has attributed the campaign to the Russian government, specifically to the Central Scientific Research Institute of Chemistry and Mechanics.
Dave Bittner: [00:03:53] If the New York Times has its story right, the operation it reports would seem to be deterrence. For deterrence to work, the threatened retaliation must be credible, and the adversary must know about it. If that’s the point of any background discussions with the New York Times, then mission accomplished.
Dave Bittner: [00:04:11] And if this is deterrence, it’s worth noting that there’s another similarity with classic Cold War nuclear deterrence - the strategy seems to represent a predominantly countervalue approach. Countervalue deterrence holds something at risk the adversary values but which need have no direct military significance. Counterforce strategies, on the other hand, threaten reprisal against military targets. The deterrence of mutually assured destruction during the Cold War, which held cities at risk, was an example of countervalue strategy. It’s also worth noting that an attack on electrical power distribution anywhere harms civilian targets at least as much as it does military ones.
Dave Bittner: [00:04:52] For an object lesson in what a large-scale temporary grid failure looks like, see the weekend's outage in South America. Argentina and Uruguay were most heavily affected, with effects also felt in Brazil, Chile and Paraguay; all have for the most part recovered. The outages do not appear to be the result of a cyberattack, but some observers have interpreted comments from Argentina's government that such an attack hasn't been ruled out as evidence of suspicion, rather than as the normal caution one would exercise in responding to questions about an investigation that's still in its early stages. As far as is known so far, the power failures seem to be accidents of the kind that Argentina’s energy minister says happen regularly. They’re remarkable for their extent but not necessarily for their cause.
Dave Bittner: [00:05:43] Last week, attention was drawn to Facebook's policies toward the removal of deep fake videos. They had been criticized for not removing a modified video of House Speaker Nancy Pelosi that was unflattering. And in response, someone posted a deep fake video featuring Facebook CEO Mark Zuckerberg. CyberWire's Tamika Smith explores this new era of information warfare.
Tamika Smith: [00:06:07] When you think weaponized artificial intelligence, you may remember the movie "2001: A Space Odyssey." In a specific scene, one of the astronauts, Dave, is trying to get the machine to let him onto the spacecraft to thwart the machine's master plan.
(SOUNDBITE OF FILM, "2001: A SPACE ODYSSEY")
Keir Dullea: [00:06:23] (As Dr. Dave Bowman) Do you read me, HAL?
Douglas Rain: [00:06:24] (As HAL 9000) Affirmative, Dave. I read you.
Keir Dullea: [00:06:27] (As Dr. Dave Bowman) Open the pod bay doors, HAL.
Douglas Rain: [00:06:31] (As HAL 9000) I'm sorry, Dave. I'm afraid I can't do that.
Tamika Smith: [00:06:35] No spoiler alert here - we all know that Dr. David Bowman survives, but the rest of the Discovery 1 crew aren't so lucky. We are far from this '60s vision of a physical machine vs. man battle, but experts say the weaponization of AI is leading the way for a new era of information warfare. Here to talk more about this is Britt Paris. She's a researcher at Data & Society. It's a research institute focused on social and cultural issues that come from data-centric technology development. Hi, Britt. Welcome to the program.
Britt Paris: [00:07:04] Hi. Thanks for having me.
Tamika Smith: [00:07:05] You've written extensively about this topic, and most recently you co-wrote an article on Slate entitled "Beware The Cheapfakes." Deep fakes are troubling, but they don't have to be high-tech to be damaging. This was directly related to the AI-generated videos of Facebook CEO Mark Zuckerberg and House Speaker Nancy Pelosi. Let's start with the technological terms here. What's the difference between a deep fake and a cheap fake?
Britt Paris: [00:07:29] So deep fakes are artificial intelligence-generated videos of any sort, and cheap fakes are the types of manipulative videos that have been around forever. They increasingly rely on free software that allows, you know, very easy manipulation of videos through really conventional editing techniques - techniques like speeding up content, slowing it down, as we saw in the Pelosi video, as well as recontextualizing existing footage from previous events.
Tamika Smith: [00:08:01] I must say, when I watched the video, it was very difficult to tell the difference if it was real or fake. What's the technology that's driving the creation of this type of content?
Britt Paris: [00:08:10] So with the Mark Zuckerberg example in particular, it was produced by an advertising company named Canny. And Canny produced video with the help of artists in this proprietary, artificial intelligence-generated video dialogue replacement model that allowed them to take video of Zuckerberg testifying - I believe it was April of 2018 - to take the voice that they had recorded and sort of insert it into the video of Zuckerberg testifying to Congress last year.
Tamika Smith: [00:08:45] With the spread of this new technology, how do we detect what's real and what's fake?
Britt Paris: [00:08:50] There are a few different things. So with the Zuckerberg example, primarily looking at, you know, voice replacement technology - and so you can hear some sort of buzzes and clips, where they're going in and changing the voice. But, you know, whenever it's just sort of a face that is transmogrified onto an existing video, you can look for things like artifacting or sort of pixelation or blurring around where the face is inserted into the video. You can look at whether or not the eyes blink because, you know, if you think about it, training data is taken from images where people's eyes are generally open.
Britt Paris: [00:09:25] In a lot of these videos that are produced through artificial intelligence, the eyes won't blink because, you know, the training data doesn't blink. Generally, there's changing color in the faces of people, you know, when they're filmed live on video, and that doesn't happen whenever the video is made with artificial intelligence or made from training data through artificial intelligence methods.
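The blink-rate cue Paris describes can be sketched in a few lines. This is an illustrative example, not any vendor's actual detector: it assumes eye-landmark coordinates are already available from a face-landmark detector (e.g., dlib or OpenCV would normally supply them), and the threshold and frame-count values are hypothetical choices.

```python
# Sketch of the blink-rate heuristic for spotting AI-generated faces:
# models trained mostly on open-eyed images tend to blink rarely.
# Eye landmarks p1..p6 follow the common 6-point eye layout
# (p1/p4 are the corners; p2, p3 top lid; p5, p6 bottom lid).
from math import dist

def eye_aspect_ratio(p1, p2, p3, p4, p5, p6):
    """Eye aspect ratio: roughly 0.3+ when the eye is open, near 0 when shut."""
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def count_blinks(ear_series, threshold=0.2, min_frames=2):
    """Count runs of at least min_frames consecutive low-EAR frames as blinks."""
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:  # blink still in progress at end of clip
        blinks += 1
    return blinks
```

A clip whose per-frame EAR almost never dips below the threshold over many seconds of footage would be one weak signal, among the others Paris mentions, that the face may be synthetic.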
Tamika Smith: [00:09:49] Based on what I've seen with the case with Mark Zuckerberg and Speaker Nancy Pelosi, it doesn't seem to me that social media companies - you know, including Facebook and Twitter, etc. - have a set strategy to deal with this.
Britt Paris: [00:10:04] I know. (Laughter) That's the troubling issue for a lot of people. Social media really rewards content that is novel, inflammatory, that shows people doing sort of outrageous things. It rewards that type of content with, you know, large followings or sort of - it allows it to reach large scales. Because, you know, really what these social media companies are looking for are engagement, clicks, eyeballs because, you know, that's what they use to drive their advertising models.
Tamika Smith: [00:10:38] But based on the amount of people that they reach every day, there has to be some moral obligation.
Britt Paris: [00:10:43] And this is the issue, right? People are trying to press these social media companies for accountability, especially given, you know, the number of debacles that these social media companies have been responsible for producing and fomenting. You know, we can think about examples of WhatsApp, which is owned by Facebook, in Myanmar, in India and in Brazil that have led to very negative consequences - things from inciting violence and even murder, to throwing the elections to a far-right candidate in Brazil. So people are calling out, you know, Facebook, Twitter, WhatsApp, etc., for their role in inciting this type of activity.
Tamika Smith: [00:11:27] Thank you so much for joining the program, Britt, and offering your insight into this topic.
Britt Paris: [00:11:31] Oh, you're welcome. Anytime. Thanks for having me.
Tamika Smith: [00:11:34] Britt Paris is a researcher at Data & Society. It's a research institute focused on social and cultural issues that come from data-centric technology development.
Dave Bittner: [00:11:44] That was the CyberWire's Tamika Smith reporting. By the way, I may have had, as my original error sound effect on my original Macintosh SE/30, HAL 9000 saying, I'm sorry, Dave. I'm afraid I can't do that.
Dave Bittner: [00:12:00] The European Commission has produced a report accusing Russia's government of an extensive social media effort to influence EU election results. The report concludes that, by some indices, Russian disinformation campaigns have more than doubled since 2018, and that their goal remains the same - undermining the legitimacy of European democracies, including, of course, that of the European Union as a whole.
Dave Bittner: [00:12:26] Twitter took down some 5,000 inauthentic accounts late last week. Most of them were being run out of Iran, although a small fraction were operated from Russia, or by people interested in Venezuela’s crisis and the Catalan independence movement in Spain.
Dave Bittner: [00:12:42] Target suffered a widespread point-of-sale disruption over the weekend. The retailer says it recovered yesterday, and that the incident was an accident, not the result of a cyberattack or a data breach. And finally, bravo Bitdefender. The company has released a GandCrab ransomware decryptor.
Dave Bittner: [00:13:04] And now a word from our sponsor, ObserveIT. According to Cisco, over the course of 1 1/2 months, the typical suspicious insider can download 5,200 documents. Unfortunately, many ad hoc insider threat investigations can drag on for weeks or even months since it's tough to know exactly who did what when and why. Security analysts have to wade through a sea of event logs, many of which are completely irrelevant, to eventually discover the root cause of an incident. What if we told you that there's a way to investigate insider threat incidents faster? With ObserveIT's dedicated insider threat management platform, security teams can quickly find out the context into both the user and data activity behind an alert. Detailed user activity timelines and easily searchable metadata help you know the whole story on insider threats. Visit observeit.com/cyberwire to try out ObserveIT's sandbox environment for yourself - no downloads or configuration required. That's observeit.com/cyberwire. And we thank ObserveIT for sponsoring our show.
Dave Bittner: [00:14:20] And joining me once again is Joe Carrigan. He's from the Johns Hopkins University Information Security Institute, also my co-host over on the "Hacking Humans" podcast. Joe, it's great to have you back.
Joe Carrigan: [00:14:30] Hi, Dave.
Dave Bittner: [00:14:30] We've got a story from Ars Technica. This has been making the rounds - a big GDPR fine. This is, "Spanish soccer league's app caught eavesdropping on users in antipiracy push."
Joe Carrigan: [00:14:41] Right.
Dave Bittner: [00:14:41] Now, before we dig into this story, I have a story to share.
Joe Carrigan: [00:14:44] OK.
Dave Bittner: [00:14:44] When I was not long out of college - so this would have been back in the early '90s, I suppose - I had a friend whose job was going around to restaurants and writing down all of the music that the restaurant was playing and reporting that back to the music licensing organizations.
Joe Carrigan: [00:15:02] ASCAP.
Dave Bittner: [00:15:02] ASCAP and BMI.
Joe Carrigan: [00:15:04] Right.
Dave Bittner: [00:15:04] Yeah. And because at the time - and I believe still today - if you were a restaurant playing music in your establishment...
Joe Carrigan: [00:15:13] Right.
Dave Bittner: [00:15:13] ...You had to have a license with ASCAP or BMI.
Joe Carrigan: [00:15:16] ASCAP or BMI.
Dave Bittner: [00:15:16] Or both or whatever.
Joe Carrigan: [00:15:17] Same with radio stations.
Dave Bittner: [00:15:17] And so this friend's job was to go around and basically find restaurants that weren't paying up their licensing fee.
Joe Carrigan: [00:15:25] Right.
Dave Bittner: [00:15:25] And reporting back.
Joe Carrigan: [00:15:27] In violation.
Dave Bittner: [00:15:27] And they would get a strongly worded letter from ASCAP or BMI basically saying, you know, you can pay us now or you can pay us later.
Joe Carrigan: [00:15:34] Right.
Dave Bittner: [00:15:34] And if you pay us now, it'll be a lot less money.
Joe Carrigan: [00:15:36] Yes.
Dave Bittner: [00:15:37] I tell that story because it kind of leads into this story, which is sort of an automated version of that.
Joe Carrigan: [00:15:44] Right. It's an automated version of - from La Liga.
Dave Bittner: [00:15:48] Yeah.
Joe Carrigan: [00:15:49] It's Spain's top professional soccer league.
Dave Bittner: [00:15:51] OK.
Joe Carrigan: [00:15:52] And they have now been slapped with a 250,000 euro fine for violating user privacy because they're using a feature, kind of like Shazam, that listens to music.
Dave Bittner: [00:16:02] Right.
Joe Carrigan: [00:16:02] And they're using it to identify pirated copies of their soccer games - so somebody who doesn't have the rights to play these games in a public place. La Liga is entitled to their royalties on these games.
Dave Bittner: [00:16:15] Right. So if I'm a bar...
Joe Carrigan: [00:16:17] Right.
Dave Bittner: [00:16:17] ...And I want to show this to my patrons...
Joe Carrigan: [00:16:20] Right.
Dave Bittner: [00:16:20] ...I have to pay for that.
Joe Carrigan: [00:16:21] You have to pay for it.
Dave Bittner: [00:16:21] Right. OK.
Joe Carrigan: [00:16:22] But what La Liga is doing here is they released their soccer app, and they put in the user's app the ability to listen to the audio in the room. And then, using the same kind of technology as Shazam, they check whether the sound fingerprint coming out of a TV matches the sound fingerprint from a game. They're also going to use GPS to see where the phone is and see if that location has a license to show that game. And they didn't let the users know that that's what they were doing - essentially operating as spies.
Dave Bittner: [00:16:53] Right, on their behalf.
Joe Carrigan: [00:16:55] For La Liga, on behalf of La Liga.
Dave Bittner: [00:16:57] (Laughter) Now, they claim that the fingerprinting technology that they're using only uses a little tiny bit of the audio information, and that it's impossible for them to record human voices or human conversations.
Joe Carrigan: [00:17:10] Yeah, they're probably not doing that. That's right.
Dave Bittner: [00:17:12] I find - I still find that hard to believe.
Joe Carrigan: [00:17:14] That is not the point (laughter).
Dave Bittner: [00:17:15] Yeah, it doesn't matter. It's...
Joe Carrigan: [00:17:18] Right.
Dave Bittner: [00:17:18] You know, it's like, I broke into your house, but all I did was rearrange the furniture.
Joe Carrigan: [00:17:21] Right, and cleaned up.
Dave Bittner: [00:17:22] You still broke into my house.
Joe Carrigan: [00:17:23] Exactly. That's still breaking and entering.
Dave Bittner: [00:17:25] (Laughter) Right, right. Now, I'm guessing that it was probably, as always, buried somewhere deep in the EULA that they had permission to do this, and they'll probably say that, when you initially fired up the app, you gave us access to the microphone.
Joe Carrigan: [00:17:37] Right.
Dave Bittner: [00:17:38] But I think it's...
Joe Carrigan: [00:17:39] And your GPS server.
Dave Bittner: [00:17:39] And your GPS.
Joe Carrigan: [00:17:40] Or GPS system, rather.
Dave Bittner: [00:17:41] So this is what GDPR was supposed to be for, right? I mean...
Joe Carrigan: [00:17:45] And this is a GDPR fine, I think.
Dave Bittner: [00:17:47] It is, absolutely.
Joe Carrigan: [00:17:48] Yeah.
Dave Bittner: [00:17:48] Yeah. So I say good for GDPR in this case.
Joe Carrigan: [00:17:52] Yeah, I would agree.
Dave Bittner: [00:17:53] Yeah.
Joe Carrigan: [00:17:54] This is a win for privacy.
Dave Bittner: [00:17:55] Yeah. The other sort of thing that troubles me about this is that this is going to be fuel to the fire that our phones are listening in on us.
Joe Carrigan: [00:18:04] Right, right. Yeah.
Dave Bittner: [00:18:05] Because we've made the point over and over again that, in general, they're not. But here's a case where...
Joe Carrigan: [00:18:10] They are.
Dave Bittner: [00:18:11] ...They are (laughter).
Joe Carrigan: [00:18:12] Right. They're actually listening in.
Dave Bittner: [00:18:13] Yeah. And that's bad.
Joe Carrigan: [00:18:16] I mean, that's what the capability of these things is. You know, they always have the capability to be listening to you.
Dave Bittner: [00:18:22] Right. And here's a case where somebody actually did it.
Joe Carrigan: [00:18:24] Right.
Dave Bittner: [00:18:25] All right. Well, it's troubling. Joe Carrigan, thanks for joining us.
Joe Carrigan: [00:18:28] My pleasure, Dave.
Dave Bittner: [00:18:34] And that's the CyberWire.
Dave Bittner: [00:18:35] Funding for this CyberWire podcast is made possible in part by ExtraHop, providing cyber analytics for the hybrid enterprise. Learn more about how ExtraHop Reveal(x) enables network threat detection and response at extrahop.com. Thanks to all of our sponsors for making the CyberWire possible, especially our supporting sponsor, ObserveIT, the leading insider threat management platform. Learn more at observeit.com.
Dave Bittner: [00:19:01] Don't forget to check out the "Grumpy Old Geeks" podcast, where I contribute to a regular segment called Security Ha. I join Jason and Brian on their show for a lively discussion of the latest security news every week. You can find "Grumpy Old Geeks" where all the fine podcasts are listed. And check out the "Recorded Future" podcast, which I also host. The subject there is threat intelligence, and every week we talk to interesting people about timely cybersecurity topics. That's at recordedfuture.com/podcast.
Dave Bittner: [00:19:30] The CyberWire podcast is proudly produced in Maryland out of the startup studios of DataTribe, where they're co-building the next generation of cybersecurity teams and technology. Our amazing CyberWire team is Stefan Vaziri, Tamika Smith, Kelsea Bond, Tim Nodar, Joe Carrigan, Nick Veliky, Bennett Moe, John Petrik, Jennifer Eiben, Peter Kilpe, and I'm Dave Bittner. Thanks for listening. We'll see you tomorrow.
Dave Bittner: [00:20:08] This CyberWire podcast has been updated since its original release. A factual error regarding the film "2001: A Space Odyssey" was corrected.