Dave Bittner: [00:00:03] Investigation into Argentina's power failure continues, with preliminary indications suggesting operational and design errors were responsible for the outage. Russia reacts to reports that the U.S. staged malware in its power grid. Iran says it stopped U.S. cyber-espionage. ISIS worries about its vulnerability to BlueKeep, and a breach at EatStreet illustrates some of the features of third-party risk.
Dave Bittner: [00:00:34] And now a word from our sponsor ExtraHop, the enterprise cyber analytics company delivering security from the inside out. Prevention-based tools leave you blind to any threats inside your network. By adding behavioral-based network traffic analysis to your SOC, you can find and stop attackers before they make their move. ExtraHop illuminates the dark space with complete visibility at enterprise scale, detects threats up to 95% faster with machine learning and guided investigations that help Tier 1 analysts perform like seasoned threat hunters. Visit extrahop.com/cyber to learn why the SANS Institute calls ExtraHop fast and amazingly thorough, a product with which many SOC teams could hit the ground running. That's extrahop.com/cyber. And we thank ExtraHop for sponsoring our show.
Dave Bittner: [00:01:31] From the CyberWire studios at DataTribe, I'm Dave Bittner with your CyberWire summary for Tuesday, June 18, 2019. Investigation into the South American grid failure centered on Argentina proceeds but remains in its early stages, and no cause has been officially identified, according to AFP and other sources. The blackout is thought to have cascaded from a local failure, with operational and design errors suspected. Officials in Argentina say, according to the AP, that while a cyberattack is a possibility, it seems unlikely.
Dave Bittner: [00:02:08] Reports of U.S.-staged malware in Russia's power grid, presumably held there for retaliation against future Russian cyberattacks on U.S. targets, stand about where they did yesterday. The reports are unconfirmed publicly and at least partially denied by the U.S. TASS is authorized to state that Russia regards cyberwar with the U.S. as a hypothetical possibility, and that, while it's accustomed to U.S. misbehavior in cyberspace and elsewhere, Russia is quite capable of protecting its grid, thank you very much.
Dave Bittner: [00:02:40] Lawfare has a useful account of how the laws of armed conflict might apply to what would appear to be a long-running, low-level conflict in cyberspace that many think has the potential to produce kinetic effects. The piece argues that there is at least a plausible case to be made that U.S. staging of malware in the Russian grid represents a, quote, "countermeasure responding proportionally to Russia's activities in U.S. energy systems," end quote. That there have been such Russian activities for some time seems probable to say the least. Last week's warnings about the appearance of Xenotime reconnaissance in U.S. utilities are the most recent reports of such cyber incursions.
Dave Bittner: [00:03:21] It's worth noting that few, if any, are saying that the U.S. has actually induced blackouts in Russia. Johns Hopkins University's Thomas Rid, a scholar whose interests lie in cyber conflict, observed on Twitter that telling someone you've put malware in their systems blows the capability; that is, it alerts the opposition and helps them find and fix what you've done. He offers this as general grounds for skepticism about the story, and as far as that goes, he's surely correct. It would be wise to await more information. On the other hand, if the aim is deterrence, then you naturally want your opposition to know. They're not deterred by what you might do unless they're aware of what that might be.
Dave Bittner: [00:04:06] There's that saying, when it comes to breaches, it's not a matter of if, it's a matter of when. But it's also a matter of how long - the amount of time an adversary stays in your system, also referred to as dwell time. Jack Danahy is senior vice president of security at Alert Logic.
Jack Danahy: [00:04:23] It would be great if dwell time weren't as important. People could simply feel perfectly protected all the time. But in reality, we know that a dedicated attacker will usually find a way in. And so therefore, dwell time is a really important measure of how quickly that intruder will be found and caught and stopped. So over the past decade or so, we've actually seen dwell time improving. There were points in our history where dwell time was measured in years. But what we find now is that dwell time has been reduced, but it's been reduced to months. And the unfortunate part of that is good attacks, successful attacks, are successful in compromising systems in seconds or in minutes, and they're exfiltrating data just as soon as that happens. And so therefore, the fact that it will take weeks or months to actually discover that that's ongoing and to find a way to contain it makes it a real problem, right? Detection continues to be too slow in comparison with the speed with which the damage is happening.
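The dwell-time metric Danahy describes is just the gap between initial compromise and detection. A minimal sketch of the calculation (the dates and function name here are illustrative, not from any particular tool):

```python
from datetime import datetime

def dwell_time_days(first_compromise: str, detection: str) -> int:
    """Dwell time: whole days between initial compromise and detection."""
    fmt = "%Y-%m-%d"
    delta = datetime.strptime(detection, fmt) - datetime.strptime(first_compromise, fmt)
    return delta.days

# A breach that began May 3 and was detected May 17 - the EatStreet
# timeline mentioned later in this episode - gives a 14-day dwell time.
print(dwell_time_days("2019-05-03", "2019-05-17"))  # → 14
```

The same arithmetic is why Danahy's point stings: when exfiltration starts within minutes of compromise, a dwell time measured in weeks or months means the damage is long done before detection.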
Dave Bittner: [00:05:25] Can you help me understand the different reasons for a long dwell time? Is it a tactical sort of thing? I can imagine there are some cases where someone would want to get in and out as quickly as possible. But I suppose there are other times when they want to stay in that system.
Jack Danahy: [00:05:39] You know, a lot of these compromises can have multiple purposes. Over the last few years, we saw a real rise in what I think of - like, smash-and-grab kinds of attacks, like ransomware. The evidence of the attack is the benefit of the attack, that they attack and then they want to tell the victim. I've broken into your system. Give me some money, or I'm not going to give you your data back. And so in that case, dwell time was very, very short. But if you think about a more strategic attack, where they're trying to exfiltrate data, whether it's credentials or financial information or trade secrets, the best way for the attacker to do that is to remain on that system for a long period of time, to take out as much data as they can and not make themselves so instantly discoverable. And for some of the other monetization strategies, things like cryptojacking, they also want to hang around for a long time because those miners are continuously using those system resources to generate cryptocurrency. They also don't want to be, you know, detected very, very quickly.
Jack Danahy: [00:06:35] And so what we see happening - and you see some of this in the reporting that came out from various analysts - you actually see that the first forms of attack are about getting on the system, but then the next things that these organizations are subject to are actually persistence strategies. How do they create backdoors? And how do they stay in those systems?
Dave Bittner: [00:06:55] Why the plateau? Why have we not continued to get better with this?
Jack Danahy: [00:06:59] I think it's a combination of things. The threat surface itself, meaning the way in which organizations are expanding their use of technology and their use of platforms, has caused it to be a really dynamic environment. And maintaining visibility across it can be hard. A second piece is that a lot of the more virulent attacks are now applying themselves almost as commodities across organizations of all sizes. And so we're seeing a lot more attacks against the small to medium-sized enterprises, who may not have the capabilities and the resources to be watching closely. So that amount of threat surface that has to be covered, which changes dynamically, combined with the style of organizations that are being attacked - it creates a natural opportunity for the criminals to get on and stay on.
Dave Bittner: [00:07:42] What are your recommendations? How can folks get a better handle on this?
Jack Danahy: [00:07:46] Well, I think, number one, opportunities like this - where people can learn that dwell time is a considerable problem, and that they have to be watching all the time, across their entire systems, to make sure these things aren't happening to them - are a key piece of awareness for people. Number two is understanding that the attacks themselves are changing, right? We've seen hundreds and hundreds of new types of attacks come in. We've got dozens of threat researchers who are out there gathering intelligence about what's changing in the attack profile. So you have to be vigilant across all your platforms, but you also have to make sure that you're looking for all the things that may matter. And then when those things are happening, you also have to have the capability to recognize them and respond to them - right? - because the ultimate benefit of shortening dwell time is being able to stop the attack and get those folks out of the system before more damage can happen.
Dave Bittner: [00:08:41] That's Jack Danahy from Alert Logic.
Dave Bittner: [00:08:47] Iranian official media, without providing much detail, says that Tehran has detected and thwarted a U.S. cyberespionage campaign, which they attribute to the CIA.
Dave Bittner: [00:08:58] ISIS, from its diaspora in cyberspace, is said to be expressing an interest in protecting its adherents from BlueKeep exploits. Homeland Security Today says the Electronic Horizon Foundation, an ISIS helpdesk, is warning about the risk of BlueKeep-based attacks. It's noteworthy that ISIS is concerned about its own exposure to BlueKeep. And it's not just ISIS. TechCrunch reports that the U.S. Department of Homeland Security has developed a remote code execution proof-of-concept exploiting the bug. DHS's Cybersecurity and Infrastructure Security Agency, CISA, says that it successfully executed remote code on a Windows 2000 machine. Microsoft, of course, stopped supporting Windows 2000 back in 2010, so Redmond's BlueKeep fixes don't apply there. We hope Windows 2000 is far, far in your rearview mirror. But in case it's not - if you can, upgrade.
Dave Bittner: [00:09:55] EatStreet, an online food ordering service, has disclosed that it sustained a data breach. Unauthorized parties were in EatStreet's systems from May 3 until May 17, at which point they were detected and ejected. Customers who purchased food through EatStreet's website or app, which is available on Google Play, might have lost data that includes names, credit card numbers, expiration dates, card verification codes, billing addresses, email addresses and phone numbers. Also exposed were data EatStreet had on its partners, including participating restaurants and the delivery services that actually brought the food to the customers. EatStreet says it's notified credit card companies to be on the lookout for attempted fraud.
Dave Bittner: [00:10:39] ZDNet has been contacted by the person or persons who claim responsibility, and it's a familiar name - Gnosticplayers. ZDNet says over the past few months, this hacker has stolen and put up for sale 1.071 billion user credentials from 45 companies. In the EatStreet case, he claims to have taken 6 million user records. Whether that's 6 million individuals' records or whether Gnosticplayers is counting each data element as a record is unclear.
Dave Bittner: [00:11:08] We heard from security firm Panorays' CEO Matan Or-El, who sees this as another instance demonstrating the ways in which an organization's security extends to its supply chain and into regions that are not really under its direct control. To form a business relationship, Matan Or-El suggests, is inevitably to assume risk. The lesson Panorays draws is that companies need to vet prospective partners from the point of view of security, taking into account their postures, practices and procedures, and working with the partners to close security gaps before they're onboarded. And even when the partnership is concluded, some form of continuous monitoring is in order, since security is an ongoing process. We're accustomed to hearing about, and maybe even thinking about, third-party risk. Panorays - and they're not alone here - makes a good point: an organization's supply chain risk runs beyond third parties.
Dave Bittner: [00:12:01] Panorays talks about fourth-party risk, and that, indeed, seems depressingly plausible. They sensibly stop there, but why not fifth or sixth or even greater levels of risk? At some point, one would have to stop. If anyone has a persuasive, reasoned account of where, if anywhere, an organization could draw a line in its due diligence, we'd be interested to hear about it - seriously. It would be a shame if we wound up in the position of the philosopher William James, who, in conversation with a society lady given to esoteric speculation, heard from her that the world rested on the back of an elephant, which, in turn, stood upon the back of a turtle. To James' question - and on what, Madam, does the turtle stand? - she replied firmly, it's no good, Mr. James. It's turtles all the way down.
[00:12:45] (SOUNDBITE OF FILM, "2001: A SPACE ODYSSEY")
Douglas Rain: [00:12:47] (As HAL 9000) Just what do you think you're doing, Dave?
Dave Bittner: [00:12:50] And finally, yesterday's Daily Podcast erroneously said in an aside that in Stanley Kubrick's film "2001," HAL 9000 killed Dave Bowman. In fact, the computer killed - at least - Frank Poole, V.F. Kaminsky and J.R. Kimball. The latter two were in suspended animation. HAL Niner Triple Zero, a native of Urbana, Ill., we understand, only tried to kill Dave Bowman; astronaut Bowman was, in fact, the sole survivor. The CyberWire regrets the error.
Dave Bittner: [00:13:27] And now a word from our sponsor ObserveIT. According to Cisco, over the course of 1 1/2 months, the typical suspicious insider can download 5,200 documents. Unfortunately, many ad hoc insider threat investigations can drag on for weeks or even months, since it's tough to know exactly who did what, when and why. Security analysts have to wade through a sea of event logs, many of which are completely irrelevant, to eventually discover the root cause of an incident. What if we told you that there's a way to investigate insider threat incidents faster? With ObserveIT's dedicated insider threat management platform, security teams can quickly gain context into both the user and data activity behind an alert. Detailed user activity timelines and easily searchable metadata help you know the whole story on insider threats. Visit observeit.com/cyberwire to try out ObserveIT's sandbox environment for yourself - no downloads or configuration required. That's observeit.com/cyberwire. And we thank ObserveIT for sponsoring our show.
Dave Bittner: [00:14:43] And joining me once again is Ben Yelin. He's a senior law and policy analyst at the University of Maryland Center for Health and Homeland Security. Ben, it's always great to have you back. We had an article come by - this is actually from Car and Driver - and it is "License Plate Readers are Dealt a Blow in Virginia, but Privacy is Still a Rare Commodity Nationwide." You and I have talked about these license plate readers before. What's the latest here?
Ben Yelin: [00:15:05] As you know, license plate readers are able to take real-time photographs of people's license plates and put them in a giant database, and that information can be used to collect all sorts of identifying information on individuals - where they are at certain moments. You know, I watch a lot of "Law and Order," and they're always using license plate readers to see, you know, who's driven into New Jersey - where can we chase the suspect? So they're very prominent in their usage. What this article lets us know is that in one county in Virginia - Fairfax County, which happens to be, I believe, the most populous county in Virginia - a court ruled against the use of license plate readers absent some sort of specific, articulable reason for that information to be collected.
Ben Yelin: [00:15:50] Normally - and this is how it works in probably 99% of counties and states across the country - there are virtually no legal limits on collection by automatic license plate readers. This is largely due to a Supreme Court doctrine called the plain view doctrine. You can't really have any expectation of privacy in anything that you put into plain view that any law enforcement officer could spot on a routine, you know, patrol on the street. Perhaps that concept is a little bit outdated. We're talking not about one single law enforcement officer using his Polaroid to capture a license plate, but about a systematic, automated effort to collect every single license plate that passes through a particular area. This type of license plate reader can hold a lot of information, and it's completely suspicionless. Law enforcement, for the most part, does not need to have a reason to collect this information.
Ben Yelin: [00:16:46] But led by the ACLU, drivers in the county of Fairfax, Va., sued and got an injunction against the police department in Fairfax, saying that in this particular county, the government actually has to have a reason to collect somebody's license plate. This isn't that strict of a standard. It's not saying that you have to have probable cause that somebody has committed a crime. Law enforcement just has to come up with some justification for why this private information is being collected.
Dave Bittner: [00:17:18] Yeah, one of the things that fascinates me with this topic that I wonder about is the difference between the collection of the information and the analysis of the information. In other words, I can imagine a scenario where these cameras are out vacuuming up all the information, but law enforcement isn't allowed to look into that bucket of information without a warrant. So the information's there, and it's available. But I have to convince a judge that what I'm looking for is legitimate and that the scope of what I'm looking for makes sense in terms of being narrow enough.
Ben Yelin: [00:17:49] This is such an interesting question. We see it a lot in all types of Fourth Amendment cases. If you're collecting a haystack of records, then do you really have a privacy interest in a simple needle of that haystack? And what other courts and analysts have said is, you know, think about doing a control-F search in a 100-page Word document. In order to see if the word that you've identified is contained in that document, you necessarily have to search through every single word. If you have a database of license plates that have been subject to these automatic readers, they are necessarily all going to be scanned when you're doing a search for an individual license plate. Now, whether that's problematic from a civil liberties perspective is going to depend on individual tastes, but I think, certainly, courts have acknowledged that a person potentially could have a privacy interest simply in the collection of that information, even if it specifically has not been analyzed.
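Yelin's control-F analogy can be made concrete: finding one plate in an unindexed collection of reader records means every record gets examined, whether or not it matches. A minimal sketch (the record fields and plate values are illustrative, not from any real system):

```python
# Illustrative only: looking up one plate in a flat list of reader records.
records = [
    {"plate": "ABC1234", "seen_at": "2019-06-18T08:02"},
    {"plate": "XYZ9876", "seen_at": "2019-06-18T08:05"},
    {"plate": "ABC1234", "seen_at": "2019-06-18T09:41"},
]

def lookup(plate, records):
    """Return matching records, plus a count of how many were examined."""
    scanned = 0
    hits = []
    for rec in records:            # every record is touched...
        scanned += 1
        if rec["plate"] == plate:  # ...to find the ones of interest
            hits.append(rec)
    return hits, scanned

hits, scanned = lookup("ABC1234", records)
print(len(hits), scanned)  # 2 hits, but all 3 records were scanned
```

That gap between hits and scanned records is the civil liberties point: even a targeted query necessarily passes over everyone else's whereabouts held in the database.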
Dave Bittner: [00:18:49] Well, this is one that continues to evolve. And we'll keep an eye on it.
Ben Yelin: [00:18:53] Absolutely.
Dave Bittner: [00:18:53] Ben Yelin, thanks for joining us.
Dave Bittner: [00:19:00] And that's the CyberWire.
Dave Bittner: [00:19:01] Funding for this CyberWire podcast is made possible in part by ExtraHop, providing cyber analytics for the hybrid enterprise. Learn more about how ExtraHop Reveal(x) enables network threat detection and response at extrahop.com. Thanks to all of our sponsors for making the CyberWire possible, especially our supporting sponsor ObserveIT, the leading insider threat management platform. Learn more at observeit.com.
Dave Bittner: [00:19:27] The CyberWire podcast is proudly produced in Maryland out of the startup studios of DataTribe, where they're co-building the next generation of cybersecurity teams and technology. Our amazing CyberWire team is Stefan Vaziri, Tamika Smith, Kelsea Bond, Tim Nodar, Joe Carrigan, Nick Veliky, Bennett Moe, John Petrik, Jennifer Eiben, Peter Kilpe. And I'm Dave Bittner. Thanks for listening. We'll see you tomorrow.