Capital One sustains a major data breach. Phishing in LinkedIn. VxWorks patches and mitigations. Brute-forcing NAS credentials. LAPD doxed?
Dave Bittner: [00:00:03] Capital One sustains a major data breach affecting 106 million customers, and a suspect is in custody thanks largely to her incautious online boasting. Iranian social engineers are phishing in LinkedIn, baiting the hook with a bogus job offer. Wind River fixes VxWorks bugs. Network Attached Storage is being brute-forced. And a hacker claims to have doxed members of the Los Angeles Police Department.
Dave Bittner: [00:00:34] Now I'd like to share some words about our sponsor, Akamai. You're familiar with cloud security, but what about security at the edge? With the world's only intelligent edge platform, Akamai stops attacks at the edge before they reach your apps, infrastructure and people. Their visibility into 178 billion attacks per day means that Akamai stays ahead of the latest threats, including responding to zero-day vulnerabilities. With 24/7, 365 security operations center support around the globe and over 300 security experts in-house, Akamai surrounds and protects your users wherever they are - at the core, in the cloud or at the edge. If you're going to Black Hat USA this year, visit Akamai at Booth 1522 to take part in their Crack the Code challenge. Akamai - intelligent security starts at the edge. Learn more at Akamai - that's akamai.com/security. And we thank Akamai for sponsoring our show.
Dave Bittner: [00:01:39] Funding for this CyberWire podcast is made possible in part by ExtraHop, providing cyber analytics for the hybrid enterprise. Learn more about how ExtraHop Reveal(x) enables network threat detection and response at extrahop.com.
Dave Bittner: [00:01:54] From the CyberWire studios at DataTribe, I'm Dave Bittner with your CyberWire summary for Tuesday, July 30, 2019. Data associated with about 106 million credit card users and applicants, mostly in the United States and Canada, were exposed in a breach said to have been committed by a Seattle-area woman, Paige A. Thompson. Capital One says that the compromised data include names, addresses, zip codes, postal codes, phone numbers, email addresses, dates of birth and self-reported income. Also exposed were customer status data, credit scores, credit limits, balances, payment history, contact information and fragments of transaction data from a total of 23 days during 2016, 2017 and 2018. A more limited set of U.S. Social Security numbers - about 140,000 - Canadian Social Insurance numbers - about a million - and linked bank account numbers of credit card customers - roughly 80,000 - were also taken.
Dave Bittner: [00:02:54] Ms. Thompson was arrested yesterday on a charge of computer fraud and abuse. She is alleged to have gained access to Capital One customer data between March 12 and July 17 of this year. Her point of entry is said to have been a misconfigured firewall, the Wall Street Journal reported. The Department of Justice says that Capital One was warned on July 17 by a GitHub user who had noticed that the bank's customer data had turned up on GitHub. Capital One had stored the data in AWS, and various reports have noted that Ms. Thompson is a former Amazon employee, last working there in 2016. But Amazon Web Services does not appear to have been implicated in the breach. This was quick work by law enforcement, the Washington Post notes. Federal investigators found their task simplified by Ms. Thompson's online boasting. If convicted, she faces up to five years' imprisonment and a $250,000 fine. In the press release disclosing the breach, Capital One summarized the financial costs it expects to incur. Quote, "we expect the incident to generate incremental costs of approximately $100 to $150 million in 2019. Expected costs are largely driven by customer notifications, credit monitoring, technology costs and legal support," end quote. Capital One shares dropped some 3% in after-hours trading upon the news.
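(Transcript note: the episode doesn't detail the mechanics beyond a misconfigured firewall in front of data stored in AWS. Purely as an illustrative sketch - assuming an S3-style object store, which the episode does not specify - a periodic audit like the following boto3 script can flag buckets whose public-access settings are looser than intended. Every name here is hypothetical, not Capital One's actual configuration.)

```python
# Illustrative sketch: flag any S3 bucket in the account that does not block
# all four public-access paths. Bucket names and account are hypothetical.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def audit_public_access():
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            cfg = s3.get_public_access_block(Bucket=name)[
                "PublicAccessBlockConfiguration"
            ]
            # cfg holds four booleans (BlockPublicAcls, IgnorePublicAcls,
            # BlockPublicPolicy, RestrictPublicBuckets); all should be True.
            if not all(cfg.values()):
                print(f"WARNING: {name} allows some public access: {cfg}")
        except ClientError as e:
            if e.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                print(f"WARNING: {name} has no public-access block configured")
            else:
                raise

if __name__ == "__main__":
    audit_public_access()
```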
Dave Bittner: [00:04:17] Iran's APT34 has been particularly busy on LinkedIn, which security firm KnowBe4 says has become a leading venue for social engineering attacks. FireEye researchers note that APT34 is particularly interested in the oil, gas, energy, utility and governmental sectors, and that they're posing as research staff at the University of Cambridge. The phishbait is a job offer. If you take the bait, you'll be asked to complete a form, which unfortunately will also open a backdoor in your system. The particular malware used in this attack is called ToneDeaf.
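(Transcript note: the episode describes a form that opens a backdoor; in campaigns like this one, such a form is typically a macro-bearing Office document. As an illustrative defensive sketch - not APT34's actual tooling - one cheap check is to flag inbound Office documents that contain VBA macros before anyone opens them, using the open-source oletools package; the file path below is hypothetical.)

```python
# Illustrative sketch: screen an inbound attachment for embedded VBA macros
# with oletools (pip install oletools) before a user is allowed to open it.
from oletools.olevba import VBA_Parser

def has_macros(path: str) -> bool:
    """Return True if the Office document at `path` contains VBA macros."""
    parser = VBA_Parser(path)
    try:
        return parser.detect_vba_macros()
    finally:
        parser.close()

if __name__ == "__main__":
    # Hypothetical file name, echoing the "job application form" lure.
    print(has_macros("incoming/job_application_form.xls"))
```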
Dave Bittner: [00:04:54] Wind River has addressed 11 zero-day flaws in its VxWorks product. VxWorks is used in over 2 billion industrial, medical and enterprise devices. Armis Labs, which discovered and disclosed the flaws to Wind River, calls VxWorks the most widely used operating system you may never have heard of. Six of the zero-days were critical remote code execution flaws, according to Armis Labs' report.
Dave Bittner: [00:05:20] When you are online going about your business, how do you know if the other individuals you're interacting with are actual flesh and blood humans or maybe bots? And if they are bots, is that necessarily a bad thing? The CyberWire's Tamika Smith looked into that question.
Tamika Smith: [00:05:37] This month, California became the first state to create a law that curtails the power that bots have. It essentially requires that they reveal that they're artificial in two instances: influencing a voter and selling a product. Here to talk more about this new law is Noam Cohen. He's the author of "The Know-It-Alls: The Rise of Silicon Valley as a Political Powerhouse and Social Wrecking Ball." Welcome to the program, Noam.
Noam Cohen: [00:06:03] Oh, thanks, Tamika. I'm glad to be here, yeah.
Tamika Smith: [00:06:05] You recently wrote an article for The New Yorker titled "Will California's New Bot Laws Strengthen Democracy?" I want to get straight into this and start with regulating bots. In your article, you talk about how it should be low-hanging fruit when it comes to improving how we use the internet. However, when the senator from California decided that he wanted to explore creating this law, he found out that it wasn't as easy as he thought it would be.
Noam Cohen: [00:06:33] Totally, yeah. And when I say low-hanging fruit, what I meant is that, like, you know, a bot isn't a person. So you can see that it's kind of complicated to say, hey, you're saying something that's hateful. You shouldn't be allowed to be on Twitter, right? Our president, you know, says things that are abusive, but he's still on Twitter. It's complicated. These are bigger issues. I have opinions on them. But I thought - and I think the senator - right? - Hertzberg thought that a bot would be pretty easy. We can all agree that if it's this computer that's pretending to be a person and is, like, being annoying or manipulative or harassing - thousands of these computer programs saying the same thing over and over again, lock her up or send them back - we could agree that that's, like, not good. Low-hanging fruit - it shouldn't be that complicated at all.
Noam Cohen: [00:07:18] So I think what he thought and what anyone might think - we're not even dealing with how to deal with the tough questions of people who are abusive and on this platform but just even machines that are - and then what I'm saying - what he discovered is that there is such an extreme kind of libertarian view in Silicon Valley that basically - they raised all these issues about bots that you wouldn't even think of, like, that bots actually are kind of like people speaking. You're like, really? Why? It seems like it's just the computer saying the same thing over and over again. But it's like, well, a person wrote it, and it's conveying ideas. Maybe it's an experiment. They had all these kind of theories that, like, maybe a bot is exploring the idea of what we think of bots. And so if it's identified as a bot, we won't have the ability to look at it and see - you know, see how it goes; that kind of thing. So...
Tamika Smith: [00:07:59] Some people would think that would be pretty extreme.
Noam Cohen: [00:08:01] Right, these kinds of interpretations - but they basically did force the state senator to really re-evaluate how to write the law. He backed off a lot. It didn't require the sites to block them themselves. That's what he really was hoping would happen - that they would block them themselves, that they would agree that we shouldn't have bots on our platform, and we're going to be responsible for getting them off the platform. But they argued and lobbied so effectively, it got taken out of the legislation.
Tamika Smith: [00:08:29] As I'm processing this, I'm thinking that when California state Senator Robert Hertzberg decided he wanted to embark on this journey, he tapped into something - tapped into something very huge. And that's probably why Silicon Valley responded the way that they did. The gentleman you mention in your article...
Noam Cohen: [00:08:47] Yeah.
Tamika Smith: [00:08:48] ...John Perry Barlow...
Noam Cohen: [00:08:50] Sure.
Tamika Smith: [00:08:50] ...The founder of the Electronic Frontier Foundation...
Noam Cohen: [00:08:53] Sure.
Tamika Smith: [00:08:53] He wrote something in his Declaration of the Independence of Cyberspace in 1996. In your article, it's quoted, "you have no moral right to rule us nor do you possess any methods of enforcement we have true reason to fear." What do you think he's saying there? What is he tapping into?
Noam Cohen: [00:09:12] So - and this is, like, you know - this has been something I've thought a lot about. And it's kind of like - maybe the original of the internet. It's this fiction that - and for even calling it - right? For the longest time when I was a reporter at the Times, I would write Electronic Freedom Foundation. I was like - and, you know, it's actually called the Electronic Frontier Foundation. I think I even made an error in the paper with that because it's so weird. Like, what is it? A frontier - what do you mean? They're about electronic freedom, right? But they kind of see the frontier as, like, the West - the Wild West. What that quote's saying, like - and your laws don't apply to us. I think in that case, he's kind of saying, we're so much smarter than you. We're hackers. You can't even - you couldn't even discipline us if you wanted to. You couldn't even tell us to shut up because you don't even know how to, like, stop us from communicating via the internet because we're brilliant computer scientists. And you're a bunch of idiot bureaucrats.
Noam Cohen: [00:09:57] I think that view is - you see it to this day - making fun of public workers and of, like, representatives and how they don't know anything about tech, and that arrogance. So I think you had that aspect that we're in our own world, and you can't reach us. But I think the thing I've always really felt has gotten it wrong is that - I actually wrote this in a book review of a book called "The Players Ball" - the Wild West idea. The West was conquered by killing the natives - right? - you know, killing Native Americans. And, like, you know, obviously, slavery built the country - all these original sins of our country. And it's like this myth that it's just the Wild West, and there's no history, no rules - so, you know, not recognizing that when the internet isn't fair, it's going to play out that women and minorities are not going to be able to speak as much. So they can think it's like, hey, it's just freedom, but actually, that freedom really is allowing the majority to oppress the minority.
Tamika Smith: [00:10:48] So what do you think is the step forward? Do you think California - as your article poses, do you think the rest of the nation - it's going to look at California's new law and say, OK, we can start here?
Noam Cohen: [00:10:59] I really hope so. I do think they can see - I think it's a good basic rule to say that, like, bots should identify themselves. You shouldn't trick people into thinking that they're people. When you're talking to a person, you're actually talking to a bot that's actually trying to maybe demoralize you - right? That's also what bots can try to do. They can try to make you feel bad by saying negative things or to trick you or to give you false information. So I do think it's like - it is starting the conversation. And I think that's really important. I do think that probably the way it's going to have to play out is that these big companies have to be broken up because I think they are just too powerful and too - like the article said, they're going to be too resistant to change. And I don't think it's a good system. I think we need more kind of democracy among the companies almost and among the platforms. So I hope that's the way it'll go, but I think it's a really good first step to ask hard questions about how the internet runs.
Tamika Smith: [00:11:49] Thank you so much for joining the conversation.
Noam Cohen: [00:11:50] Thanks very much. I hope it was helpful - great.
Tamika Smith: [00:11:52] Noam Cohen - he's the author of "The Know-It-Alls: The Rise of Silicon Valley as a Political Powerhouse and Social Wrecking Ball." He writes for The New Yorker and has a regular column with Wired magazine.
Dave Bittner: [00:12:04] That is our own Tamika Smith reporting.
Dave Bittner: [00:12:08] Synology has warned its users to protect themselves against a ransomware campaign that's hitting its Network Attached Storage product. The attackers are brute-forcing admin credentials in a coordinated series of dictionary attacks. While Synology has been out front with its warning, Naked Security reports that they're not the only NAS vendor whose products are affected. Indeed, the attack does not exploit any specific flaw in any NAS system. It's after credentials, and that's not a systemic problem.
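(Transcript note: because the campaign guesses credentials rather than exploiting a bug, the standard countermeasures are strong, unique admin passwords plus throttling or lockout after repeated failures. Below is a minimal sketch of the throttling idea; all names are hypothetical, and no vendor's actual firmware logic is implied.)

```python
# Minimal sketch of login throttling against dictionary attacks: after a few
# failed attempts from one source, require an exponentially growing cool-down
# before further tries. Real NAS firmware ships its own versions of this
# (auto-block lists, fail2ban-style rules, etc.).
import time
from collections import defaultdict

MAX_FREE_FAILURES = 3    # failures tolerated before throttling kicks in
BASE_DELAY_SECONDS = 30  # first cool-down; doubles with each further failure

_failures = defaultdict(lambda: {"count": 0, "blocked_until": 0.0})

def attempt_login(source_ip: str, password_ok: bool) -> bool:
    state = _failures[source_ip]
    now = time.monotonic()
    if now < state["blocked_until"]:
        return False  # still cooling down: reject without checking anything
    if password_ok:
        _failures.pop(source_ip, None)  # success resets the failure counter
        return True
    state["count"] += 1
    excess = state["count"] - MAX_FREE_FAILURES
    if excess > 0:
        # 4th failure waits 30s, 5th waits 60s, 6th waits 120s, and so on.
        state["blocked_until"] = now + BASE_DELAY_SECONDS * 2 ** (excess - 1)
    return False
```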
Dave Bittner: [00:12:41] And finally, a self-proclaimed hacker has told the Los Angeles Police Department he's got data on some 2,500 police officers and about 17,000 recruits, according to Information Security Magazine. NBC4 Los Angeles says the police union is very unhappy. The incident remains under investigation.
Dave Bittner: [00:13:05] And now a message from our sponsor ObserveIT.
Unidentified Person: [00:13:10] Great party, huh?
Dave Bittner: [00:13:12] Yeah. Yeah, great party. Could you excuse me for just a moment? Hey, you. What are you doing? What? Oh, no - looks like another insider got into our systems when we weren't looking. I am going to be in so much trouble with the boss.
Unidentified Person: [00:13:31] Did someone say trouble? I bet I can help.
Dave Bittner: [00:13:34] Who are you?
Unidentified Person: [00:13:35] To catch insider threats, you need complete visibility into risky user activity. Here; I'll show you how ObserveIT works.
Dave Bittner: [00:13:42] Wow. Now I can see what happened before, during and after the incident, and I'll be able to investigate in minutes. It used to take me days to do this.
Unidentified Person: [00:13:52] Exactly. Now, if you'll excuse me, I think there's a cocktail over there with my name on it.
Dave Bittner: [00:13:58] But wait. What's your name? Oh, well. Thanks, ObserveIT, and whoever she is.
Dave Bittner: [00:14:05] ObserveIT enables security teams to detect risky user activity, investigate incidents in minutes and effectively respond. Get your free trial at observeit.com/cyberwire.
Dave Bittner: [00:14:25] And I'm pleased to be joined once again by Ben Yelin. He's a senior law and policy analyst at the University of Maryland's Center for Health and Homeland Security. Ben, it's always great to have you back. We had a story come by from The Atlantic. It's titled "Mass Surveillance Is Coming to a City Near You." This actually sort of revisits something you and I have talked about in the past. Bring us up to date here.
Ben Yelin: [00:14:47] Yeah, so this is about aerial surveillance. The technology referenced in this article was actually used here in Baltimore City in 2016 as part of a pilot program: place a plane in the sky over the city and get real-time access to people's movements - aerial surveillance that you can zoom in to the level of a city block, an individual house, a street, et cetera. This is something that, obviously, presents major civil liberties concerns. There's this legal concept called the plain view doctrine, where if you exhibit some sort of behavior in plain view that violates the law, you've lost your expectation of privacy. And the government can arrest you based on what they've seen in plain view. There was a case where a government surveillance plane caught somebody growing marijuana in his own backyard, and the courts held that plain view applies as long as the technology being used by law enforcement is something that's relatively widely available.
Ben Yelin: [00:15:46] That's where, I think, the legal question will be really crystallized in this case. Because this technology is so new, people who are walking around Baltimore City or any other city where this technology has been deployed aren't necessarily going to be aware that this type of technology exists. And thus, they won't be able to comport their own behavior to the fact that there's constant, persistent surveillance tracking their every movement. It requires almost no resources for law enforcement to use - you know, to press the rewind button on hours and hours of aerial surveillance footage as opposed to what they used to do, which is send a cop outside someone's house and actually follow the guy and see if he's committed any crimes. Not to mention it just feels weird and uncomfortable for people to know that they're being tracked in real time by an airplane 24 hours a day, with that information stored and searchable by law enforcement. I think that's just a very uncomfortable realization that is going to start settling in for people.
Dave Bittner: [00:16:53] I can't help wondering. I mean, what about if we put this behind the requirement for a warrant? In other words, go ahead and gather all this stuff up. But if a police department wants to go in and look at someone, they've got to convince a judge first.
Ben Yelin: [00:17:08] I mean, I think that would be the best way to ensure the legality of something like this, because then you wouldn't run into the question of whether this actually falls under the plain view doctrine. The problem is you may not be able to establish probable cause for a warrant unless you had access to some of this aerial surveillance. So let's say you had an inkling, but something below probable cause, that somebody committed a robbery. You may need to actually get access to this surveillance to know whether that person left their house that day, went to the store that was robbed, et cetera, et cetera. And what law enforcement is going to say is, we're trying to conduct an investigation. We don't have enough information to obtain a warrant. We would like access to the surveillance to see if we can connect this person with a crime.
Dave Bittner: [00:17:54] Yeah.
Ben Yelin: [00:17:55] And, you know, I can see why that potentially could be compelling to jurisdictions like Baltimore City that have major violent and nonviolent crime problems.
Dave Bittner: [00:18:05] Yeah, I can really see the appeal to law enforcement, obviously, because let's say you had some sort of robbery at a store or something. The ability to go to the time of that robbery and then, basically, run everything in reverse and track back every vehicle that came to that place back to wherever they started from - well, boy that's a powerful law enforcement tool.
Ben Yelin: [00:18:26] It is. I mean, you know, you just take normal blue-light surveillance cameras and multiply them by the entire city, so it's an extremely effective law enforcement tool. But, yeah, I mean, we've talked about so many of the potentials for abuse. One thing this article talks about: you'd think that because this database of information being collected through aerial surveillance is so vast, no one's going to search you as an individual - they're just not going to have the time and resources to do it. But through machine learning and artificial intelligence, it's possible that the system can start to understand various patterns; you know, where gang activity is located, where certain people are hanging out at certain times. And that information, you know, is much easier to deduce without conducting individual searches of the whereabouts of hundreds of thousands of people. And it could have the same effects on personal privacy.
Dave Bittner: [00:19:25] All right. Well, Ben Yelin, thanks for joining us.
Ben Yelin: [00:19:27] Thank you.
Dave Bittner: [00:19:32] And that's the CyberWire.
Dave Bittner: [00:19:34] Thanks to all of our sponsors for making the CyberWire possible, especially our supporting sponsor ObserveIT, the leading insider threat management platform. Learn more at observeit.com.
Dave Bittner: [00:19:45] The CyberWire podcast is proudly produced in Maryland out of the startup studios of DataTribe, where they're co-building the next generation of cybersecurity teams and technology. Our amazing CyberWire team is Stefan Vaziri, Tamika Smith, Kelsea Bond, Tim Nodar, Joe Carrigan, Carole Theriault, Nick Veliky, Bennett Moe, Chris Russell, John Petrik, Jennifer Eiben, Peter Kilpe. And I'm Dave Bittner. Thanks for listening. We'll see you tomorrow.