The CyberWire Daily Podcast 2.8.19
Ep 777 | 2.8.19

Australia’s Federal Parliament has a cyber incident. DHS warns of third-party spying. Legit privacy app tampered with. Credit union phishing. Bezos vs. Pecker. FaceTime bounty. Seal scat.

Transcript

Dave Bittner: [00:00:04] Australia investigates an attempted hack of its Federal Parliament. The U.S. Department of Homeland Security warns that spies are working through third parties to get to their targets. Spyware is bundled in a legitimate privacy app. Credit unions get spear phished. Mr. Bezos says no thanks, Mr. Pecker. Sandi Roddy is chief scientist for cyberwarfare operations at Johns Hopkins University Applied Physics Lab. She joins us to talk key management. Apple will pay a FaceTime bug bounty. Microsoft says don't use IE as a browser, and what they found in that seal scat.

Dave Bittner: [00:00:44] Now a moment to tell you about our sponsor, ObserveIT. The greatest threat to businesses today isn't the outsider trying to get in. It's the people you trust, the ones who already have the keys - your employees, contractors and privileged users. In fact, a whopping 60 percent of online attacks today are carried out by insiders. Can you afford to ignore this real and growing threat? With ObserveIT, you don't have to. See, most security tools only analyze computer, network or system data. But to stop insider threats, you need to see what users are doing before an incident occurs. ObserveIT combats insider threats by enabling your security team to detect risky activity, investigate in minutes, effectively respond and stop data loss. Want to see it in action for yourself? Try ObserveIT for free. No installation required. Go to observeit.com/cyberwire. That's observeit.com/cyberwire, and we thank ObserveIT for sponsoring our show.

Dave Bittner: [00:01:47] From the CyberWire studios at DataTribe, I'm Dave Bittner with your CyberWire summary for Friday, February 8, 2019. The Australian Federal Parliament was subjected to a cyberattack that seems to have been largely unsuccessful. It's thought to be a foreign operation, but there's no evidence it was directed at influencing upcoming elections. The Australian Broadcasting Corporation says the Australian Signals Directorate is investigating. The inquiry is in its early stages, and no attribution is expected in the near term. A number of observers, however, are speculating that the incident was a Chinese operation. China's intelligence services have targeted the Federal Parliament before.

Dave Bittner: [00:02:29] The U.S. Department of Homeland Security has added its voice to a report on Chinese cyberespionage by Recorded Future and Rapid7 from earlier this week. DHS warns that there's a trend of APT10 and other state-directed threat actors approaching their targets through third parties.

Dave Bittner: [00:02:48] Security firm BitDefender warns that Triout spyware has been bundled with altered copies of the legitimate Android privacy app Psiphon. The company's researchers first observed and sounded an alert about Triout last August. In that round of infection, the spyware was bundled with an adult content app. This time, the packaging is much more innocent in appearance. Once installed on an Android device, Triout records calls, logs incoming texts, records videos, takes pictures and collects GPS coordinates. And of course it reports back to whoever's running it, currently via a server located in France.

Dave Bittner: [00:03:27] BitDefender thinks the combination of high capability and low infection rate suggests that the spyware's masters are using it against carefully selected targets. The clean version of Psiphon is the one sold through Google Play. As usual, it's better to stick to large, official, well-known app stores. They're imperfect, of course, and everything's imperfect. But they're far better than buying from some opportunistic market. And of course to install a pirated version of anything is just asking for trouble in more ways than one can easily count.

Dave Bittner: [00:04:00] Krebs on Security reports that there's been a recent phishing campaign targeting officers at credit unions who are responsible for anti-money laundering measures. The emails told the credit unions that the National Credit Union Administration, the NCUA, had noticed transactions that looked like money laundering, and encouraged the recipients to open an attached PDF for more details.

Dave Bittner: [00:04:22] The PDF, of course, carried the malicious payload. The text of the email was fortunately marred by the uncertain command of English usage that so often betrays phishing attempts for what they are. And it's not clear that any of the recipients, whom one would expect to be a wary bunch, actually opened the attachment. But the credit unions have a queasy feeling that someone somewhere might have.

Dave Bittner: [00:04:45] One of the credit unions - all of them are speaking to Krebs on Security on background, not for attribution - says that its IT staff traced one of the emails back to a Ukrainian source. So the campaign may be the work of an Eastern European criminal gang. The specificity of the phishing is interesting. It was first observed on January 30 when National Credit Union Administration anti-money laundering points of contact at various individual credit unions received emails that purported to be from the NCUA.

Dave Bittner: [00:05:15] The persons being spear phished were the Bank Secrecy Act officers the Patriot Act requires credit unions to carry. NCUA is the independent federal agency responsible for insuring deposits at credit unions. The phishing campaign has been sufficiently well-informed to lead credit unions to suspect that the attackers have somehow obtained nonpublic information from the NCUA. NCUA is not really talking about the incident, but the Treasury Department has said it's aware of the attempts and has asked that all credit unions disregard emails of this kind.

Dave Bittner: [00:05:50] The duty of care campaign in the U.K. has apparently persuaded Instagram, which has announced that it will take down content on its service that shows or advocates self-harm. The policy change was prompted by the very sad case of a young teenage girl who took her own life. Her family fairly convincingly blames content on Instagram for prompting her to commit suicide.

Dave Bittner: [00:06:15] Amazon founder and Washington Post owner Jeff Bezos says in a blog post on Medium that AMI, the National Enquirer's corporate parent, is trying to blackmail him into calling the Post off stories AMI would prefer it didn't run, mostly pertaining to either Saudi Arabia or to the current U.S. administration.

Dave Bittner: [00:06:35] AMI seems to have told Mr. Bezos they have, and will publish, intimate selfies. He's responded by preemptively telling everyone what's in those selfies, and he's declined the offer to keep things quiet in exchange for certain considerations - "No Thank You, Mr. Pecker," as his post is titled, effectively telling AMI to publish and be damned. And he asks rhetorically, if in my position I can't stand up to this kind of extortion, how many people can?

Dave Bittner: [00:07:02] Mr. Pecker is David Pecker, head of AMI. How the Enquirer got the below-the-belt selfies is unclear, TechCrunch says. And it also notes that the Enquirer is an old hand at getting embarrassing pictures. AMI, according to the Independent, the Washington Post and other sources, is conducting its own internal investigation to see if it might've done something wrong in the way it got a hold of the pictures, which it doesn't think it did, but which it says it's going to get to the bottom of.

Dave Bittner: [00:07:33] Good news for the teenager who found and reported the privacy bug in FaceTime - with a lot of persistent help from his mom, Apple will pay him a bug bounty.

Dave Bittner: [00:07:44] Maybe you thought Internet Explorer was a browser. We sure tended to think of IE that way. But think again. Microsoft says it's a compatibility solution that should be used selectively and not as your primary browser. As Redmond puts it, quote, "We're not supporting new web standards for it. And while many sites work fine, developers by and large just aren't testing for Internet Explorer these days. They're testing on modern browsers." So for your browsing needs, look elsewhere.

Dave Bittner: [00:08:17] Finally, here's a little cautionary tale about the physical destruction and disposal of electronic media. Don't just fling the stuff overboard and expect your data to vanish for good. Wildlife veterinarians in New Zealand were running a check of seal scat, which is a standard way of monitoring the health of various animals. As they were doing so, they found, in the scat, a USB drive that the animal had apparently swallowed and subsequently pooped out. We stress apparently because not only was the data on the drive easily recovered, it held videos of seals disporting themselves off the bow of a kayak. The owner has come forward.

Dave Bittner: [00:08:58] A seal enthusiast herself, she says she has an interest in all matters otarine or phocine, down to, and including, their scat. She thinks she accidentally dropped the dongle in some seal droppings she was checking out on a beach. Anywho, if a drive can survive whatever happened there, it will surely survive being just tossed out. Dispose of electronics securely and properly, and keep them out of the mouths of children and animals.

Dave Bittner: [00:09:34] And now, a word from our sponsor KnowBe4. Many of the world's most reputable organizations rely on Kevin Mitnick, the world's most famous hacker and KnowBe4's chief hacking officer, to uncover their most dangerous security flaws. Wouldn't it be great if you had insight into the latest threats and could find out, what would Kevin do? Well, now you can. Kevin and Perry Carpenter, KnowBe4's chief evangelist and strategy officer, will give you an inside look into Kevin's mind. You'll learn more about the world of penetration testing and social engineering with firsthand experiences and some disconcerting discoveries. In this webinar, you'll see exclusive demos of the latest bad-guy attack strategies. You'll find out how these vulnerabilities may affect your organization. And you'll learn what you can do to stop the bad guys. In other words, what would Kevin do? Go to knowbe4.com/cyberwire to register for the webinar. That's knowbe4.com/cyberwire. And we thank KnowBe4 for sponsoring our show.

Dave Bittner: [00:10:44] And joining me once again is Justin Harvey. He's the global incident response leader at Accenture. Justin, it's great to have you back. I wanted to touch base with you today on credential stuffing and how folks can protect themselves against it. Can we just start off at the beginning here? What are we talking about with - when we say credential stuffing?

Justin Harvey: [00:11:02] Credential stuffing is where an attack group - typically cybercriminals - wants to steal identity information or even, in some cases, credit cards, or to create fraudulent transactions on e-commerce sites. And the way that they do this is they go out on the public internet and, in some cases, even the dark web, and they download huge files of email address and password combinations. These files exist out there through intentional dumps from other attack groups, and they're freely available. In fact, there are even some websites that advertise it: enter your email address, and they'll tell you how many times you've been compromised.

Justin Harvey: [00:11:44] These dumps essentially become public domain when they hit the internet. So these adversaries grab those large files, and then they write scripts to try each of these username and password combinations against your e-commerce site. There are ways to prevent this, and in some cases, if not prevent it, then at least slow it down to a manageable level so that you can take action.

Justin Harvey: [00:12:19] So the first and the best course of action is to implement multifactor for your customers. Now, I know there may be some revenue people out there who are going to be saying, well, Justin, that's going to affect the customer experience, and we're going to see a certain percentage of lost revenue because our customers can't figure out multifactor. And I'm going to say there are two ways to go about this: you can take that little bit of customer experience hit, or you can wait until your site has become a victim of this and it becomes newsworthy, and you take the brand damage, or you take the hit of that.

Justin Harvey: [00:13:00] And in some cases - take the EU, for example - there could be a GDPR violation for not taking appropriate steps. So multifactor is the best course of action. It doesn't matter if it's SMS, Google Authenticator, a CAPTCHA or image selection. But there's got to be some way to verify the next step of identity after you put in your email address and password.
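
As a rough illustration of what that second step can look like, here is a minimal sketch of verifying a time-based one-time password (the scheme apps like Google Authenticator implement) with the pyotp library. The library choice, the account names and the surrounding login flow are illustrative assumptions, not anything described in the episode.

```python
# Minimal sketch of TOTP verification (the scheme apps like Google Authenticator implement),
# using the pyotp library. Secret storage and the login flow here are illustrative only.
import pyotp

# At enrollment: generate and store a per-user secret, and show it to the user as a QR code/URI.
secret = pyotp.random_base32()
provisioning_uri = pyotp.TOTP(secret).provisioning_uri(name="user@example.com", issuer_name="ExampleCU")

# At login, after the password check: verify the six-digit code the user submits.
def second_factor_ok(stored_secret: str, submitted_code: str) -> bool:
    totp = pyotp.TOTP(stored_secret)
    return totp.verify(submitted_code, valid_window=1)  # allow one time step of clock drift
```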

Justin Harvey: [00:13:28] One really effective way to see how many of your users have been affected by this is to essentially crack your own passwords. And what I mean by that - the way to go about this is to talk to your threat intelligence provider. I know we do this at iDefense at Accenture, where our customers will ask for the latest dump files out there - the millions of username and password combinations - and they'll put that into their system and essentially run the same encryption protocol on the dump file.

Justin Harvey: [00:14:04] And then they take each encrypted password and compare it against the valid encrypted passwords on their own site. And that way, if there's a match, you know that that user has reused a password somewhere else on the internet where it's been publicly exposed. And then you can do a few things. You can lock that user account, you can send them a helpful email, or you can reset their password and send them an email telling them they need to reset or unlock that account.
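
As a rough illustration of that dump-comparison idea, here is a minimal sketch in Python. It assumes the site stores salted bcrypt hashes, so each leaked password has to be verified per user rather than compared as a raw hash value; the file format, function names and bcrypt choice are assumptions for illustration, not Accenture's or iDefense's actual tooling.

```python
# Minimal sketch: flag accounts whose leaked password from a public dump still works on your site.
# Assumes stored password hashes are salted bcrypt; adapt to whatever scheme your site really uses.
import csv
import bcrypt

def load_dump(path):
    """Map email -> leaked plaintext passwords from a dump file with email,password rows."""
    leaked = {}
    with open(path, newline="", encoding="utf-8", errors="ignore") as f:
        for row in csv.reader(f):
            if len(row) >= 2:
                leaked.setdefault(row[0].strip().lower(), []).append(row[1])
    return leaked

def find_reused_accounts(dump_path, accounts):
    """`accounts` is an iterable of (email, stored_bcrypt_hash) pairs from your own user store."""
    leaked = load_dump(dump_path)
    at_risk = []
    for email, stored_hash in accounts:
        for candidate in leaked.get(email.lower(), []):
            if bcrypt.checkpw(candidate.encode(), stored_hash.encode()):
                at_risk.append(email)  # lock the account, force a reset, or notify the user
                break
    return at_risk
```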

Dave Bittner: [00:14:35] Now, what about things like rate limiting - just not letting people, you know, pound that login with attempt after attempt?

Justin Harvey: [00:14:43] You know, it's funny you say that. I literally just worked a case on that last month. And there are products out there in the market that can do that. I think this client was working with Akamai. They have something called Bot Manager, which looks for anomalous patterns in traffic in order to identify that. But one way to get around it - it takes a little more time and a bigger swath of hosts that the adversary has access to - is to do this in a low and slow manner.

Justin Harvey: [00:15:17] In fact, there are also ways to do this using human beings instead of a script. An adversary could even farm this out to 10, 20, 100 people, perhaps in low-wage countries, to run the attack by hand. So rate limiting is definitely recommended. It is effective, but it is not quite as effective as multifactor, and I wouldn't put all your eggs in that basket.
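
For a sense of what basic rate limiting looks like, here is a minimal in-memory sketch that throttles login attempts per key (a source IP or an account name). It's illustrative only: production systems typically back this with a shared store such as Redis or a commercial bot-management product, and, as Justin notes, low-and-slow attacks can still slip under any fixed threshold.

```python
# Minimal in-memory sketch of per-key login rate limiting (key could be a source IP or account).
# Real deployments would back this with a shared store such as Redis and combine it with MFA.
import time
from collections import defaultdict, deque

class LoginRateLimiter:
    def __init__(self, max_attempts=5, window_seconds=60):
        self.max_attempts = max_attempts
        self.window = window_seconds
        self.attempts = defaultdict(deque)  # key -> timestamps of recent attempts

    def allow(self, key):
        now = time.monotonic()
        recent = self.attempts[key]
        while recent and now - recent[0] > self.window:
            recent.popleft()           # drop attempts that fall outside the window
        if len(recent) >= self.max_attempts:
            return False               # throttle: too many attempts in the window
        recent.append(now)
        return True

limiter = LoginRateLimiter()
if not limiter.allow("203.0.113.7"):
    pass  # reject the login attempt, or require a CAPTCHA / step-up challenge
```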

Dave Bittner: [00:15:41] Yeah. All right. Well, Justin Harvey, thanks for joining us.

Justin Harvey: [00:15:45] Thank you very much.

Dave Bittner: [00:15:50] Now I'd like to share some words about our sponsor Cylance. AI stands for artificial intelligence, of course. But nowadays, it also means all image or anthropomorphized incredibly. There's a serious reality under the hype, but it can be difficult to see through to it. As the experts at Cylance will tell you, AI isn't a self-aware Skynet ready to send in the Terminators. It's a tool that trains on data to develop useful algorithms. And like all tools, it can be used for good or evil. If you'd like to learn more about how AI is being weaponized and what you can do about it, visit threatvector.cylance.com and check out their report, "Security: Using AI for Evil." That's threatvector.cylance.com. We're happy to say that their products protect our systems here at the CyberWire. And we thank Cylance for sponsoring our show.

Dave Bittner: [00:16:49] My guest today is Sandi Roddy. She's chief scientist for cyber warfare operations at Johns Hopkins University Applied Physics Lab. She joins us to share her expertise on the proper management of encryption keys and the importance of understanding the key life cycle.

Sandi Roddy: [00:17:07] We all seem to be very, very comfortable with the fact that, oh, click this button, invoke this thing and your data will be encrypted. But the missing piece in my mind is the life cycle approach - to say, when I need to do encryption, what is the entire set of concepts and ideas that I need to make sure I understand so that I don't unintentionally brick my data? One of the analogies that I think helps: I have one of those locks on my front door where I can set different key codes for the different people who are coming into and out of my house when I'm not here.

Dave Bittner: [00:17:44] Sure.

Sandi Roddy: [00:17:44] And that allows me the ability to manage my key to my front door. And it starts with the fact that I knew I needed to be able to allow different things to happen. I knew the purpose of ingress and egress of my house. And I knew that there were periods within which certain keys would be active and certain keys would then become deactivated. So that's a beginning piece of trying to understand key management.

Dave Bittner: [00:18:12] Well, let's dig in some here. Can you describe to us - what are you talking about when you're putting out this notion of the life cycle of these keys?

Sandi Roddy: [00:18:22] So the first thing you need to understand is, what kind of information do I need to encrypt? What kind of keys do I want to apply to it? Who's going to have access to those keys? And who's actually going to manage the keys? NIST has done a phenomenal job with the FIPS 140 criteria, so when you go buy an appliance that's going to generate your key for you, we know that it's good. But what we don't know is how many people are going to use that key. Where's the appliance going to be stored? What are the administrators going to be doing? And how are you going to be auditing those administrator functions?

Dave Bittner: [00:19:06] So it really is sort of - it's a circular lifecycle. Things come around in sort of a natural transition from step to step.

Sandi Roddy: [00:19:16] Exactly. Everything is cyclical in the approach that, one, generally if you're doing this properly, you don't create a key that you use in perpetuity, because we all watch academics push further and further into how to break key. I mean, it's an active challenge for academics and mathematicians to be able to say, oh, I can factor the next whatever key size of RSA is out there, because they spend their lives doing that.

Sandi Roddy: [00:19:47] So if you're still using key material that is smaller than whatever's being factored today, you're essentially wasting your time. So you have to have this cyclical approach that allows you to iteratively improve the mechanisms that you're using and the way that you're approaching key.
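
To make that cyclical, iterative approach concrete, here is a minimal key-rotation sketch using the Python cryptography package's Fernet recipe. The keys, storage and schedule are illustrative assumptions; the point is simply that ciphertext produced under an aging key gets re-encrypted under a fresh one before the old key is retired.

```python
# Minimal sketch of periodic key rotation using the `cryptography` package's Fernet recipe.
# The schedule and storage here are illustrative; real systems track key IDs and rotate in bulk.
from cryptography.fernet import Fernet, MultiFernet

old_key = Fernet.generate_key()        # key currently protecting stored data
new_key = Fernet.generate_key()        # freshly generated replacement key

ciphertext = Fernet(old_key).encrypt(b"customer record")

# Rotation: MultiFernet decrypts with any listed key and re-encrypts with the first (newest) one.
rotator = MultiFernet([Fernet(new_key), Fernet(old_key)])
rotated = rotator.rotate(ciphertext)

# Once every record has been rotated, the old key can be retired (destroyed or archived per policy).
assert Fernet(new_key).decrypt(rotated) == b"customer record"
```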

Dave Bittner: [00:20:05] And what are some of the areas where focus falls short on this? Where do they drop the ball?

Sandi Roddy: [00:20:11] The tendency is to say, oh, I need key to encrypt this kind of data. And so I'm going to go buy that product and bring it in without stepping back and saying, what's the full range of technologies that I'm using? And are the decisions that I'm making for applying cryptographic solutions consistent with what my IT environment looks like?

Sandi Roddy: [00:20:36] The other piece I see is looking at the hierarchy - what is the most secure set of solutions that you can apply, and then how do you work your way down into picking a set of solutions? For example, I think one of the things people generally do is they pick one solution and say, oh, that's going to work for everything. But if you've got storage area networks and you've got file encryption and you've got hard drives, you have to understand exactly how your IT environment works and what you've got, and then what are the solutions that you can bring in and replace.

Dave Bittner: [00:21:14] I think folks have a natural tendency to want to sort of set it and forget it. I suspect in this case, that can lead to some real problems.

Sandi Roddy: [00:21:24] Yes, and the first piece of it, that I had mentioned earlier about the lifecycle of the key, is that keys do age off. And keys don't retain the security functions that one expects of them from Day One initialization of key. So that's a big part of it. And then adjusting for where your data is and the priority of who has access to that data.

Sandi Roddy: [00:21:51] For example, if you have administrators that are able to get to your unprotected key material and they leave, you want to have processes in place that can adapt and adjust for that. And again, I'm not picking on administrators as being nefarious by any stretch of the imagination. But you have to understand who has access to the crown jewels. And that's what keys are.

Sandi Roddy: [00:22:16] And then what are your plans, before you give them access to those keys, for adjusting and responding to the fact that they may leave, and they may move on? You want to be able to say, I have mechanisms in place so that when my administrators move on or I need to replace them, I can also replace the key.

Dave Bittner: [00:22:36] Now, what about the protection of the key itself, the security of the key itself? I'm thinking of sort of the real world analogy of having a lock on your front door. And it's one thing to leave the key under the mat. It's another thing to put a sign on the front door that says, the key is under the mat.

Sandi Roddy: [00:22:52] Yes, and that's absolutely true. And what we find is that vendors don't always tell you where the key is when it's stored. I am a huge fan of hardware security modules, especially some of the ones that have proven time and time again that they do protect the key while the key is in there. So again, your security technologists and architects need to understand where the key is during the information lifecycle and the encrypt-decrypt lifecycle. It has to be unencrypted in memory - that's just the nature of using key. But do you have audit processes in place to be able to understand which applications are pulling the key out of memory?

Sandi Roddy: [00:23:37] So you look at where it is at rest. You look at where it is in motion. And then you understand the protections - whether they're actually, and here we're getting into onions of keys, encrypting the key, or whether it's in a protected process when it's being used. So it can get very complicated. And I think that's why most people want to have somebody else tell them, here's your solution, and here's what you should do. Just push my easy button, and you're all good. But I think we all owe it to ourselves to dive into it a little bit deeper and have some sense of assurance that it is functioning as intended.
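
Here is a minimal sketch of that "onions of keys" idea - wrapping a data-encryption key with a key-encryption key - using AES-GCM from the Python cryptography package. In a real deployment the key-encryption key would live inside an HSM or a key-management service and never leave it; keeping both keys in local variables here is purely for illustration.

```python
# Minimal sketch of "onions of keys": a data-encryption key (DEK) protects the data,
# and a key-encryption key (KEK) protects the DEK. In practice the KEK would live in
# an HSM or cloud KMS and never leave it; both are local here for illustration only.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

kek = AESGCM.generate_key(bit_length=256)   # stand-in for an HSM-held key
dek = AESGCM.generate_key(bit_length=256)   # per-dataset data-encryption key

# Encrypt the data with the DEK.
data_nonce = os.urandom(12)
ciphertext = AESGCM(dek).encrypt(data_nonce, b"sensitive record", None)

# Wrap (encrypt) the DEK with the KEK; store only the wrapped DEK alongside the data.
wrap_nonce = os.urandom(12)
wrapped_dek = AESGCM(kek).encrypt(wrap_nonce, dek, None)

# To decrypt: unwrap the DEK with the KEK, then decrypt the data.
recovered_dek = AESGCM(kek).decrypt(wrap_nonce, wrapped_dek, None)
assert AESGCM(recovered_dek).decrypt(data_nonce, ciphertext, None) == b"sensitive record"
```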

Dave Bittner: [00:24:19] That’s Sandi Roddy. She's from the Johns Hopkins University Applied Physics Lab.

Dave Bittner: [00:24:28] And that's the CyberWire. Thanks to all of our sponsors for making the CyberWire possible, especially our supporting sponsor, ObserveIT, the leading insider threat management platform. Learn more at observeit.com. The CyberWire podcast is proudly produced in Maryland, out of the startup studios of DataTribe, where they're co-building the next generation of cybersecurity teams and technology. Our CyberWire editor is John Petrik, social media editor Jennifer Eiben, technical editor Chris Russell, executive editor Peter Kilpe. And I'm Dave Bittner. Thanks for listening.