The CyberWire Daily Podcast 10.29.19
Ep 959 | 10.29.19

Fancy Bear paws at anti-doping agencies. Johannesburg says no to the Shadow Kill Hackers. Adwind jRAT’s new misdirection. US FCC versus Huawei, ZTE. Georgia hacked.

Transcript

Dave Bittner: [00:00:03] Fancy Bear is pawing at anti-doping agencies again, suggesting more to come for the 2020 Tokyo Olympics. Johannesburg has declined to pay the Shadow Kill Hackers the money they demanded. Adwind jRAT has gotten a bit harder to detect. The U.S. FCC is considering a measure that would prevent certain funds from being used to purchase Huawei or ZTE gear. Pwn2Own goes ICS. Georgia is hit by unknown hackers. And Magecart appears in an American Cancer Society website. 

Dave Bittner: [00:00:39]  And now a word from our sponsor Coalfire. When organizations stand up new services or move existing applications to the cloud, IT security efforts need to be coordinated with business units and partners. A common question inevitably arises. Is security the cloud platform provider's responsibility, or is it the customer's responsibility? To optimize data security, you must clearly articulate who owns what, identify security gaps and determine who will close those gaps. With the introduction of the HITRUST Shared Responsibility Program, there is now a solid path to address the misunderstandings, risks and complexities when partnering with cloud service providers. Coalfire has delivered hundreds of HITRUST CSF certifications since 2011, and they help organizations clarify the roles and responsibilities of security controls that protect information. They've certified the leading global cloud service providers and can help you migrate data to the cloud securely. Find out more from Coalfire, the HITRUST cloud assessor, at coalfire.com/hitrust. That's coalfire.com/hitrust. And we thank Coalfire for sponsoring our show. Funding for this CyberWire podcast is made possible in part by McAfee - security built by the power of harnessing 1 billion threat sensors from device to cloud, intelligence that enables you to respond to your environment and insights that empower you to change it. McAfee - the device-to-cloud cybersecurity company. Go to mcafee.com/insights. 

Dave Bittner: [00:02:27]  From the CyberWire studios at DataTribe, I'm Dave Bittner with your CyberWire summary for Tuesday, October 29, 2019. Microsoft yesterday reported finding indications that Russia's GRU has resumed targeting networks of anti-doping agencies that police international sports. Microsoft refers to the GRU as Strontium. Others refer to it as Fancy Bear or APT28. Fancy Bear was active against anti-doping groups during the last Olympiad, when officials disqualified Russian teams for widespread use of performance-enhancing drugs. 

Dave Bittner: [00:02:54]  Microsoft's notice suggests that Moscow has neither forgotten nor forgiven and that organizations connected with what's called the Olympic movement can expect more hostile attention in cyberspace through next summer's Tokyo Games. Japanese authorities have been aware of and preparing for cyberthreats to the games since 2015 at least. 

Dave Bittner: [00:03:17]  Anti-doping organizations have received the attentions of the GRU before. In October of 2018, the U.S. Department of Justice indicted seven officers of the Russian military intelligence service on charges related to the hacking of such agencies. Microsoft didn't specify which organizations were the subject of this most recent round of hacking, but it has warned them that they do figure in the attackers' plans. Fancy Bear has been generally regarded as responsible for the Olympic Destroyer malware that hit the 2018 Winter Olympics in South Korea. That particular campaign was false-flagged in ways designed to lead to the conclusion that North Korea was responsible for the attacks. The imposture worked for a while but was debunked within a matter of weeks. 

Dave Bittner: [00:04:02]  In South Africa, the city of Johannesburg has declined to pay the ransom the Shadow Kill Hackers demanded and has called upon international support to help with recovery. The deadline for payment expired last night, and there are no signs that the attackers have so far made good on their threats. Authorities say they've restored some 80% of the online services used by the city of 5 million. 

Dave Bittner: [00:04:27]  Researchers at Menlo Security say the Adwind jRAT has grown more difficult to detect. The malware is an information stealer that, for the most part, has been used to collect passwords from infected systems. The newest version, which, in a departure from Adwind jRAT's earlier platform-agnostic manifestations, seems to be targeting Windows machines, usually arrives as a Java Archive file attached to a phishing email or downloaded from an old WordPress-based watering hole. The initial Java Archive file is obfuscated in ways that make behavioral or signature-based detection difficult. In effect, Menlo Security says, the malware is hiding in plain sight. Eventually, of course, it has to reveal itself by sending stolen credentials to a remote server. And that, Menlo says, is what will blow the gaff to alert defenders. 
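
Menlo's point about detection at the network edge can be made concrete. What follows is a minimal sketch, not Menlo Security's actual method: it assumes a host agent with the psutil library, and the allowlist of known-good destinations is purely illustrative. The idea is simply that a Java Archive that has hidden from signature scanning still has to open an outbound connection to hand off stolen credentials, and that behavior is observable.

# Sketch: flag Java processes with outbound connections to unapproved hosts.
# Assumes the psutil library; the allowlist below is purely illustrative.
import psutil

ALLOWED_REMOTE_HOSTS = {"10.0.0.25", "192.168.1.40"}  # known-good internal services

def suspicious_java_connections():
    findings = []
    for proc in psutil.process_iter(["pid", "name"]):
        name = (proc.info["name"] or "").lower()
        if "java" not in name:  # Adwind arrives as a Java Archive, so it runs under a JVM
            continue
        try:
            for conn in proc.connections(kind="inet"):
                if conn.status == psutil.CONN_ESTABLISHED and conn.raddr:
                    if conn.raddr.ip not in ALLOWED_REMOTE_HOSTS:
                        findings.append((proc.info["pid"], name, conn.raddr.ip, conn.raddr.port))
        except (psutil.AccessDenied, psutil.NoSuchProcess):
            continue
    return findings

if __name__ == "__main__":
    for pid, name, ip, port in suspicious_java_connections():
        print(f"Review PID {pid} ({name}): outbound connection to {ip}:{port}")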

Dave Bittner: [00:05:19]  We've been checking in with Robb Reck from Ping Identity on his CISO advisory council's research for 2019. Their final report is titled "5 Steps to Improve API Security." 

Robb Reck: [00:05:31]  The API kind of behind-the-scenes movement has really changed the way IT and development teams work. Instead of all, you know, the majority of user interaction - or excuse me - system interaction happening, like, through a webpage where a system, you know, goes to a browser and accesses it, the vast majority of transactions, the vast majority of business being done is behind the scenes. An API is where systems talk to each other. 

Dave Bittner: [00:05:54]  And so what are the practical implications of that? 

Robb Reck: [00:05:57]  Well, we've built these security teams and development teams that are really better at - you know, especially from a security perspective, we're better at, you know, securing web apps. We're better at, you know, doing vulnerability scans for systems. We don't necessarily understand what is - how does an API work. And even our pen testers - they're - you know, they're much better at focusing on those web application systems. And so we really need to start thinking about, what are the - what's the impact, what's the implications of having these APIs that we're not quite as skilled at dealing with? 

Dave Bittner: [00:06:24]  Well, let's go through the steps together here. Walk us through what you recommend. 

Robb Reck: [00:06:30]  Yeah. So I'd say these are five steps to get started, right? We're certainly not expecting that you finish these five steps and you're done. But I know there's a lot of departments out there that just don't know even, you know, where to begin. So step one is you got to know what APIs are in use, and this is, you know, no different than any other kind of part of security. 

Robb Reck: [00:06:47]  But knowing your systems, knowing your infrastructure - generally, I'd imagine that, you know, if you ask a security team how many APIs they have, they, No. 1, don't have the answer. And No. 1, if they'd take a guess, they would guess, you know, way too few. And the only way you start to know this is by really teaming up with other departments. No. 1 is you have to talk to your development folks. That's the folks who are creating APIs and putting out new ones all the time. Work with them. Work with your IT, maybe your information systems teams that manage those different systems and start to put together that single inventory of systems. 
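
As an illustration of what that first inventory pass can look like in practice (the approach, file layout, and field names below are assumptions, not something prescribed in the interview), teams that already publish OpenAPI or Swagger specifications can harvest them programmatically. A minimal sketch in Python, assuming PyYAML is installed and specs live as .yaml files in checked-out repositories:

# Sketch: build a rough API inventory from OpenAPI specs found in source repos.
# Assumes PyYAML is installed and that teams keep specs as *.yaml files; anything
# beyond the standard OpenAPI "paths" key is illustrative.
import pathlib
import yaml

HTTP_METHODS = {"get", "post", "put", "patch", "delete", "head", "options"}

def collect_endpoints(repo_root: str):
    inventory = []
    for spec_file in pathlib.Path(repo_root).rglob("*.yaml"):
        try:
            spec = yaml.safe_load(spec_file.read_text())
        except yaml.YAMLError:
            continue
        if not isinstance(spec, dict) or "paths" not in spec:
            continue  # not an OpenAPI document
        for path, operations in spec["paths"].items():
            if not isinstance(operations, dict):
                continue
            for method in operations:
                if method.lower() in HTTP_METHODS:
                    inventory.append((spec_file.name, method.upper(), path))
    return inventory

if __name__ == "__main__":
    for source, method, path in collect_endpoints("./repos"):
        print(f"{source}: {method} {path}")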

Dave Bittner: [00:07:21]  What's No. 2 on your list? 

Robb Reck: [00:07:22]  Gain visibility into the activity. So it's not just what APIs are there. It's, like, what do they do? What are the purposes of these things? And that's not so easy, you know, especially in this kind of dev ops, you know, continuous integration, continuous deployment world we live in where an API that existed today doing one thing very well tomorrow may do a lot of additional things. So we really need to start seeing, what does normal behavior look like on those APIs, so we can understand where is the biggest risk. You know, we don't - we probably don't have the resources to go look into every single API. But as we know which ones deal in high-value transactions, maybe financial transactions, maybe health transactions, we can start to focus our attention on those higher-risk APIs. 
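
One hedged illustration of what seeing normal behavior can mean day to day, assuming the organization can export access logs from whatever gateway fronts its APIs; the log format, filename, and "sensitive" keywords below are assumptions made for the sake of the example:

# Sketch: baseline per-endpoint call volume from gateway access logs and flag
# endpoints whose paths suggest high-value data. Log format, file name, and the
# keyword list are illustrative assumptions.
import re
from collections import Counter

LOG_LINE = re.compile(r'"(?P<method>[A-Z]+) (?P<path>\S+) HTTP/[\d.]+"')
SENSITIVE_HINTS = ("payment", "account", "patient", "transfer")

def baseline(log_path: str):
    counts = Counter()
    with open(log_path) as handle:
        for line in handle:
            match = LOG_LINE.search(line)
            if match:
                counts[(match["method"], match["path"])] += 1
    return counts

if __name__ == "__main__":
    for (method, path), calls in baseline("gateway_access.log").most_common():
        marker = " <- review first" if any(hint in path.lower() for hint in SENSITIVE_HINTS) else ""
        print(f"{calls:8d}  {method} {path}{marker}")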

Dave Bittner: [00:08:04]  Well, speaking of resources, your third tip here is to assemble the right resources. 

Robb Reck: [00:08:09]  Yeah. So this is not just, you know, put your security, you know, your network firewall admins on your APIs and expect them to be able to be effective, right? We really need to figure out who can be effective. And it's kind of a hybrid position where someone who has a security mindset but has the skills of a developer who understands how APIs work, what do they do - and this is, you know, one of the big challenges we have. And you know, you talk about it on the CyberWire all the time - the skills gap - right? - the security skills gap that we run into. This is a place where I think it's probably most obvious - is finding a - an experienced, security-focused API person. That's just about impossible. 

Robb Reck: [00:08:47]  What we need to do instead is let's find someone in the organization who has either a strong development background and has some interest in security or a security person who's willing to put in the time to learn how APIs work. Let's get some of those resources together, whether they're located in security or in development - doesn't matter that much as long as both teams are kind of part of that process. 

Dave Bittner: [00:09:07]  Yeah, and that goes right to your No. 4, which is, assign ownership of API security. 

Robb Reck: [00:09:13]  Yeah, you got it. And I'd say - once again, it doesn't matter to me who owns it. My problem is - what I see too much of is they say, well, we both are a part of the solution; no one really specifically owns it. And you know, I firmly believe if nobody owns something, then - or excuse me - everybody owns something, then nobody owns it. You have to have an individual department, individual person who's going to be held accountable for that. And then, of course, they'll partner up with the others. 

Robb Reck: [00:09:35]  So if security is going to own it, then they're going to depend a ton on development to be effective. And if development owns it, you know, that means that they have to be the ones to answer for, you know, why were your APIs not secure versus asking the CISO to come, you know, be the one to answer if development was the one doing it. 

Dave Bittner: [00:09:50]  And then your fifth item on the list is, address API security by design. 

Robb Reck: [00:09:54]  Yeah. This is - I mean, this is clear for any kind of development, but I think it's especially important in APIs because of the nature that generally, API development comes along in this much more agile, much, you know, more dev ops-y type of environment. When you start to see these quick changes, they really can have big impacts. You know, APIs have been notoriously abused in a number of big breaches, you know? 

Robb Reck: [00:10:17]  I guess the Cambridge Analytica scandal - not so much a breach, but an abuse of APIs. And we've had lots of other examples of that. The only way you address that is by starting to get the threat model for APIs considered and developed much earlier in the process. When I say the threat model, I mean, well, who would be interested in going after this data that the API protects? How could they do it? Where are they coming from? - and make sure we're developing around that threat model. 

Robb Reck: [00:10:44]  And then one other element there is all development of APIs should probably be done considering as though this API were going to be externally exposed, versus kind of trusting that it's behind a firewall and we're not going to worry about having to be as security-focused there. Over time, you know, exposure changes. These things have to change. And in this world we're living in, where borders and firewalls aren't necessarily going to be there for very long, really focusing on making sure we have a resilient API is one of the keys. 
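
To make the design-as-though-externally-exposed point concrete, here is a minimal sketch (an illustration, not a pattern Reck prescribes) using Flask, where authentication is enforced on every request by default instead of being trusted to a perimeter firewall; the bearer-token comparison is a placeholder for whatever verifier an organization actually uses:

# Sketch: treat every route as internet-facing by requiring auth on all requests,
# rather than trusting a perimeter firewall. Flask and the static-token check are
# illustrative choices, not a prescription from the interview.
from flask import Flask, request, abort, jsonify

app = Flask(__name__)
EXPECTED_TOKEN = "replace-with-a-real-verifier"  # in practice, verify a signed JWT or mTLS identity

@app.before_request
def require_auth():
    # Runs before every handler, so even "internal-only" endpoints stay protected
    # if the network boundary around them changes later.
    auth = request.headers.get("Authorization", "")
    if auth != f"Bearer {EXPECTED_TOKEN}":
        abort(401)

@app.route("/internal/reports")
def internal_reports():
    return jsonify({"status": "ok"})

if __name__ == "__main__":
    app.run(port=8080)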

Dave Bittner: [00:11:12]  That's Robb Reck from Ping Identity. 

Dave Bittner: [00:11:16]  The U.S. Federal Communications Commission has proposed rules that would prevent recipients of Universal Service Funds from using that money to purchase equipment or services from companies that threaten national security. The measure, which the FCC will vote on this November 19, isn't restricted to any particular companies or countries, but the commission specifically calls out Huawei and ZTE as examples of the companies it has in mind. USF money is designed to support rural telecommunications infrastructure. So should the measure pass in November, it won't amount to a ban, but it will be a powerful disincentive to using products and services from the two Chinese companies. The selection of the USF as a tool to use against Huawei and ZTE is significant. The companies' reputation for low cost has made them attractive to carriers serving rural areas and closing gaps in the proverbial last mile. The loss of USF money would change the economic calculation. 

Dave Bittner: [00:12:16]  Pwn2Own will add industrial control systems to its bug-hunting target list this January, according to Dark Reading. They point out that they're not going to ship, obviously, pump controls or centrifuges to the conference venue, but Trend Micro, which is running the program, believes it's found suitable software-based products to make the exercise interesting. 

Dave Bittner: [00:12:38]  An unattributed cyberattack against Georgian targets has taken down some 2,000 websites and the national television station, according to the BBC. The website attacks were, for the most part, defacements. There's no attribution yet but, as is usually the case in the former Soviet republics that make up the independent states of the near abroad, the speculation in the country of Georgia is that the hackers were Russian. That's, of course, premature and merely a priori. There could be any number of other threat actors responsible. The story is developing. Whoever is behind the attack, they seem to have a taste for 1980s American science fiction, assuming that "I'll Be Back" is in fact the homage to "The Terminator" it appears to be. 

Dave Bittner: [00:13:22]  And finally, creeps using the Magecart card-scraping malware that's afflicted many e-commerce sites over the past year have turned to a new target, the American Cancer Society's online store. The code was injected last week, was removed at some point after researchers at Sanguine Security found and disclosed it, and now seems to be gone. But if you've used a card recently on the American Cancer Society site, do check with your card company. 
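
For site operators rather than shoppers, one rough self-check against this kind of injection, offered purely as an illustration and not as Sanguine Security's method, is to compare the script sources a checkout page loads against an allowlist. A minimal Python sketch; the URL and allowed hosts are invented, and this only sees scripts present in the static HTML, not ones added later by other scripts:

# Sketch: list the external script sources a checkout page loads and flag any
# that are not on an allowlist -- a rough first pass at spotting an injected
# card skimmer. URL and allowlist are illustrative.
from html.parser import HTMLParser
from urllib.parse import urlparse
import urllib.request

ALLOWED_SCRIPT_HOSTS = {"www.example.org", "cdn.example.org"}

class ScriptCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.sources = []
    def handle_starttag(self, tag, attrs):
        if tag == "script":
            src = dict(attrs).get("src")
            if src:
                self.sources.append(src)

def check_page(url: str):
    html = urllib.request.urlopen(url).read().decode("utf-8", errors="replace")
    collector = ScriptCollector()
    collector.feed(html)
    for src in collector.sources:
        host = urlparse(src).netloc
        if host and host not in ALLOWED_SCRIPT_HOSTS:
            print(f"Unexpected third-party script: {src}")

if __name__ == "__main__":
    check_page("https://www.example.org/checkout")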

Dave Bittner: [00:13:54]  And now a word from our sponsor, KnowBe4. Having spent over a decade as part of the CIA's Center for Cyber Intelligence and the Counterterrorism Mission Center, Rosa Smothers knows the ins and outs of leading cyber operations against terrorists and nation-state adversaries. She's seen firsthand how the bad guys operate. She knows the threat they pose. And she can tell you how to use that knowledge to make organizations like yours a hard target. Get the inside spy scoop and find out why Rosa - now KnowBe4's SVP of cyber operations - encourages organizations like yours to maintain a healthy sense of paranoia. Go to knowbe4.com/cia to learn more about this exclusive webinar. That's knowbe4.com/cia. And we thank KnowBe4 for sponsoring our show. 

Dave Bittner: [00:14:55]  And joining me once again is Daniel Prince. He's a senior lecturer in cybersecurity at Lancaster University. Daniel, welcome back. We wanted to touch today on risk management and uncertainty. What do you have to share with us today? 

Daniel Prince: [00:15:08]  Well, thank you for having me back on. So I've been doing quite a lot of work looking at risk management and thinking about, actually, what do we mean by risk? When you start to look at some of the formal definitions, risk is really looking at a system where we can know all the specific outputs and we can assign probabilities to those possible outputs. The problem with - I'm finding with digital systems is that the ability to be able to enumerate all the possible outcomes, all the possible problems that that system has, is nearly impossible because of the complexities of the system. 

Daniel Prince: [00:15:49]  And that leads us into, really, the concepts of uncertainty, where we can - we know some of the possible outcomes, but we just don't know all of the possible outcomes. And therefore, it becomes much more complicated to have a quantitative-based system to understand where all the probabilities of all the different outcomes happen. And so for me, this is really important when we start to talk about things like systemic risk within systems. So systemic risk is this concept that there is an underlying big problem that could actually change the way that people behave. But that assumes that, one, we can identify all the possible outcomes and assign probabilities, and two, that we know the whole system. My point here in thinking is that we can't know all the possible outcomes so we have to start thinking about systemic uncertainty. 

Daniel Prince: [00:16:44]  And that leads you on to, instead of doing, really, a lot of planning, a lot more thinking about, how do we respond to incidents? Which is one of the reasons why, when I'm teaching and thinking about risk management, I'm actually thinking more about, how do we prepare people to be able to respond effectively to the materialization of unintended or bad events within a particular system, including the people and the technology? 
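
The formal picture of risk that Prince is contrasting with uncertainty can be shown in a few lines: enumerate the outcomes, attach probabilities and losses, and an expected loss falls out. The outcomes and figures below are invented purely for illustration; the point is that the arithmetic is only as trustworthy as the completeness of the table, which is exactly what he argues cannot be guaranteed for complex digital systems.

# Sketch: the classic quantitative risk picture -- enumerate outcomes, assign
# probabilities, compute expected loss. The outcomes and numbers are invented;
# Prince's argument is that for real digital systems this table can never be complete.
outcomes = {
    "phishing-led credential theft": {"probability": 0.30, "loss": 50_000},
    "ransomware outage":             {"probability": 0.05, "loss": 400_000},
    "accidental data exposure":      {"probability": 0.10, "loss": 120_000},
    # ...the outcomes we failed to imagine carry no row at all, which is the problem.
}

expected_loss = sum(o["probability"] * o["loss"] for o in outcomes.values())
print(f"Expected annual loss (from the enumerated outcomes only): ${expected_loss:,.0f}")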

Dave Bittner: [00:17:11]  Now, do you find that people approach this in a logical way? Do people come at it thinking that they can eliminate all risk? Do they have unrealistic expectations? 

Daniel Prince: [00:17:21]  I think the unrealistic expectations starts with believing they can know all the possible outcomes that the computer system could generate. And that's, in some ways, a little bit of a naive position to take. And I think if you talk to a lot of technologists, they wouldn't take that position. But a lot of other people who are not completely aware of the complexities of computer systems do take that position and believe that you can know all the outputs. 

Daniel Prince: [00:17:48]  But there is often sort of, I find, a bit of a bias, sort of an overconfidence bias within some technical people within risk management, that they assume that they can know all the possible outcomes and quantify them, and then they're dealt with. The reality is, I think, it's much more important for a whole organization to be really prepared to face an incident. And that's not just the technical people, but that's also all of the business people all across the whole organization, and thinking about how the organization really responds as a collective of people to support the organization to deal with a specific threat. 

Dave Bittner: [00:18:33]  Yeah. It strikes me that it's not unlike how we deal with ourselves, our human bodies and our frailties, and our ability to get sick. You know, so you can do everything. You can wash your hands. You can, you know, not sneeze on your co-workers. But still, people are going to get colds. People are going to get the flu. And as an organization, you have to be prepared for that, that sometimes people aren't going to be able to show up for work. 

Daniel Prince: [00:18:58]  Yeah. That's it. And it's one of the really interesting things about, you know, in our day-to-day lives, we're quite happy with uncertainty, most of us. We're quite happy to be able to deal with the unintended outcome, the things we didn't think about. We are capable of doing that. And we accept that we have that in our daily lives. But what's interesting when it comes to computer systems, because it is technology and because it's engineered, there is this kind of, well, why can't we know everything? That's the question that sort of comes out. But if you take a standard computer system, and you've got some hardware that we don't know what's in it. We, you know, don't know where there's vulnerabilities. 

Daniel Prince: [00:19:39]  So things like Meltdown and Spectre are key examples of that. Then we put an operating system on top of that, which could have some problems. And then we install a wide variety of applications on top of that. And we don't - you know, no one installation is exactly the same as the other. So every single system we have, and all the systems that interconnect us, can be considered as unique as every single person on the planet. So when you start to think about it like that then it's - you know, we really need to start to think about doing the best defense we can, but also be able to respond as effectively as we can, as well. 

Dave Bittner: [00:20:14]  All right. Daniel Prince, thanks for joining us. 

Dave Bittner: [00:20:21]  And that's the CyberWire. 

Dave Bittner: [00:20:23]  Thanks to all of our sponsors for making the CyberWire possible, especially our supporting sponsor, ObserveIT, the leading insider threat management platform. Learn more at observeit.com. 

Dave Bittner: [00:20:34]  Don't forget to check out the "Grumpy Old Geeks" podcast, where I contribute to a regular segment called "Security Ha." I join Jason and Brian on their show for a lively discussion of the latest security news every week. You can find "Grumpy Old Geeks" where all the fine podcasts are listed. And check out the "Recorded Future" podcast, which I also host. The subject there is threat intelligence, and every week we talk to interesting people about timely cybersecurity topics. That's at recordedfuture.com/podcast. 

Dave Bittner: [00:21:03]  The CyberWire podcast is proudly produced in Maryland out of the startup studios of DataTribe, where they're co-building the next generation of cybersecurity teams and technology. Our amazing CyberWire team is Stefan Vaziri, Kelsea Bond, Tim Nodar, Joe Carrigan, Carole Theriault, Nick Veliky, Bennett Moe, Chris Russell, John Petrik, Jennifer Eiben, Peter Kilpe. And I'm Dave Bittner. Thanks for listening. We'll see you tomorrow.