The CyberWire Daily Podcast 11.9.20
Ep 1211 | 11.9.20

Supply chain security. New cyberespionage from OceanLotus. Data breaches expose customer information. And GCHQ has had quite enough of this vaccine nonsense, thank you very much.

Transcript

Dave Bittner: Alerts and guidelines on securing the software supply chain. OceanLotus is back with its watering holes. Two significant breaches are disclosed. Malek Ben Salem from Accenture Labs explains privacy attacks on machine learning. Rick Howard brings the Hash Table in on containers. And, hey, we hear there's some weird stuff out there about vaccines, but GCHQ is on the case.

Dave Bittner: From the CyberWire studios at DataTribe, I'm Dave Bittner with your CyberWire summary for Monday, November 9, 2020. 

Dave Bittner: The U.S. FBI last week made public an alert issued on a restricted basis back in October. The alert warned that unknown actors had exploited insecurely configured instances of the SonarQube code review tool to steal source code from companies and government agencies. 

Dave Bittner: ZDNet summarizes the research into and remediation of the issue. While the industry has been rife with warnings of the ways in which MongoDB and Elasticsearch databases can be left exposed, the comparable problem of exposing SonarQube was often overlooked. But the consequences of an unsecured SonarQube instance are significant for the software supply chain, since the tool is used in checking code during development. 

Dave Bittner: The typical problem is that organizations using SonarQube have left in place default configurations on port 9000 and default admin credentials. Those default credentials are admin/admin. That ought to be a red flag for everyone. Admin/admin is about as good as username/password, so do remember to change those defaults. 
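
For listeners who want to audit their own deployments, here is a minimal sketch in Python, assuming the standard SonarQube web API endpoint api/authentication/validate and a hypothetical host list; it simply reports whether an instance still accepts the default admin/admin credentials. Only point it at servers you are authorized to test.

```python
# Minimal sketch: flag SonarQube instances that still accept admin/admin.
# Assumes the standard web API endpoint api/authentication/validate; the
# host list below is hypothetical - substitute your own inventory.
import requests

HOSTS = ["http://sonarqube.internal.example.com:9000"]

for host in HOSTS:
    try:
        resp = requests.get(
            f"{host}/api/authentication/validate",
            auth=("admin", "admin"),   # the defaults the FBI alert warns about
            timeout=10,
        )
        if resp.ok and resp.json().get("valid"):
            print(f"[!] {host} still accepts the default admin/admin credentials")
        else:
            print(f"[ok] {host} rejected the default credentials")
    except requests.RequestException as exc:
        print(f"[?] {host} unreachable: {exc}")
```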

Dave Bittner: Calling the pandemic a wake-up call, the U.S. Cybersecurity and Infrastructure Security Agency has released a set of lessons learned on "Building a More Resilient Information Technology and Communications Supply Chain." Noting the ways in which the supply chain has been globalized, as the document says, a product may be designed in New York, built in Vietnam, tested in Taiwan, stored in Hong Kong and sent to China for final assembly before it's distributed. 

Dave Bittner: CISA's task force identifies three primary areas in which supply chains are vulnerable. Those are, first, lean inventory approaches, second, undiversified suppliers, and third, ignorance of lower-tier suppliers. Their recommendations fall into these categories - proactive risk classification, mapping the corporate supply chain, broadening the supplier network and regional footprint, potential development of standardized mapping and other illumination tools, working to shift the optimal amount of inventory held, and planning alternatives in logistics and transportation. 

Dave Bittner: Researchers at the security firm Volexity report that OceanLotus, the Vietnamese cyber-espionage crew also known as APT32, is using an array of bogus websites and Facebook pages to attract victims. CyberScoop notes that OceanLotus has, since its discovery in 2017, been particularly active against foreign corporations doing business in Vietnam. 

Dave Bittner: Two significant data breaches have come to light and are currently under investigation. The Indian online grocer BigBasket has sustained a data breach, exposing the data of about 20 million users. According to BloombergQuint, the cyber intelligence firm Cyble has informed the Bengaluru police Cyber Crime cell that it's detected criminals selling leaked data on the dark web for some 3 million rupees, or a bit more than $40,000. The data at risk includes email addresses, phone numbers, order details and physical addresses. So it's not the gold standard of fullz, but it's a serious breach, nonetheless. 

Dave Bittner: The other data exposure incident affected the Spanish firm Prestige Software, whose channel management platform, Cloud Hospitality, automates hotel accommodation availability for delivery to online booking services, such as Expedia and booking.com. Website Planet's investigation shows that some significant personally identifiable information is at risk, including names, email addresses, phone numbers, full payment card information and even details on guests' reservations themselves - dates of stay, special requests and so on. 

Dave Bittner: Reports say that Britain's GCHQ has gone on the offensive against anti-vaccine propaganda. The Times says that the SIGINT agency is using techniques proven against Islamic State online activity to counter state-sponsored purveyors of vaccine disinformation. It's not a comprehensive rumor control effort but operates against state-directed disinformation only. 

Dave Bittner: According to Reuters, GCHQ is taking down hostile state-linked content and disrupting the communications of the cyber actors responsible. The campaign against which GCHQ's efforts are directed is Russian, Engineering & Technology reports. The Week suggests the motive for the disinformation is at least partly commercial, since Russia is interested in seeing widespread adoption of two vaccines developed in that country. The disinformation is directed against the COVID-19 vaccine developed in the U.K. by AstraZeneca and Oxford University. 

Dave Bittner: One might think that such disinformation would take the high-toned, friend-of-nature line that circulates in the tonier precincts of Silicon Valley or Marin County. Vaccination causes various childhood development impairments and so on. Not true, of course, although vaccines have had their troubling side effects. Consider the swine flu vaccine problems in the mid-'70s, for example. 

Dave Bittner: No, the straight line out of Moscow is a lot scarier and much more direct in terms of its proposed cause and effect. Here's the deal. 

Dave Bittner: So those eggheads at Oxford and AstraZeneca came up with this vaccine, right? But did you know that they used a chimpanzee virus to make it? Anyhoo, it stands to reason that anyone who gets the vaccine will turn into an ape, on account of they made their vaccine from, like, some chimpanzees or something. What the hey? Chimps, man. 

Dave Bittner: Edward Jenner, call your office. Maybe using cowpox wasn't such a good idea after all. Weren't there all those cattle-people mooing out there in the countryside? What? No? Well, maybe the whole ape-man risk is being overstated here, or else there's some serious mad science going on in the Urals. 

Dave Bittner: But it seems more likely that this view of vaccine trials is more informed by repeated viewings of "The Fly" - the Vincent Price version, not the Jeff Goldblum remake - than it is by the history of medicine. The whole story is more Seymour's "Fright Night" than it is the New England Journal of Medicine. 

Dave Bittner: We hope that few are persuaded by the Russian campaign. And above all, we wish GCHQ good hunting. 

Dave Bittner: And it is my pleasure to welcome back to the show Rick Howard, the CyberWire's chief analyst and chief security officer. Hello, Rick. 

Rick Howard: Hey, Dave. 

Dave Bittner: So on last week's "CSO Perspectives" episode, you made the preliminary case, and I would say a compelling case, that since containers and serverless functions are really infrastructure as code stored in the cloud, we need to protect them with the same rigor as any other collection of data we store there. 

Dave Bittner: Now, this week, you brought in some Hash Table experts to get their thoughts on this whole matter. What sort of feedback did you get from them? 

Rick Howard: Well, as per usual with the Hash Table group, Dave, my theories about how to protect our digital environments have run afoul of practical considerations and resource limitations. 

Dave Bittner: OK. 

Rick Howard: What I initially thought was important may not be. The question I wanted the Hash Table members to answer was this. Is there a high risk of material impact to your organization because you use containers or serverless functions? In other words, should you drop everything in order to focus resources on securing these digital assets? 

Rick Howard: The answer, at least for today, is probably not. 

Dave Bittner: All right. That - I have to say that's not what I was expecting. So what's their logic here? 

Rick Howard: Well, if we just look at the MITRE ATT&CK framework - which, by the way, I'm a huge fan of. You're familiar with it. 

Dave Bittner: Sure. 

Rick Howard: It's the most comprehensive open-source collection of adversary tactics, techniques and procedures in the world right now. And if you're not using it to establish your intrusion kill chain first-principle prevention strategy, you're probably failing at that. We did an entire episode on this way back in Season 1, Episode 8. But even the MITRE ATT&CK framework is silent about any container-related tactics, techniques and procedures. 

Dave Bittner: Why is that? I mean, are the bad guys not coming after it yet? What's the reality on the ground? 

Rick Howard: Yeah, at least they're not right now. And we can debate the reason why, but it's probably because it's too hard to do - not impossible, but hard. You know, adversaries have many other ways to destroy or steal data that are not nearly as complicated. 

Rick Howard: So I was talking to Roselle Safran about this at the Hash Table. She is the CEO and founder of a small startup called KeyCaliber. She uses containers to deliver her security service to her customers. And I've known Roselle for a number of years, and she has a first-class cybersecurity mind and in a former life worked as a government cyber operator in multiple functions. Here's what she had to say. 

(SOUNDBITE OF PODCAST, "CSO PERSPECTIVES") 

Roselle Safran: Well, I mean, some of it is just the infrastructure. By its nature, it implicitly has some defenses in place. And maybe that's just because it's newer technology, and so that was more built into it than with some older technology. 

Roselle Safran: For example, from the perspective of the memory and making sure that the memory is protected, there's the NX bit, so an attacker can't execute from the stack, and ASLR, where the stack is placed in random locations. It forces the attacker to have to go to, you know, return-oriented programming attacks, so they can't even get to the softball attacks. And so you have that type of infrastructure that's already in place with it, and so that helps. 
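
As a small illustration of the protections Roselle mentions, here's a sketch, assuming a Linux host, that reads the kernel's ASLR setting and prints the stack mapping for the current process; run it a few times and, with ASLR enabled, the stack base changes on every run.

```python
# Sketch (Linux only): check the system-wide ASLR setting and show the
# randomized stack mapping for this process.
from pathlib import Path

# 0 = ASLR disabled, 1 = conservative randomization, 2 = full randomization
aslr = Path("/proc/sys/kernel/randomize_va_space").read_text().strip()
print(f"kernel.randomize_va_space = {aslr}")

# The [stack] mapping address should differ between runs when ASLR is on.
for line in Path("/proc/self/maps").read_text().splitlines():
    if line.endswith("[stack]"):
        print(f"stack mapping this run: {line.split()[0]}")
```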

Rick Howard: This doesn't mean that hackers will never try to leverage this new client-server architecture. It just means that they aren't right now - and that if your organization has limited cyberdefense resources and you still have work to do preventing all the things we already know hackers do that are currently listed in the MITRE ATT&CK framework, diverting security resources away from that to containers and serverless functions is probably not the right move. 

Dave Bittner: Well, as always, there's a lot more to the conversation, so check it out and hear what the Hash Table had to say. It's the "CSO Perspectives" podcast. It is part of CyberWire Pro. You can check that out on our website. Rick Howard, thanks for joining us. 

Rick Howard: Thank you, sir. 

Dave Bittner: And I'm pleased to be joined once again by Malek Ben Salem. She's the Americas security R&D lead at Accenture Labs. Malek, it is great to have you back. I wanted to focus today on some stuff that you and your team have been tracking, and this is privacy attacks to machine learning. What's going on here? 

Malek Ben Salem: Yeah, Dave. As you know, more and more businesses are using their data to gain insights into their clients or customers and, you know, using it for predictive use cases. And that means, you know, using machine learning, if you will. And this has been expedited by the prevalence of data, but also by the capabilities - the computing capabilities available to us on the cloud. 

Malek Ben Salem: But that machine learning, whether it's performed by one company on its own or whether it's performed in collaboration with ecosystem partners, requires, in most cases, sharing data between the different parties or uploading data to the cloud. And there are a few risks associated with that, particularly privacy risks, which is what I want to focus on today. 

Malek Ben Salem: The first one comes from if a party is uploading data and storing it on the cloud in the clear, right? Obviously, there's a risk associated with that. Most companies do encrypt their data when they upload it to the cloud, but that data has to be decrypted if you want to perform any computation on it. So when it gets decrypted, then it - you know, there's a privacy risk if the data contains private information. 

Malek Ben Salem: That is just, you know, the obvious risk, but most companies do a pre-processing step where they try to anonymize the data, you know, remove any sensitive or PII data. 

Malek Ben Salem: But we've seen that that step is not enough to prevent deanonymization. There have been several attacks where the data was anonymized, but adversaries can take that data and combine it with external or third-party data to deanonymize it and re-identify the individuals whose data shows up in that dataset. 
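
To make the linkage idea concrete, here's a toy sketch with entirely hypothetical data: an "anonymized" table with names stripped is re-identified simply by joining it to a public dataset on quasi-identifiers such as ZIP code, birth year and gender.

```python
# Toy re-identification (linkage) sketch; all data here is made up.
import pandas as pd

anonymized = pd.DataFrame({
    "zip": ["21201", "21401", "20850"],
    "birth_year": [1980, 1975, 1990],
    "gender": ["F", "M", "F"],
    "diagnosis": ["flu", "diabetes", "asthma"],   # the sensitive column
})

public_records = pd.DataFrame({
    "name": ["A. Jones", "B. Smith", "C. Lee"],
    "zip": ["21201", "21401", "20850"],
    "birth_year": [1980, 1975, 1990],
    "gender": ["F", "M", "F"],
})

# Joining on the quasi-identifiers re-attaches identities to "anonymous" rows.
reidentified = anonymized.merge(public_records, on=["zip", "birth_year", "gender"])
print(reidentified[["name", "diagnosis"]])
```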

Malek Ben Salem: But those are - again, those are the, you know, the straightforward attacks. But there are more sophisticated attacks. 

Malek Ben Salem: So one of the techniques that companies use - one of the pre-processing steps they go through so that their private data isn't sitting in the clear on the cloud or on any system - is to take the raw data and turn it into what are known as features that the machine-learning model can use as input. So this is a pre-processing step that extracts from the raw data some of the identifying features that are used to train the machine-learning model. And then the party takes that feature data and uploads it to the cloud or, you know, the server where they perform the computation, instead of the raw data itself. 

Malek Ben Salem: However, even when only the features are transferred and stored on the computation server, there is a threat known as a reconstruction attack, where the adversary's goal is to reconstruct the raw private data using their knowledge of the feature vectors. 
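
As a toy illustration of the reconstruction idea, under the strong simplifying assumption that the feature extractor is a known linear map, an adversary holding only the uploaded feature vector can recover the raw input with ordinary least squares.

```python
# Toy reconstruction attack: if features = W @ raw and W is known (a strong
# assumption, for illustration only), the raw data can be recovered.
import numpy as np

rng = np.random.default_rng(0)
raw = rng.normal(size=32)            # stand-in for private raw data
W = rng.normal(size=(64, 32))        # hypothetical feature extractor

features = W @ raw                   # the only thing uploaded to the server

# The adversary solves for the raw data from the features alone.
reconstructed, *_ = np.linalg.lstsq(W, features, rcond=None)
print("max reconstruction error:", np.abs(reconstructed - raw).max())
```

Real feature extractors are nonlinear and lossy, so practical attacks, like the fingerprint and touch-gesture examples that follow, require more sophisticated inversion, but the principle is the same.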

Malek Ben Salem: Examples of that that have been demonstrated previously include reconstructing a fingerprint image from a minutiae template that contains just the features, or, you know, taking mobile device text gestures and reconstructing the touch events from features such as the velocity and the direction of the touch. 

Malek Ben Salem: Now, in both of these cases, what started as a privacy threat resulted in a security threat to an authentication system. So that's basically the third type of attack. 

Malek Ben Salem: And this, you know, this can be exacerbated by the type of machine-learning algorithm that is used. So in some cases, even if that feature data is not available but the adversary gets access to the machine-learning model that uses it, there is still exposure, because some machine-learning models store these feature vectors inside them. Models like support vector machines or k-NN, the k-nearest neighbors algorithm, keep some of the training feature vectors as part of the model itself. 

Malek Ben Salem: So if the adversary gets access to the model without getting access to the data at all, they may be able to infer some information - private information about the individuals whose data was used to build that model. 
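
A quick sketch of that last point, using scikit-learn as an assumed example library and synthetic data: a fitted support vector machine literally carries rows of the training feature data inside the model object, so sharing the model shares those vectors.

```python
# Sketch: a trained SVM exposes training feature vectors via the model itself.
# scikit-learn is an assumption here; the data is synthetic.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 4))                 # stand-in for private feature data
y = (X[:, 0] + X[:, 1] > 0).astype(int)

model = SVC(kernel="rbf").fit(X, y)

# Anyone who obtains the model object also obtains these training rows.
print("support vectors stored in the model:", model.support_vectors_.shape)
print(model.support_vectors_[:3])
```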

Dave Bittner: Wow. Well, it's a lot to unpack, but always appreciate you explaining this stuff for us. Malek Ben Salem, thanks for joining us. 

Malek Ben Salem: Thank you, Dave. 

Dave Bittner: And that's the CyberWire. For links to all of today's stories, check out our daily briefing at thecyberwire.com. And for professionals and cybersecurity leaders who want to stay abreast of this rapidly evolving field, sign up for CyberWire Pro. It'll save you time and keep you informed. Oh, what heights we'll hit. Listen for us on your Alexa smart speaker, too. 

Dave Bittner: The CyberWire podcast is proudly produced in Maryland out of the startup studios of DataTribe, where they're co-building the next generation of cybersecurity teams and technologies. Our amazing CyberWire team is Elliott Peltzman, Puru Prakash, Stefan Vaziri, Kelsea Bond, Tim Nodar, Joe Carrigan, Carole Theriault, Ben Yelin, Nick Veliky, Gina Johnson, Bennett Moe, Chris Russell, John Petrik, Jennifer Eiben, Rick Howard, Peter Kilpe, and I'm Dave Bittner. Thanks for listening. We'll see you back here tomorrow.