The CyberWire Daily Podcast 12.11.18
Ep 742 | 12.11.18

Audit finds no Chinese spy chips on motherboards. Huawei CFO hearings continue in Vancouver. Oilfield services firm’s servers attacked. Spyware and adware. Congressional hearings, reports.

Transcript

Dave Bittner: [00:00:03] An audit finds no Chinese spy chips on Supermicro motherboards. Huawei CFO Meng's hearing continues. An oil services firm's servers have been attacked. Seedworm shows some new tricks. Secure instant messaging apps may be less secure than hoped. A new adware strain's been reported. Mr. Pichai Goes to Washington, and Uncle Pennybags puts in an appearance. And the U.S. House Oversight and Government Reform Committee reports on the Equifax breach.

Dave Bittner: [00:00:38] Now I'd like to share some words about our sponsor Cylance. AI stands for artificial intelligence, of course. But nowadays, it also means "all image" or "anthropomorphized incredibly." There's a serious reality under the hype, but it can be difficult to see through to it. As the experts at Cylance will tell you, AI isn't a self-aware Skynet ready to send in the terminators. It's a tool that trains on data to develop useful algorithms. And like all tools, it can be used for good or evil. If you'd like to learn more about how AI is being weaponized and what you can do about it, visit threatvector.cylance.com and check out their report "Security: Using AI for Evil." That's threatvector.cylance.com. We're happy to say that their products protect our systems here at the CyberWire. And we thank Cylance for sponsoring our show.

Dave Bittner: [00:01:35] Major funding for the CyberWire podcast is provided by Cylance. From the CyberWire studios at DataTribe, I'm Dave Bittner with your CyberWire summary for Tuesday, December 11, 2018. Regular listeners will recall on October 4, Bloomberg reported that motherboards built by Supermicro had been compromised in a hardware attack on the company's supply chain. Small chips the size of a grain of rice were said to have been found in the motherboards, and these chips were said to have been installed to give Chinese intelligence services access to any devices that use them. Supermicro denied the report. Among Supermicro's customers were Apple and Amazon, and both of them also quickly issued strong and unambiguous denials that any such compromised hardware existed in their servers.

Dave Bittner: [00:02:24] Bloomberg did not retract its report, but some of the sources cited in the articles walked back the stronger claims attributed to them. Federal authorities, including the FBI, the Director of National Intelligence and the Department of Homeland Security, also expressed public doubt about the Chinese spy chip claims. At this point, the story is widely regarded with skepticism, and there has been little subsequent follow-up. In a letter to its customers today, Supermicro says a third-party audit of its hardware conducted by Nardello tested the company's motherboards and found none of the Chinese spy chips the Bloomberg report said were there. That said, as TechCrunch noted, the October report worked its damage. Supermicro's stock tanked shortly after its publication. Share prices have not recovered their September value.

Dave Bittner: [00:03:16] Huawei CFO Meng's bail hearing continues. A Vancouver judge did not rule yesterday as had been expected, and the process has continued into today. Ms. Meng has proposed electronic monitoring as an alternative to custody and has offered to arrange and pay for security. The proffered oversight by her husband and private security seems unlikely to convince the Supreme Court of British Columbia. It's worth noting that Ms. Meng is wanted by the U.S. for alleged sanctions violations, not, as one might think from some of the coverage, on espionage or IP theft charges. Security concerns about Huawei persist and are widely shared, but they are not directly what this case is about.

Dave Bittner: [00:04:02] There's a developing story in the oil and gas sector this week. The Italian oil services company Saipem reports that its Middle Eastern servers have sustained a cyberattack. Details remain sparse, but Saipem says it's shut down some of its IT in order to remediate and recover from the incident. The affected servers, apart from a small branch office in Aberdeen, were located in the United Arab Emirates and Saudi Arabia. Elsewhere in the oil and gas sector, and affecting other targets as well, the Seedworm espionage group continues to be active and troublesome.

Dave Bittner: [00:04:38] Researchers at security firm Symantec find that the threat actor, which they also track as MuddyWater, has deployed a new backdoor, Powermuddy; new variants of its PowerStats backdoor; a GitHub repository for storing scripts; and an array of post-compromise exploit tools. Seedworm is most active against targets in the Middle East, but it's also been found in Europe and the Americas. There's been a shift from oil and gas toward telecommunications services and government agency IT services. Symantec assesses the group's goal is espionage - collection of actionable intelligence likely to be useful against the target at some point.

Dave Bittner: [00:05:20] Researchers at Cisco's Talos unit report that secure instant messaging services may be less secure than generally believed. They found that the widely used apps WhatsApp, Signal and Telegram are, in principle, vulnerable to side-channel attacks that could expose messages to hackers. Data may be secure in transit, but during processing or at rest on a user's device, not so much. A great deal depends on the way the apps and their protocols are implemented, and many users overlook the complexity of setting them up in a secure manner. The upshot is that all three of the popular apps could be susceptible to desktop session hijacking.
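
To make that at-rest exposure concrete, here is a minimal, hypothetical sketch in Python - not part of the Talos research - that checks whether a desktop messaging client's locally stored session files are readable by other accounts on the same machine. The directory paths are assumptions and vary by platform and app version.

```python
# Illustrative sketch only -- not from the Talos research. It checks whether
# local desktop session data for messaging clients is readable by accounts
# other than the owner, which is the kind of at-rest exposure that makes
# desktop session hijacking possible. Paths below are assumptions.
import stat
from pathlib import Path

CANDIDATE_SESSION_DIRS = [  # hypothetical locations; adjust per platform
    Path.home() / ".config" / "Signal",
    Path.home() / ".config" / "WhatsApp",
    Path.home() / ".local" / "share" / "TelegramDesktop" / "tdata",
]

def readable_by_others(path: Path) -> bool:
    """Return True if anyone besides the owner can read this path."""
    mode = path.stat().st_mode
    return bool(mode & (stat.S_IRGRP | stat.S_IROTH))

for session_dir in CANDIDATE_SESSION_DIRS:
    if not session_dir.exists():
        continue
    flagged = [p for p in session_dir.rglob("*")
               if p.is_file() and readable_by_others(p)]
    if flagged:
        print(f"{session_dir}: {len(flagged)} session file(s) readable by other local users")
    else:
        print(f"{session_dir}: session files restricted to the owner")
```

Even with correct permissions, any process running as the logged-in user can still copy these files, which is why the researchers treat desktop session state as a weak point.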

Dave Bittner: [00:06:00] Controlling access to your network and data is of critical importance to every organization, but just how common are issues with third-party access? Barry Hensley is chief threat intelligence officer for SecureWorks, and he joins us to share what they're seeing.

Barry Hensley: [00:06:17] You know, if you look at it from a storage perspective, we did about a thousand incident response engagements last year. Now, those are, you know, opportunities where an organization either was breached or had an opportunity to be breached. We found that 3 percent of those were tied to some third-party supply chain challenge, meaning the avenue of approach into the environment was based upon a third-party relationship that they had. A common theme that we saw was a trust relationship that was in some cases broken, meaning, you know, if you had a relationship with some software distribution portal or some software development world or other software update mechanisms, how do you validate those downloads, as an example? Or the other thing is, if you gave a third party - you know, a managed service provider - from an IP perspective, access to your environment, how do you validate their credentials and their access in a way that's a trust-but-verify model?

Barry Hensley: [00:07:19] We took a step back and we said, you know, what are the most common things we'd recommend - in this case, what we call a holistic defense-in-depth approach based upon these various types of risks from the supply chain perspective. Some of it, I hate to say, gets you back to the basics. And so we found that in most engagements, people didn't have the right logging in place that, ultimately, would allow them to draw a conclusion. Was it their own employee? Was it some third party? How do you give those suppliers access to your environment? So now, as an example, anybody that accesses the network, especially externally from the Internet, should be doing what we call multi-factor authentication so that there's more than just a username and password that you gave them.
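
As an illustration of the kind of logging check Hensley describes - a sketch under assumed field names, not SecureWorks tooling - the following snippet flags third-party accounts that reached the environment from outside without a second factor.

```python
# Minimal sketch, not a vendor product. Field names ("user", "source", "mfa",
# "is_third_party") are assumptions about a hypothetical authentication log.
from dataclasses import dataclass

@dataclass
class AuthEvent:
    user: str
    source: str           # "internal" or "external"
    mfa: bool             # True if a second factor was presented
    is_third_party: bool  # True for vendor / managed-service accounts

def flag_weak_third_party_logins(events):
    """Return external third-party logins that relied on a password alone."""
    return [e for e in events
            if e.is_third_party and e.source == "external" and not e.mfa]

events = [
    AuthEvent("vendor-svc01", "external", mfa=False, is_third_party=True),
    AuthEvent("alice",        "internal", mfa=True,  is_third_party=False),
    AuthEvent("msp-admin",    "external", mfa=True,  is_third_party=True),
]

for e in flag_weak_third_party_logins(events):
    print(f"review: {e.user} logged in externally without MFA")
```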

Barry Hensley: [00:08:06] And then obviously, you know, how do you manage user account access or privileges? And so what access should those third-party suppliers have? And then, you know, once they, in this case, did get on the network, how do you ensure they can't, you know, what we call elevate privileges of some user beyond the access they maintain? And the last one - you know, the endpoint is the new perimeter. And so in the end, they're getting access to the first server, the first endpoint, the first host that they can gain access to. And then they're going to pivot into the network. And so from a rapid detection perspective, how do you have the ability to detect that initial, you know, compromise? And so I guess the last one is, how robust is your visibility at the endpoint?
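
To illustrate the privilege escalation and pivoting he mentions, here is another hypothetical sketch: it walks a small stream of assumed endpoint telemetry and flags a third-party account that elevates to admin or shows up on additional hosts.

```python
# Illustrative sketch under assumed telemetry, not a vendor detection rule.
# It flags two behaviours described above: a third-party account acquiring
# elevated privileges, and the same account spreading to new hosts (a pivot).
from collections import defaultdict

events = [  # hypothetical endpoint telemetry: (account, host, privilege)
    ("vendor-svc01", "web-01", "user"),
    ("vendor-svc01", "web-01", "admin"),   # privilege elevation on first host
    ("vendor-svc01", "db-03",  "admin"),   # same account appears on a new host
]

baseline_priv = {}
hosts_seen = defaultdict(set)

for account, host, privilege in events:
    if account not in baseline_priv:
        baseline_priv[account] = privilege
    elif privilege == "admin" and baseline_priv[account] != "admin":
        print(f"alert: {account} elevated to admin on {host}")
    if hosts_seen[account] and host not in hosts_seen[account]:
        print(f"alert: {account} moved from {sorted(hosts_seen[account])} to {host}")
    hosts_seen[account].add(host)
```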

Dave Bittner: [00:08:56] That's Barry Hensley from SecureWorks.

Dave Bittner: [00:09:01] A quick report from security firm Netskope this afternoon tells us that they've found an adware family they're calling CapitalInstall that's being served from Microsoft Azure blob storage, whose IP range is, unfortunately, widely whitelisted. The malware looks like a commonly used enterprise software installer. Netskope says the malware makes its criminal masters money through ads relating to altcoin mining and bogus search engines. Its effect on the victims is mostly productivity loss and consumption of computational resources.

Dave Bittner: [00:09:34] French authorities investigate possible Russian influence over ongoing yellow vest unrest. RT, the news service formerly known as Russia Today and one of the Russian government's principal information outlets, objects that covering the news isn't meddling. And that's a fair point. Simply saying that there are demonstrations and some rioting in France and discontent over President Macron's policies surely doesn't constitute interference or disinformation.

Dave Bittner: [00:10:02] But that's not what investigators are looking into. They are inquiring into whether fictitious foreign personas are trolling in social media. The chum tossed out in this case would be mainly the hashtag #GiletJaune - that is, yellow vest. And protesters have certainly made use of that in a grassroots way. The opportunistic conduct of information operations would seem to make it possible that such trolling has made its own contribution to the unrest. How large that contribution might be is unknown. Social upheaval of this kind is very commonly overdetermined in any case.

Dave Bittner: [00:10:38] Google CEO Sundar Pichai makes his appearance before the House Judiciary Committee today to discuss Google's data collection, use and filtering practices. His prepared remarks emphasize Mountain View's American family romance, founded by two young dreamers - one a Michigander, the other a Marylander - coming together at Stanford to dream big. They welcome employees of all viewpoints. They built jobs, made immigrants profoundly grateful for this land of opportunity and so on. Congress is interested in hearing about data privacy - they think Google may have a problem with this - and bias - ideological, gender or any other form bias may take.

Dave Bittner: [00:11:18] Pichai stressed Google's neutrality, to Democratic satisfaction and Republican skepticism, with respect to its filtering algorithms. He also came in for questioning over the company's privacy policies, given added point by yesterday's disclosure that Google Plus had exposed some 53 million users' data to app developers through an unduly permissive API. The company has said it's found no evidence that the data was misused, but it has accelerated its plans to retire Google Plus, now destined for an even quicker trip to the scrap heap. Pichai says the company supports federal privacy legislation.

Dave Bittner: [00:11:56] The hearings unfolded today with the usual street theater one sees in Capitol Hill hearing rooms. For example, a guy dressed up as Uncle Pennybags from the Monopoly game was there in the audience behind Pichai, twirling his mustache and mugging for C-SPAN. Uncle Pennybags made his first appearance in the congressional peanut gallery during last year's Equifax hearings. He's interested, he says, in showing, by his presence, that industry is incapable of self-regulation. It's not clear how this follows, but the monocle, top hat and handlebar mustache are a nice look for him. Take a ride on the Reading, sir. If you pass Go, collect $200.

Dave Bittner: [00:12:35] The House has released two reports on its investigation of the Equifax breach. The Oversight and Government Reform Committee's report found that the breach was the preventable result of the credit bureau's internal security missteps, thus confirming the conclusion most observers have also reached. A report by the committee's Democratic minority staff raked the majority over the coals for not doing more for data protection, but such disputes are the small change of partisan combat in Washington. There's no dissent from the basic findings. Do not pass Go, as Uncle Pennybags might put it. And no, free parking doesn't entitle you to anything either.

Dave Bittner: [00:13:18] It's time to tell you about our sponsor ThreatConnect. With ThreatConnect's in-platform analytics and automation, you'll save your team time while making informed decisions for your security operations and strategy. Find threats, evaluate risk and mitigate harm to your organization. Every day, organizations worldwide leverage the power of ThreatConnect to broaden and deepen their intelligence, validate it, prioritize it and act on it. ThreatConnect offers a suite of products designed for teams of all sizes and maturity levels. Built on the ThreatConnect platform, the products provide adaptability as your organization changes and grows. Want to learn more? Check out their newest white paper titled "Threat Intelligence Platforms: Open Source vs. Commercial." As a member of a maturing security team evaluating threat intelligence platforms, or TIPs, you may be asking yourself whether you should use an open-source solution, like the Malware Information Sharing Platform, or MISP, or buy a TIP from one of the many vendors offering solutions. In this white paper, ThreatConnect explains the key technical and economic considerations every security team needs to make when evaluating threat intel solutions to help you determine which is right for your team. To read the paper, visit threatconnect.com/cyberwire. That's threatconnect.com/cyberwire. And we thank ThreatConnect for sponsoring our show.

Dave Bittner: [00:14:54] And I'm pleased to be joined once again by Professor Awais Rashid. He's a professor of cybersecurity at the University of Bristol. Awais, welcome back. Today we wanted to touch on some of the things people have to consider in their decision-making, particularly when it comes to risk, and some of the challenges that come with using data there. What can you share with us?

Awais Rashid: [00:15:16] We live in a data-intensive world at the moment. We also talk about big data and AI transforming everything. But if you look at the projections of something like 30 billion devices or more by 2021, and other projections which talk about something like 278 exabytes of data per month by the same period, then we are looking at, potentially, a large amount of information that we can actually collect from the underlying infrastructure. The challenge is: how do you make sense of all this data? There is always a tendency to think that we can actually log everything and mine, effectively, the living daylights out of it. But there is a big challenge there as to how we curate this information and actually be more selective about what information from the infrastructure, or the applications and services that run in that infrastructure, is really pertinent to thinking about its security state.

Dave Bittner: [00:16:13] So when it comes to managing risk, what sort of approach are you advocating?

Awais Rashid: [00:16:19] I think risk is ultimately a decision-making problem, because we can't remove risk. But how we inform our risk decision-making is very, very important. And if we are not careful in the way we curate the data, and what data we actually bring from the underlying system or infrastructure into risk decision-making, then we - no pun intended - risk overloading the decision-makers in the first instance with information. And as a result, it makes it really hard for them to make sense of such information. I think the key here is a good balance between automated, semi-automated or human decision-making.

Awais Rashid: [00:16:59] And at the moment, we actually do not necessarily know as to which bits of it can be automated and how automation can provide a value to the human decision-makers so that they can defer some of the decisions because the information that comes and the decisions that come from automation and AI techniques will provide very valuable insights. And where do we defer to the human? - because they can look at the bigger picture, the social-economic business consequences of some of the decisions that they are making with regards to risk.

Dave Bittner: [00:17:32] Yeah. I mean, it strikes me that in this attempt to separate the signal from the noise that you sort of need a virtuous feedback loop where the - if you have automation providing things to the humans, then the humans need to be able to provide feedback to the automated systems to say, this was valuable to me or you missed the mark here.

Awais Rashid: [00:17:51] Absolutely. And humans are very good at spotting patterns that computers sometimes can't. And I think the key challenge, really, is that we need to make sure we get that feedback loop right. Over the years, you know, mistakes have been made where the knowledge of so-called laypersons in the organization - not security specialists - when they are seeing some information coming through, is often disregarded because they are not security specialists. However, they understand the process within which they're working very, very well. And they're often much better at spotting anomalies than perhaps a security system would be - I'm not saying always. And I think it's how we get that feedback loop right, and getting the expert - the domain expert - to say which events would be anomalous or non-anomalous, that would actually create a more holistic loop between the people and the machine in terms of spotting events and, hence, informing risk decision-making.
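
As a toy illustration of the feedback loop Rashid describes - an assumption about how it might look, not anything prescribed in the interview - the following sketch lets an analyst's anomalous-or-benign labels adjust the threshold of a crude automated scorer.

```python
# Minimal human-in-the-loop sketch. The scoring rules, events and labels are
# all hypothetical; the point is that expert feedback tunes the automation.
def score(event):
    """Crude automated anomaly score: off-hours activity plus unusual volume."""
    s = 0.0
    if event["hour"] < 6 or event["hour"] > 22:
        s += 0.5
    if event["bytes_out"] > 1_000_000:
        s += 0.5
    return s

threshold = 0.5
events = [
    {"id": 1, "hour": 3,  "bytes_out": 2_000_000},
    {"id": 2, "hour": 23, "bytes_out": 10_000},
    {"id": 3, "hour": 14, "bytes_out": 5_000_000},
]
expert_labels = {1: "anomalous", 2: "benign", 3: "anomalous"}  # human feedback

for event in events:
    if score(event) >= threshold:
        label = expert_labels[event["id"]]
        print(f"event {event['id']} flagged; expert says {label}")
        # Nudge the threshold: raise it after false positives, lower it after
        # confirmed anomalies, so the automation tracks the expert's judgement.
        threshold += 0.05 if label == "benign" else -0.05
        threshold = min(max(threshold, 0.1), 1.0)

print(f"adjusted threshold: {threshold:.2f}")
```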

Dave Bittner: [00:18:48] Professor Awais Rashid, thanks for joining us.

Dave Bittner: [00:18:54] And that's the CyberWire. Thanks to all of our sponsors for making the CyberWire possible, especially to our sustaining sponsor Cylance. To find out how Cylance can help protect you using artificial intelligence, visit cylance.com. And Cylance is not just a sponsor. We actually use their products to help protect our systems here at the CyberWire. And thanks to our supporting sponsor VMware, creators of Workspace ONE Intelligence. Learn more at vmware.com.

Dave Bittner: [00:19:22] The CyberWire podcast is proudly produced in Maryland out of the startup studios of DataTribe, where they're co-building the next generation of cybersecurity teams and technology. Our CyberWire editor is John Petrik; social media editor, Jennifer Eiben; technical editor, Chris Russell; executive editor, Peter Kilpe. And I'm Dave Bittner. Thanks for listening.