The CyberWire Daily Podcast 9.20.19
Ep 932 | 9.20.19

Coordinated inauthenticity in five countries draws action from Twitter. Cryptomining continues. Huawei fights its ban in U.S. federal court. Notes from CISA's Cybersecurity Summit.

Transcript

Dave Bittner: [00:00:03] We've got a quick look at CISA's National Cybersecurity Summit. A big new distributed denial-of-service vector is reported. Medical servers leave patient information exposed to the public internet. Huawei is suspended from FIRST as it argues its case in a U.S. federal court. And one of the challenges of engaging ISIS online is that it relies so heavily on commercial infrastructure - it's got to be targeted carefully. 

Dave Bittner: [00:00:34]  And now a word from our sponsor ExtraHop, the enterprise cyber analytics company delivering security from the inside out. The cloud may help development and application teams move fast, but for security teams already dealing with alert fatigue, tool sprawl and legacy workflows, cloud adoption means a lot more stress. You're building your business cloud first. It's time to build your security the same way. ExtraHop's Reveal(x) provides network detection and response for the hybrid enterprise with complete visibility, real-time detection and guided investigation. Reveal(x) helps security teams unify threat detection and response across on-prem and cloud workloads so you can protect and scale your business. Learn more at extrahop.com/cyber. That's extrahop.com/cyber, and we thank ExtraHop for sponsoring our show. 

Dave Bittner: [00:01:32]  Funding for this CyberWire podcast is made possible in part by Bugcrowd, connecting organizations with the top security researchers, pen testers and white hat hackers in the world to identify 10 times more vulnerabilities than scanners or traditional pen tests. Learn more about how their award-winning platform provides actionable insights like remediation advice to help fix faster while methodology-driven assessments ensure compliance needs are met at bugcrowd.com. 

Dave Bittner: [00:01:59]  From the CyberWire studios at DataTribe, I'm Dave Bittner with your CyberWire summary for Friday, September 20, 2019. U.S. federal agencies are taking election security seriously, as we heard yesterday at the second annual National Cybersecurity Summit organized by the Cybersecurity and Infrastructure Security Agency, CISA. CISA and its partners are concerned not only with direct hacking of voting systems but also with countering influence operations mounted by hostile foreign governments. Discussions were particularly aware of the ways in which social media lend themselves to confirmation bias and the ways in which such bias can be used to create or exploit fissures in civil society. 

Dave Bittner: [00:02:43]  CISA director Christopher Krebs also offered a suggestion to the security industry - please stop selling fear. Sure, it can work for marketing sometimes, although even there, it's subject to diminishing returns as the customer slides into learned helplessness. But it's an impediment to sensible discussions and planning that could actually avert damage. This is especially true, he thought, with election security, where citizens' confidence in their institutions is a principal target. He didn't ask why we should do the opposition's work for them, but we will. If the bad actors want to destroy trust and confidence, let them try to do so without the security industry scoring a lot of own goals on their behalf. So keep calm and carry on. 

Dave Bittner: [00:03:29]  Akamai reports that a new distributed denial-of-service vector, WS-Discovery, a UDP amplification technique, is being exploited in the wild. The approach is a good one from the attackers' point of view, since it's enabling them to achieve amplification rates of up to 15,300%. Now, we don't have an intuitive grasp of how big that is either. It's like astronomical distances. You've got no feel for them at all, but you're pretty sure they're pretty big. This, Akamai points out, gives the attack technique the fourth-highest reflected amplification factor on the DDoS leaderboard. 
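
To make that figure a little more concrete, here's a minimal sketch - ours, not Akamai's - of how a reflected-amplification factor is conventionally computed: the size of the reflected response divided by the size of the spoofed request. The byte counts below are hypothetical, chosen only to illustrate the arithmetic.

```python
# Hypothetical sizes for a single spoofed WS-Discovery probe and its reply.
request_bytes = 18       # tiny UDP probe sent to the reflector
response_bytes = 2754    # much larger reply, directed at the victim

# Amplification factor: response bytes the victim receives per byte
# the attacker actually transmits.
factor = response_bytes / request_bytes
print(f"amplification: {factor:.0f}x ({factor * 100:,.0f}%)")
# A 15,300% rate is a factor of 153: every byte the attacker sends
# becomes roughly 153 bytes aimed at the target.
```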

Dave Bittner: [00:04:06]  There's been another case of misconfigured servers exposing private information to public inspection. Researchers at Greenbone Networks have found a very large number of medical images - radiological images, for the most part - sitting out there online. Greenbone looked at 2,300 picture archiving and communication systems - servers based on the DICOM protocol - and found that some 400 million images belonging to 24.5 million patients were easily accessible. 
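
For a sense of how exposure like this gets measured, here's a minimal sketch of checking whether a DICOM server will talk to an anonymous caller, using the open-source pynetdicom library to send a C-ECHO - the protocol's equivalent of a ping. This is our illustration, not Greenbone's actual methodology, and the host address is a placeholder.

```python
from pynetdicom import AE

# Placeholder address; 104 and 11112 are common DICOM ports.
HOST, PORT = "203.0.113.7", 104

ae = AE()
# Request the Verification SOP Class (DICOM's echo service) by its UID.
ae.add_requested_context("1.2.840.10008.1.1")

assoc = ae.associate(HOST, PORT)
if assoc.is_established:
    # If an unauthenticated association succeeds and the echo is accepted,
    # the server is answering anyone on the internet who asks.
    status = assoc.send_c_echo()
    print(f"C-ECHO status: 0x{status.Status:04X}" if status else "no response")
    assoc.release()
else:
    print("association rejected - server wants more than a bare connection")
```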

Dave Bittner: [00:04:36]  Why would someone care about this? Apart from being sensitive about your X-rays, there are several good reasons. The exposed files were commonly associated with patient data that included a full name, date of birth, date of examination, what the researchers call the scope of the investigation, the type of imaging, the attending physician, the health care facility where the procedures were performed and the number of images generated during the procedures. One often thinks first of identity theft in such cases, and of course, that's a possibility, but this sort of information is also very useful in social engineering. Suppose you're in for medical imaging, which is often associated with serious and frightening conditions. Your guard will be down if you receive an email or phone call that appears to be from the doctor or the tech who took the X-rays or MRI. That's the bigger problem here. 

Dave Bittner: [00:05:30]  GDPR created huge incentives for companies to make sure they met data privacy regulations by the implementation deadline. Still, there are some areas where they are lagging behind. David Talaga is from data integrity and integration firm Talend, and he offers his insights. 

David Talaga: [00:05:48]  It was the GDPR one-year anniversary back in May 2019. At that time, the European Data Protection Board told us there were 90,000 complaints. Most complaints were coming from telemarketing use cases, promotional email, video surveillance - that kind of thing. 

Dave Bittner:  Meanwhile, Huawei argues that sanctions against the company amount to an unconstitutional bill of attainder, the Wall Street Journal reports. This argument is similar to the one Kaspersky unsuccessfully raised against its own ban from U.S. federal networks. A bill of attainder is an unconstitutional punishment imposed on a legal person by legislative action as opposed to a court. It seems unlikely that Huawei will enjoy any more success with this argument than Kaspersky did. 

Dave Bittner: [00:06:39]  The Cybersecurity and Infrastructure Security Agency's Second Annual National Cybersecurity Summit wraps up today just outside Washington, D.C. In a keynote delivered Wednesday, CISA director Chris Krebs outlined what the new agency has achieved since it was set up last year. Krebs cited a number of directives and executive orders that have been passed, and he pointed to the series of indictments against threat actors around the world. As an example of the effectiveness of these measures, he said that, quote, "indictments of the SamSam ransomware actors have stopped SamSam ransomware attacks worldwide." He cited these achievements in the course of advocating what amounts to a whole-of-nation approach, with strong cooperation between government and the private sector. Krebs stressed the growing importance of cooperation between the public and private sectors in defending against threats - quote, "the government's not going to solve this problem alone. This is a national problem set," end quote. 

Dave Bittner: [00:07:35]  Krebs wants to prepare for a large-scale cyberattack before it happens. Relating such an event to a natural disaster, he said we know how to prepare for hurricanes because we know what happens when a hurricane hits. We don't have that level of knowledge when it comes to a cyber event. But he said the spate of ransomware attacks against government targets this summer came pretty close to a large-scale event. One of the threats CISA is preparing for is the possibility that ransomware could be deployed against voter registration databases during the 2020 election. 

Dave Bittner: [00:08:08]  One sort of private sector contribution Krebs would discourage, however, is FUD - fear, uncertainty and doubt. He pointedly asked the cybersecurity industry to stop selling fear. He acknowledged that it's an effective marketing tactic, but said we need to remove the hysteria and have measured and reasonable conversations about threats, particularly those surrounding election security. The threats to infrastructure are undeniably real, but self-interested alarmism doesn't help and only serves to drive down voter confidence. 

Dave Bittner: [00:08:45]  And now a word from our sponsor, Dragos, the leaders in industrial cybersecurity technology. Threats to electric infrastructure are progressing in both frequency and sophistication. In their latest whitepaper and webinar, Dragos re-analyzes the 2016 Ukraine cyberattack to reveal previously unknown information about the Crashoverride malware, its intentions and why it has far more serious and complex implications for the electric community than originally assessed. Learn more about Crashoverride and what defenses to take to combat future sophisticated cyberattacks by reading the whitepaper at dragos.com/white-papers or watching their webinar at dragos.com/webinars. To learn more about Dragos' intelligence-driven approach to industrial cybersecurity, register for a free 30-day trial of their ICS threat intelligence at dragos.com/worldview. And we thank Dragos for sponsoring our show. 

Dave Bittner: [00:09:56]  And joining me once again is Malek Ben Salem. She's the senior R&D manager for security at Accenture Labs. Malek, it's always great to have you back. I wanted to touch base with you on the news we've been seeing lately when it comes to facial recognition systems. I wanted to get your take on where we are, what the technology is and where things stand. 

Malek Ben Salem: [00:10:16]  Yeah, so facial recognition technology has spread widely over the last decade, especially due to advances in big data, deep convolutional networks and graphics processing units, or GPUs. And we see them being used widely. You know, most people know them from social networking platforms, where pictures of people - people's faces - get tagged. They're used, you know, to spot missing people, to catch slackers who lie about the hours they spend in the office. Most recently, they've been deployed at - I believe - the Hyderabad Airport, so you can use your face now as your boarding card. So the uses continue to grow thanks to the advances in computational power and to deep learning. But there are issues with the technology itself. 

Dave Bittner: [00:11:24]  What kind of concerns are you tracking? 

Malek Ben Salem: [00:11:25]  Well, there's obviously the privacy concern - the fact that these technologies are being used everywhere, not necessarily with people's consent. As a matter of fact, just last week, one school was fined in Europe because it used facial recognition systems to track the presence of students in the school. This was in Sweden, and, you know, a fine of about 20,000 euros was issued against the school because of that use. 

Malek Ben Salem: [00:12:04]  But beyond the privacy concerns, facial recognition systems, just like any machine learning systems, you know, reflect the data that they get trained with. And because a lot of the data that they were trained with was not reflective of entire populations, they end up having biased results. So no matter what accuracy improvements they've been able to achieve overall, across, you know, the widest population, for certain demographic groups they don't perform as well, which makes them unreliable. So if we think about uses in law enforcement, for instance, to match certain faces with people of interest or people who have committed crimes before, then it has been noted that people from certain demographic populations are more likely to be matched to the faces of interest. 
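
As a toy illustration of the measurement behind that point, one can compare false-match rates per demographic group rather than in aggregate. This sketch is ours, and the evaluation records below are invented purely to show the computation.

```python
from collections import defaultdict

# Invented records: (demographic group, system declared a match, true match).
results = [
    ("group_a", True, False), ("group_a", False, False), ("group_a", True, True),
    ("group_b", True, False), ("group_b", True, False), ("group_b", False, False),
]

# False-match rate per group: wrongly declared matches / all true non-matches.
counts = defaultdict(lambda: [0, 0])  # group -> [false_matches, non_matches]
for group, predicted, actual in results:
    if not actual:
        counts[group][1] += 1
        if predicted:
            counts[group][0] += 1

for group, (false_matches, non_matches) in counts.items():
    print(f"{group}: false-match rate {false_matches / non_matches:.0%}")
# An aggregate accuracy number can hide the fact that one group's
# false-match rate is much higher than another's.
```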

Dave Bittner: [00:13:14]  Yeah. It seems like that's a high-risk proposition there, where that's a situation where it's really important to get it right. 

Malek Ben Salem: [00:13:22]  Yeah, absolutely. Absolutely. And that is why we need to take a look back at the data sets that are used to train these facial recognition systems to address this bias problem, address this false positive problem when dealing with watch lists. 

Dave Bittner: [00:13:43]  Now, is this something that you think, as time goes on, the reliability is going to improve or are we ever going to see these get to the point where we feel like we can trust them? 

Malek Ben Salem: [00:13:54]  I think so. I think the technology will continue to improve. For instance, we know that, up to this point, these systems have had difficulty distinguishing twins. But they can be complemented with certain techniques so that they're able to distinguish the faces of twins - for instance, by looking at, you know, pores within the twins' faces and computing the distances between (laughter) those pores, they may be able to get additional information and build additional discriminative power between the faces of twins. Another thing that can be leveraged is how people walk. If we're not just looking at the face of the person but at, you know, an entire video of a person walking or moving, then we're able to improve the accuracy of these algorithms and these systems that way. 
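
Here's a minimal sketch of the pore-distance idea as we understand it: given point coordinates extracted from two face images, the pairwise distances between them form a simple geometric signature that can differ even between near-identical faces. The coordinates below are hypothetical, and real systems would use far richer features.

```python
import itertools
import math

# Hypothetical pore/landmark coordinates (x, y) from two face images.
twin_a = [(10.0, 12.1), (15.2, 18.0), (22.5, 14.3)]
twin_b = [(10.1, 12.0), (15.0, 18.4), (22.9, 13.8)]

def pairwise_distances(points):
    """Distances between every pair of points - a simple geometric signature."""
    return [math.dist(p, q) for p, q in itertools.combinations(points, 2)]

sig_a = pairwise_distances(twin_a)
sig_b = pairwise_distances(twin_b)

# Even faces that look alike yield slightly different signatures,
# giving a matcher extra discriminative power.
difference = sum(abs(a - b) for a, b in zip(sig_a, sig_b))
print(f"signature difference: {difference:.2f}")
```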

Dave Bittner: [00:15:01]  All right. Well, it's something that'll continue to develop and certainly merits keeping an eye on. Malek Ben Salem, thanks for joining us. 

Malek Ben Salem: [00:15:10]  Thank you, Dave. 

Dave Bittner: [00:15:15]  Now a word from our sponsor, KnowBe4. Email is still the No. 1 attack vector the bad guys use, with a whopping 91% of cyberattacks beginning with phishing. But email hacking is much more than phishing and launching malware. Find out how to protect your organization with an on-demand webinar by Roger A. Grimes, KnowBe4's data-driven defense evangelist. Roger walks you through 10 incredible ways you can be hacked by email and how to stop the bad guys. And he also shares a hacking demo by KnowBe4's Chief Hacking Officer Kevin Mitnick. So check out the 10 incredible ways, and learn how silent malware launch, remote password hash capture and rogue rules work; why rogue documents, establishing fake relationships and compromising a user's ethics are so effective; details behind clickjacking and web beacons; and how to defend against all of these. Go to knowbe4.com/10ways to watch the webinar. That's knowbe4.com/10ways. And we thank KnowBe4 for sponsoring our show. 

Dave Bittner: [00:16:30]  My guest today is Henry Harrison. He is chief technology officer at Garrison, a company that offers secure isolation technology using a technique called hardsec. We asked Henry Harrison to explain what hardsec is, what it's good for and where it came from. 

Henry Harrison: [00:16:46]  If you go back, like, two decades ago and you looked at that kind of national security space in the world's kind of leading nations, then really pretty much the only cybersecurity tool they trusted was the air gap, and they didn't trust any of the software that was around. But obviously, just using air gaps as your approach to cybersecurity is incredibly inefficient and causes all manner of business problems. So a lot of effort has gone in within that community into looking at technologies that they would be able and would be willing to trust. 

Henry Harrison: [00:17:18]  And hardsec, basically, is the - it comes from one key insight, which is that the reason the exploitation of vulnerabilities in software is such a problem is because of the very nature of software; it's because software is a concept that is based around the Turing machine. You have these hardware platforms that are essentially universal Turing machines, and they'll do absolutely anything you want them to, provided you give them the right software. And that's also the great - you know, the great opening for an attacker. So if you can trick the software that's running, you can get the Turing machine to do what you want it to do instead. 

Henry Harrison: [00:17:52]  And the - you know, the objective with hardsec was to say, well, the strongest way of doing security is to use non-Turing machine approaches - to use less sophisticated digital logic, the simple state machines, simple combinatorial logic - to implement security controls, at which point we don't have that inherent vulnerability issue that we've got associated with software. And, you know, in some ways, that's not a new thing - right? - because, you know, processor manufacturers have been building core security features like memory isolation and NX bits and VM support and so on using non-Turing machine logic inside their hardware. But the insight of hardsec was that we could make all that field-programmable by using a different type of silicon device called the field-programmable gate array, or an FPGA. 
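
To give a feel for the contrast Harrison is drawing, here's a toy model of our own - not Garrison's - of the kind of fixed, non-Turing-complete validator he describes. It's written in Python for readability, though real hardsec logic would be gates and registers on an FPGA, and the message format is invented: a header byte, exactly four digits, then a newline.

```python
# Toy model of non-Turing-machine validation: a fixed finite state machine
# that accepts only messages of the form 'H' + exactly four digits + '\n'.
# There is no general-purpose instruction stream for an attacker to hijack -
# every input character simply steps a small, closed transition table.

def step(state: str, ch: str) -> str:
    if state == "start":
        return "d1" if ch == "H" else "reject"
    if state in ("d1", "d2", "d3", "d4"):
        nxt = {"d1": "d2", "d2": "d3", "d3": "d4", "d4": "end"}[state]
        return nxt if ch.isdigit() else "reject"
    if state == "end":
        return "accept" if ch == "\n" else "reject"
    return "reject"  # accept and reject absorb any further input

def validate(msg: str) -> bool:
    state = "start"
    for ch in msg:
        state = step(state, ch)
    return state == "accept"

print(validate("H1234\n"))  # True - well-formed message
print(validate("H12x4\n"))  # False - malformed input lands in reject
```

However the input is crafted, the machine can only end in accept or reject; it cannot be talked into doing something else, which is the property Harrison attributes to non-Turing machine logic.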

Dave Bittner: [00:18:42]  I mean, if you'll forgive me, I'm reminded of the original "Pong" arcade machine, which, my understanding is, was hardwired to play "Pong" and only "Pong." You couldn't, you know, reprogram it to play "Pac-Man" or "Asteroids" or anything else. It was a circuit soldered together on a board to do only that one thing. Is that the sort of thing we're talking about here? 

Henry Harrison: [00:19:06]  Well, so that's certainly true - "Pong" is very, very secure, right? And as you say, it's only going to play "Pong." But nobody wants to return to that world where we have machines that can only play "Pong." We can basically kiss goodbye to decades of innovation if we try and do that, and we're certainly not going to innovate anything more, because the economics don't work, right? We can't go around building special hardware for every job that we need to do. That's simply not going to work. 

Henry Harrison: [00:19:29]  And that's why hardware-level security has historically been something that has applied to very, very specific things that are universal, right? So, for example, virtual memory protections - that's a tool that's used by, you know, all manner of different applications. And so it's built into processors and is universal. We can afford to take on the manufacturing cost of building that into the processors because everybody uses it. But it's not a good way to solve a whole broader range of security problems, because, you know, we just can't justify building hardware for them. 

Henry Harrison: [00:20:04]  And so this trick of using field-programmable gate arrays, FPGAs, really allows us to get the best of both worlds. So you can get the inherent security that comes from building something that can only do one thing, and yet at the same time achieve the seemingly impossible, which is to make it actually reprogrammable, so you can have a single piece of hardware that does multiple different tasks at different times depending on what logic you tell it to have. 

Dave Bittner: [00:20:28]  Help me understand how that's not merely shifting the security back a layer. If you can still program that gate array - right? - isn't there an issue there? 

Henry Harrison: [00:20:39]  Yeah. Well, that couldn't have been a better question, because the real key thing about an FPGA is that you can reprogram it - hence, it's called field-programmable - but you can only reprogram it using very specific pins on the device. And so the security architecture for hardsec says, OK, what you need to do above all is take those pins out to a dedicated management interface - right? - an out-of-band management interface, so that the FPGA can only be reprogrammed by somebody who's got access to the management interface or to a network that's connected to that management interface. And then if the FPGA is processing inputs that come from other pins that would be connected to the internet or connected to, you know, a corporate network or whatever, then data that's coming through that physical interface can't reprogram the FPGA. 

Henry Harrison: [00:21:26]  So what we've done is we've isolated the reprogramming capability. And then we're able to say, OK, we can apply all manner of restrictions on which people are allowed to reprogram it, under what circumstances, what monitoring we can do around them, in what physical scenarios and so on, just as you would with a typical kind of data center out-of-band management network. 

Dave Bittner: [00:21:45]  So what are the applications for which hardsec is the right choice? And are there applications where it's not the right choice? 

Henry Harrison: [00:21:54]  Yeah. So it's definitely not the right choice - right? - for building your next-generation machine learning artificial intelligence platform that requires, you know, constant innovation and so on. Fundamentally, most things are going to continue to be built using software. The real role that hardsec plays is, above all, around input sanitization. I mean, everybody's used to input sanitization in the context of web development, where we say, OK, well, we need to make sure that sequences are escaped and so on to avoid vulnerabilities like SQL injection. But actually, there's a much broader scope for input sanitization where, instead of trying to say we're going to detect bad things and stop them, what we do instead is say, OK, we're going to assume everything is bad, and then we're going to transform it into a form which we can validate to be good. 

Henry Harrison: [00:22:44]  And that's a pattern, in fact, that was published last year by the U.K.'s National Cyber Security Centre, which is part of the GCHQ intelligence agency. They called it the pattern for safely importing data, and they talk about transforming data into a format where it can be verified before you bring it in. That can be applied to all manner of different things. So it can be applied to structured data like REST APIs and JSON schemas, XML schemas. It can be applied to files - there are companies out there building that kind of approach for file sanitization. Other companies are doing it around interactive human interface streams as well - so, kind of like, video sanitization and GUI sanitization. 
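
Here's a minimal sketch of that transform-then-verify idea as we read the NCSC pattern: rather than scanning untrusted JSON for known-bad content, rebuild a fresh object containing only fields that positively pass verification. The schema and field names are hypothetical, invented for illustration.

```python
import json

# Hypothetical strict schema: field name -> (required type, validity predicate).
SCHEMA = {
    "username": (str, lambda v: v.isalnum() and len(v) <= 32),
    "age": (int, lambda v: 0 <= v <= 150),
}

def import_untrusted(raw: bytes) -> dict:
    """Assume the input is hostile; emit only a freshly built, verified object."""
    parsed = json.loads(raw)
    if not isinstance(parsed, dict):
        raise ValueError("rejected: top-level value must be an object")
    clean = {}
    for field, (ftype, is_valid) in SCHEMA.items():
        value = parsed.get(field)
        if not isinstance(value, ftype) or not is_valid(value):
            raise ValueError(f"rejected: field {field!r} failed verification")
        clean[field] = value
    return clean  # anything outside the schema never crosses the boundary

print(import_untrusted(b'{"username": "alice", "age": 34, "extra": "dropped"}'))
# {'username': 'alice', 'age': 34}
```

The design choice is the one Harrison describes: nothing is passed through by default, and only data that survives transformation into a verifiable form reaches the software behind the boundary.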

Henry Harrison: [00:23:28]  And hardsec really plays into that role there, where you have data that you're going to assume is potentially risky. You're going to transform it into a format which is then easy to verify using a hardsec-based platform. And you've then got a really secure way of knowing that what emerges from that hardsec platform has a very strong guarantee of being safe, having been sanitized and made ready to pass on to your software systems - which, of course, have vulnerabilities in them. 

Dave Bittner: [00:23:58]  That's Henry Harrison. He's chief technology officer at Garrison. 

Dave Bittner: [00:24:06]  And that's the CyberWire. Thanks to all of our sponsors for making the CyberWire possible, especially our supporting sponsor, ObserveIT, the leading insider threat management platform. Learn more at observeit.com. 

Dave Bittner: [00:24:19]  The CyberWire podcast is proudly produced in Maryland out of the startup studios of DataTribe, where they're co-building the next generation of cybersecurity teams and technology. Our amazing CyberWire team is Stefan Vaziri, Kelsea Bond, Tim Nodar, Joe Carrigan, Carole Theriault, Nick Veliky, Bennett Moe, Chris Russell, John Petrik, Jennifer Eiben, Peter Kilpe. And I'm Dave Bittner. Thanks for listening. We'll see you tomorrow.