The CyberWire Daily Podcast 12.14.18
Ep 745 | 12.14.18

False flags and real flags. ISIS claims the Strasbourg killer as one of its soldiers. A bogus bomb threat circulates by email.

Transcript

Dave Bittner: [0:00:03] False-flag cyberattacks mimic state actors, especially Chinese state actors. Chinese intelligence services are prospecting U.S. Navy contractors. Russia's Fancy Bear continues its worldwide phishing campaign. ISIS claims the career criminal responsible for the Strasbourg Christmas market killings as one of its soldiers. And a bogus bomb threat is being circulated by email - call the technique "boomstortion."

Dave Bittner: [0:00:36] Now I'd like to share some words about our sponsor, Cylance. AI stands for artificial intelligence, of course, but nowadays, it also means "all image" or "anthropomorphized incredibly." There is a serious reality under the hype, but it can be difficult to see through to it. As the experts at Cylance will tell you, AI isn't a self-aware Skynet ready to send in the Terminators. It's a tool that trains on data to develop useful algorithms. And like all tools, it can be used for good or evil. If you'd like to learn more about how AI is being weaponized and what you can do about it, visit threatvector.cylance.com and check out their report, "Security: Using AI for Evil." That's threatvector.cylance.com. We're happy to say that their products protect our systems here at the CyberWire, and we thank Cylance for sponsoring our show.

Dave Bittner: [0:01:33] Major funding for the CyberWire podcast is provided by Cylance.

Dave Bittner: [0:01:36] From the CyberWire studios at DataTribe, I'm Dave Bittner with your CyberWire summary for Friday, December 14, 2018. Happy Friday, everybody. Thanks for joining us.

Dave Bittner: [0:01:47] China has come in for considerable criticism in recent weeks for its cyber operations, particularly those devoted to industrial espionage. It has displaced, at least for now, Russia as the prime adversary in American policymakers' public statements, as we've heard this week in testimony and comment before the U.S. Senate Judiciary Committee. That China is an assertive, indeed aggressive, cyber power isn't really open to serious question, but criminals are increasingly flying Chinese false flags in attacks that have little or nothing to do with Beijing.

Dave Bittner: [0:02:21] Fifth Domain notes that this is an attractive ploy for criminals interested in deflecting attention from themselves. It's particularly easy to sail under false Chinese colors, not only because a lot of people are disposed now to believe that if it's hacking, it's probably China, but also because Chinese intelligence services commonly make use of widely available tools that many criminal hackers can get their hands on.

Dave Bittner: [0:02:46] Attacks in Russia also suggest that criminals are trying to pass themselves off as intelligence services, the better to deflect official suspicion. Researchers at security firm Cylance say that the recent attack on state-owned oil company Rosneft was framed to look like a nation-state attack. In reality, the hackers in that case were just criminals. That said, there are surely nation-state campaigns afoot. China is probing U.S. Navy contractors, The Wall Street Journal reports, looking for all manner of detail about naval technology. And Russia's Fancy Bear is still phishing widely in foreign governments' ponds.

Dave Bittner: [0:03:26] Nonstate actors are reappearing during this holiday season, too. ISIS has, for some time, been relatively quiet in cyberspace, but its propaganda arm this week hailed the Strasbourg Christmas market murderer as one of its soldiers. The terrorist, killed by police, was apparently radicalized in prison. Whether ISIS played a role in inspiring him or is simply retrospectively and opportunistically claiming responsibility is unclear, but the terror group, as always, is attentive to the seasons in its propaganda.

Dave Bittner: [0:03:59] A fake bomb threat is being used to extort bitcoin from businesses, mostly in the U.S. and Canada. Several businesses closed and evacuated their offices, but no bombs were found. The threats are being distributed with a demand for $20,000 in bitcoin, payable by close of business. The subject line of the shakedown email is Hollywood-esque - "Think Twice," things like that. The text goes on in the broken English that's become customary in spam land.

Dave Bittner: [0:04:28] We quote, "There is the bomb in the building where your business is located. My recruited person constructed an explosive device under my direction. It has small dimensions, and it is very hidden well. It is impossible to damage the supporting building structure by my bomb, but there will be many wounded people if it detonates. My man is controlling the situation around the building. If any unnatural behavior, panic or emergency is noticed, he will power the device. I want to suggest you a deal. You send me $20,000 in bitcoin, and the bomb will not detonate. But do not try to fool me. I warrant you that I have to call off my man solely after three confirmations in blockchain network."

Dave Bittner: [0:05:05] The poorly worded email threats bear the common usage and grammatical markers of spam, but it's just badly done. Connoisseurs of spam will notice that the missive lacks the appealing je ne sais quoi of the way The Shadow Brokers used to talk. And when we read stuff like this, we miss The Brokers, and we hope they got a better job somewhere, maybe with wealthy elite on some personal service contract. Whoever they are, they seem to be explosives buffs. Apart from their mention of TNT, the scammers, in some of their communications, specify the explosive as hexogen. Our CyberWire energetic materials desk tells us hexogen is another name for RDX, which, pound for pound, packs even more punch than TNT.

Dave Bittner: [0:05:50] Ars Technica points out, reasonably, that not even someone who writes like this can seriously expect to make money this way. It would take Regular Joe Lunchbucket and Janie Sixpack - and those are people like you and me, my friend - well past close of business to figure out how to get ahold of some bitcoin. Even a bitcoin baron would likely think twice and call the police. WIRED said this morning that the total sum that appeared to have been deposited in the five or so bitcoin wallets amounted to less than two bucks.

Dave Bittner: [0:06:19] So if you follow Ars in their speculation, it would seem that either the goons behind the keyboard haven't thought this one through - always a possibility in the underworld - or they're doing it for the lulz, or they're actually just interested in disruption. But unlike sextortion, which this threat is clearly modeled on, a bomb threat - even an implausible one - is harder to laugh off than a promise to show pictures of you looking at adult content - which of course, none of you would do. But maybe your friends would.

Dave Bittner: [0:06:48] In all seriousness, most people have to take bomb threats seriously, and many of them have. The San Francisco Chronicle says the local municipal railway's bus lines, the Jewish Community Center and the San Francisco Fire Credit Union were disrupted. ABC 7 Chicago says that multiple hospitals and businesses in that city closed. And the Tampa Bay Times says there have been building closures and school lockdowns in Tampa. Do what you need to do to keep your people safe, but take comfort from the fact that major police departments across North America are calling this one a hoax.

Dave Bittner: [0:07:25] The U.S. Department of Homeland Security's National Cybersecurity and Communications Integration Center, the NCCIC, part of the Cybersecurity and Infrastructure Security Agency, says this is a worldwide campaign. They recommend you do three things if you get this email. First, don't respond or try to contact the sender. Second, don't pay the ransom. And third, report the email to the FBI's Internet Crime Complaint Center or your local FBI office. A writer posting over at the SANS Institute suggests "boomstortion" or "bombstortion" as a name for this kind of caper. We're going to go with boomstortion.

Dave Bittner: [0:08:09] It's time to tell you about our sponsor, ThreatConnect. With ThreatConnect's in-platform analytics and automation, you'll save your team time while making informed decisions for your security operations and strategy. Find threats, evaluate risk, and mitigate harm to your organization. Every day, organizations worldwide leverage the power of ThreatConnect to broaden and deepen their intelligence, validate it, prioritize it and act on it. ThreatConnect offers a suite of products designed for teams of all sizes and maturity levels. Built on the ThreatConnect platform, the products provide adaptability as your organization changes and grows. Want to learn more? Check out their newest white paper titled "Threat Intelligence Platforms: Open Source Versus Commercial." As a member of a maturing security team evaluating threat intelligence platforms, or TIPs, you may be asking yourself whether you should use an open-source solution, like the Malware Information Sharing Platform, or MISP, or buy a TIP from one of the many vendors offering solutions. In this white paper, ThreatConnect explains the key technical and economic considerations every security team needs to make when evaluating threat intel solutions to help you determine which is right for your team. To read the paper, visit threatconnect.com/cyberwire. That's threatconnect.com/cyberwire. And we thank ThreatConnect for sponsoring our show.

Dave Bittner: [0:09:44] And joining me once again is Malek Ben Salem. She's a senior R&D manager for security at Accenture Labs. Malek, it's great to have you back. We wanted to touch today on some vulnerabilities with smart speakers, specifically ways that they can misinterpret commands. What do we need to know today?

Malek Ben Salem: [0:10:03] I think a lot of people, by now, have heard about adversarial examples against computer vision systems, particularly those that are being used by self-driving cars, where you can have, you know, the vision system misinterpret signage. If they see a stop sign, sometimes that could be misinterpreted as a speed limit sign by adding some perturbation to the image that they see.
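
A minimal sketch of how such a perturbation can be computed, using the fast gradient sign method (FGSM), one well-known way of crafting adversarial images. The pretrained classifier and the (image, label) inputs here are illustrative assumptions, not anything specific to the systems Malek describes:

```python
# Sketch of FGSM, one common way to craft the image perturbation described
# above. Model choice and inputs are assumptions for illustration.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(pretrained=True).eval()

def fgsm_perturb(image: torch.Tensor, label: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    """image: (1, 3, H, W) float tensor in [0, 1]; label: (1,) class index."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Nudge every pixel slightly in the direction that increases the loss,
    # so the change is nearly imperceptible but the prediction can flip.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

The same idea carries over to audio: the perturbation is optimized against a speech recognizer's loss instead of an image classifier's, then mixed into an ordinary waveform at low amplitude.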

Malek Ben Salem: [0:10:33] Well, a similar thing happens also with smart speakers that are listening to voice commands. So you can issue a voice command to, you know, your Alexa or your Google Assistant or your Apple Siri, and there is a possibility for the attacker to add noise that can be misinterpreted by that system as a real command. Now, we've seen this before with something called the DolphinAttack, where, you know, that noise is added, it gets misinterpreted, and there is some illegitimate action taken by Alexa or Siri, etc. But in that case, the noise is heard by the user, so they may be aware that something wrong is happening.

Malek Ben Salem: [0:11:30] What we're talking about here is that that noise can be designed or engineered in a way that it looks or it sounds very normal. You can embed it, let's say, within a song. So you'd be thinking that you're listening to some song, but that noise that was added - that perturbation that was added to the sound bites of the song - can be misinterpreted by your digital assistant as some command. This attack has been tested. And you know, you can embed that sound in a YouTube song, for instance. You can publish that song, and everybody who'd be listening to that song would be a victim of this type of attack.

Dave Bittner: [0:12:20] And what's the specific vulnerability here? What sort of information could they harvest by triggering the device?

Malek Ben Salem: [0:12:28] So they can issue any command that the normal user would issue. So they can, you know, have Google read email, have Google restart a phone, have Echo open a front door, for instance. And they can, you know, make, say, a Capital One credit card payment. These have been successful attacks that have been tested by the researchers conducting this research, at success rates that, you know, reach 90 percent.

Dave Bittner: [0:13:05] Wow. And is there any effective way to prevent this? I suspect, you know, if you want to be able to use the functionality of these devices, they need to be listening all the time.

Malek Ben Salem: [0:13:15] Yeah. So what can be done is, again, looking back at these machine learning models that we develop to interpret sound - these acoustic models that are listening, interpreting that sound and transforming it into text. Those have to be hardened and made robust against these types of adversarial examples. So it's basically securing the machine learning models that we're creating. They will never be 100 percent secure, but what we can do is, again, make them more robust. There are techniques to do that, like training them on adversarial examples upfront, but that effort has to happen.
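
A minimal sketch of the adversarial-training idea Malek mentions - craft perturbed copies of each training batch and include them in the loss so the model learns to resist them. The model, optimizer, and data here are assumed to exist elsewhere; this is a sketch of the loop shape, not a production hardening recipe:

```python
# One adversarial-training step: perturb the batch with FGSM, then train on
# a mix of clean and perturbed examples.
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, inputs, labels, epsilon=0.03):
    # 1. Craft adversarial copies of the batch with a single FGSM step.
    inputs_adv = inputs.clone().requires_grad_(True)
    F.cross_entropy(model(inputs_adv), labels).backward()
    inputs_adv = (inputs_adv + epsilon * inputs_adv.grad.sign()).detach()

    # 2. Train on clean and adversarial examples together.
    optimizer.zero_grad()
    loss = 0.5 * F.cross_entropy(model(inputs), labels) \
         + 0.5 * F.cross_entropy(model(inputs_adv), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Hardening a deployed speech pipeline is, of course, more involved than this, but the basic move - training through adversarial examples upfront - looks the same.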

Malek Ben Salem: [0:14:05] Again, similar to what we're doing with vision systems, I think we need to be thinking broadly across all machine learning models. We need to be thinking that AI and machine learning are creating a new attack surface, and we need to be aware of that attack surface and start thinking about ways to reduce it, by rethinking the way we train and develop our machine learning models.

Dave Bittner: [0:14:32] Malek Ben Salem, thanks for joining us.

Malek Ben Salem: [0:14:35] Thank you, Dave.

Dave Bittner: [0:14:40] And now a few words about our sponsor, our friends in the technology news world, Techmeme. You probably know Techmeme from their curated, comprehensive online view of all the day's tech news. And now they also produce the "Techmeme Ride Home" podcast. If you like the CyberWire and you're looking for even more technology news, "Techmeme Ride Home" is the podcast for you. We're fans, and we think you'll like it, too. It's 15 to 20 minutes long and hosted by veteran podcaster Brian McCullough. You may know Brian from the "Internet History Podcast." The "Ride Home" distills Techmeme's content into, well, the kind of things you'd like to listen to on the ride home - headlines, context and conversation about the world of tech. It posts every weekday afternoon around 5:00 p.m., great for afternoon drive time in the U.S. Be sure to search your favorite podcast app for "Ride Home" and subscribe today. That's the "Techmeme Ride Home" podcast. And we thank the "Techmeme Ride Home" podcast for sponsoring our show.

Dave Bittner: [0:15:45] My guest today is Laura Noren. She's director of research at Obsidian Security, a company building machine learning-based technologies to support enterprise security, where she focuses on data science, ethics and human-centered design.

Laura Noren: [0:16:01] Data science, as probably anyone who's ever done it knows, is a very, very important part of a product build, but it has to follow the engineering build. So a lot of what we work on is getting the engineering right. And then once we've kind of built our infrastructure and built our pipelines, which are designed for data science purposes, then we get to start ingesting data and building models around that data.

Dave Bittner: [0:16:27] And so at what point does the importance of ethics come into play?

Laura Noren: [0:16:32] So in my opinion, ethics comes in pretty much throughout. And it actually really is helpful if the data science team has been involved in building some of the engineering infrastructure, because what we want to aim to do is to be able to ask questions about the broader impacts of the technology that we're building. And this would apply to any technology firm. Essentially, technologists are kind of world makers, world builders. They're shaping the way that people are able to inhabit the world. And of course, they work for a company, so they're aimed at a particular corporate purpose or set of purposes. But they typically aren't asked to think about broader social impacts because it's not in the day-to-day operations of how companies work.

Laura Noren: [0:17:16] But we are starting to ask those questions very early. And cybersecurity is a particularly interesting area in which to do this, because we're weighing something that's very important - security, which is usually afforded at the collective level. You do security for an entire company or an entire country. That's what we're in the business of doing. And that is often perceived as being at odds with individual privacy. That's not always the case, but in data science, that's kind of a crux that you run into a lot of times.

Laura Noren: [0:17:46] And it's not just cybersecurity that runs into this problem. Marketing runs into this problem of, you know, how do you make predictions about who's likely to buy your product when that sometimes feels like it might be challenging ideas about privacy? You're looking at signals in a large corpus of behaviors. And in order to do that, it's useful with data science to have individual insight - insight into what individuals are doing. And that's where you run into questions about privacy, which is one of the ethical concerns that we have, although it's not the only one.

Dave Bittner: [0:18:18] Yeah. It's interesting to me 'cause I think - would it be correct to say that not all companies have people on board who are specializing in the ethical side of things? And I suppose at Obsidian, that's something that the powers that be have decided is a worthwhile investment.

Laura Noren: [0:18:38] Yeah, it is actually kind of a truism. If you look across, you know, which companies are the most likely to have a chief ethics officer - I mean, now that's, you know, anyone operating in Europe, because they have to, following GDPR. But if you look before that, companies like Microsoft had a chief ethics officer and really put that person right next to the CEO's office. It's older companies that have made a few mistakes and have run into some significant regulatory hurdles.

Laura Noren: [0:19:09] Companies that are older have usually been the ones that have these ethics roles in them. And it's usually because their technology has run out ahead of them, or the business decisions they're making have kind of gotten ahead of where regulations are, and then the regulation catches up. And it's costly. Those are usually the companies where we see this, so it is particularly unusual to have a startup that's trying to build in ethics from the very beginning.

Dave Bittner: [0:19:31] Within the organization itself, is there a natural, I suppose, almost healthy tension - you know, I can imagine the marketing folks want to achieve certain things. The technology folks want to achieve certain things. And so I could see there being push-pull between those - even the legal department, between them and what you're tasked with doing.

Laura Noren: [0:19:51] Yes. I mean, legal tends to be very interested in compliance, which is great. Compliance - you know, any law is always reactive to a situation, so it tends to lag a little bit behind what an ethicist might want to do. So legal isn't necessarily antagonistic to ethics. It's just - they're not the same.

Laura Noren: [0:20:10] Legal is usually fairly supportive of what we're trying to do, though it may take some education to get on the same page about what each goal is. But it is important to point out that legal compliance and ethical principles are not the same. The beauty and the strength of ethics is that it's a set of principles that can be forward-looking, not just reactive.

Dave Bittner: [0:20:31] Now, what is your advice for companies that are either just starting up and want to get a handle on this or perhaps just want to - you know, it's something that they feel as though they've neglected. What's a way to approach this when someone's coming at it for the first time?

Laura Noren: [0:20:45] I would recommend approaching it both from the top down and from the bottom up. So you want to have leadership really taking this seriously and able to hear from the data scientists, from the engineers, when things might be getting a little creepy. So we have kind of created a reporting structure where anyone on any of those teams who sees something questionable can raise it. For example - it turns out, you can actually learn a lot about what's going on in a company by reading a file name. We hadn't ever thought of hashing file names because a file name seems somewhat innocuous at the outset.

Laura Noren: [0:21:20] But someone on our team said, hey, you can actually learn a lot from file names. Is there a way to still maintain some insight into what's being sent around without reading entire file names? How can we handle that? And they have someone to take that concern to. If you don't appoint a person for that, then chances are that idea that crosses an engineer's or data scientist's mind is just going to fade. You know, they'll think about it, and then they'll get on to some other problem. And it won't go anywhere.

Laura Noren: [0:21:46] But if you have a feedback mechanism, there's a place to say, hey, there's a potential privacy issue here that nobody had really thought about. Can we think about it? Is there an easy fix for this? And for something like that, there might be a relatively easy fix. So, you know, not everything is about saying no. It's about saying, well, how can we do this in a way that's more privacy-protecting?
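
A minimal sketch of the kind of easy fix Laura is gesturing at - pseudonymizing file names with a keyed hash, so analysts can still see that the same file is moving around, and what type it is, without reading the name itself. The key handling and names here are illustrative assumptions, not Obsidian's actual implementation:

```python
# Sketch: replace a file name with a keyed-hash token, keeping only the
# extension. Equal names map to equal tokens, so movement of the same file
# stays visible, but the name can't be recovered without the key.
import hashlib
import hmac
import os

# Hypothetical deployment secret; key storage is an assumption here.
SECRET_KEY = os.environ.get("FILENAME_HASH_KEY", "dev-only-key").encode()

def pseudonymize_filename(filename: str) -> str:
    stem, ext = os.path.splitext(filename)
    token = hmac.new(SECRET_KEY, stem.encode(), hashlib.sha256).hexdigest()[:16]
    return token + ext

# "Q3_layoffs_draft.xlsx" -> e.g. "3f2a9c0d1b4e5f67.xlsx" (varies with the key)
```

An unkeyed hash would be easy to brute-force against a dictionary of likely file names; the HMAC key is what keeps the tokens useful internally but opaque to anyone who obtains the logs.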

Laura Noren: [0:22:08] It helps if the person to whom you're reporting this stuff has some strengths in the social sciences and kind of understands, you know, the history and how these things have played out in the past, and also has some technical chops, so that they can suggest a fix rather than just suggesting we don't do X, Y or Z. A stop sign isn't all that useful. You know...

Dave Bittner: [0:22:28] Right.

Laura Noren: [0:22:28] A redirection is much more useful, so that's the top-down part. Have a very intelligent person, trained kind of across domains to understand what should happen next, to whom people can report without being, you know, punished or singled out in any way. It does help to sort of have some lightweight programming - corporate programming that kind of touches people.

Laura Noren: [0:22:51] So when you hire junior people, assign them to mentors - someone who's within their kind of managerial reporting structure and someone who's not in that structure - who can help them not only professionalize, you know, guide their careers, but also learn to articulate things they're seeing that we may want to question. If you don't teach someone how to articulate that, it's unlikely that everyone's going to learn how to do it on their own. And it usually works best in sort of one-on-one situations, so that's kind of the bottom up.

Laura Noren: [0:23:22] Anyone who's aware of these things can start framing conversations about, well, what's the broader social impact of what we're building? Take a company like Google - everyone uses Google, so it's a nice example. What's the broader social impact of some of the things that they do?

Laura Noren: [0:23:38] Those are the conversations that, you know, we can have our mentors have with some of the younger staffers - to teach them that it's completely within their wheelhouse to ask those bigger questions, that they don't just need to stay in a track where they build stuff and never get to ask the big questions.

Dave Bittner: [0:23:55] That's Laura Noren from Obsidian Security.

Dave Bittner: [0:24:02] And that's the CyberWire. Thanks to all of our sponsors for making the CyberWire possible, especially to our sustaining sponsor, Cylance. To find out how Cylance can help protect you using artificial intelligence, visit cylance.com. And Cylance is not just a sponsor. We actually use their products to help protect our systems here at the CyberWire. And thanks to our supporting sponsor VMware, creators of Workspace ONE Intelligence. Learn more at vmware.com.

Dave Bittner: [0:24:30] The CyberWire podcast is proudly produced in Maryland out of the startup studios of DataTribe, where they're co-building the next generation of cybersecurity teams and technology. Our CyberWire editor is John Petrik, social media editor Jennifer Eiben, technical editor Chris Russell, executive editor Peter Kilpe. And I'm Dave Bittner. Thanks for listening.