Dave Bittner: [00:00:03:22] They're hoping in Australia that those tapes made it to the shredder and didn't fall off the truck. Equifax's board of directors gets reelected. Are China's espionage services preparing the battlespace for a supply chain attack? New Spectre-like vulnerabilities are found in Intel chips. Google and Amazon clamp down on domain fronting and anti-censorship advocates are unhappy. Here Kitty, we have Monero for you. And a change of command at NSA and US Cyber Command.
Dave Bittner: [00:00:39:01] And now, a few words about our sponsor, Dragos, the leaders in industrial control system and operational technology security. In their latest white paper, Dragos and OSIsoft present a modern day challenge of defending industrial environments and share valuable insights on how the Dragos/OSIsoft technology integration helps asset owners respond effectively and efficiently. They'll take you step by step through an investigation, solving the mystery of an inside job using digital forensics with the Dragos platform and the OSIsoft PI system. Download your copy today at thecyberwire.com/dragos. That's thecyberwire.com/Dragos, and we thank Dragos for sponsoring our show.
Dave Bittner: [00:01:37:09] Major funding for the CyberWire podcast is provided by Cylance from the CyberWire studios at DataTribe, I'm Dave Bittner with your CyberWire summary for Friday, May 4th, 2018. May the fourth be with you.
Dave Bittner: [00:01:52:03] Australia's Commonwealth Bank gets a black eye from its loss of about 20 million customers' records. In 2016, the bank engaged Fuji Xerox to decommission one of the Commonwealth Bank's data centers, and that entailed secure destruction of 15 years' worth of customer statements on the center's backup tapes. After the decommissioning, however, the bank became aware that it didn't have the certificate that would have vouched for the tapes' destruction, nor could the tapes themselves be found. After looking around and considering various possibilities, including but not limited to the off chance that the records fell off the truck on their way to destruction, the bank decided that the records had, in fact, been destroyed and that there was no need to notify the customers.
Dave Bittner: [00:02:37:12] The incident appears to have been an accident and not a hack, and probably customer accounts weren't compromised, but the bank's failure to notify customers when it realized what had happened doesn't look good. Give them credit for retracing the delivery truck's route and scouring the roadside for fallen tapes, but still. The Australian Prudential Regulation Authority said Tuesday that trust in Australia's banks had been "badly eroded" and that Commonwealth Bank in particular had "fallen from grace". The bank will be required to carry an additional billion dollars in regulatory capital as a result of that fall.
Dave Bittner: [00:03:18:17] Commonwealth Bank has been commendably contrite and promises to do better in the future. Its leaders might take heart from this week's elections for Equifax board members. Despite the horrific data breach the credit bureau endured on their watch, every member of the board who stood for re-election was returned to office by the shareholders, who are either unusually discerning, forgiving, or inattentive. We're guessing door number three. Still, congratulations and best wishes to Equifax. May your housecleaning and restoration continue apace.
Dave Bittner: [00:03:55:07] Researchers at ProtectWise think they discern a shift in Chinese cyber espionage: a focus on IT staff in targeted enterprises, and collection of code-signing certificates. These are taken as signs of preparation for supply chain attacks.
Dave Bittner: [00:04:12:05] Intel has confirmed that Spectre-like chip vulnerabilities reported by an industry site in Germany are real. There are eight of them, according to c't, the German computer magazine, and Intel is working on fixes. c't calls the flaws "Spectre-NG". A number of researchers appear to have contributed to the discovery, Google's Project Zero among them. One of the newly discovered issues is arguably more serious than the original Spectre problem. It could be exploited, some think, to bypass virtual machine isolation from cloud hosts and then infiltrate sensitive data, including passwords and keys. For all that, researchers are cautiously optimistic that the flaws are relatively unlikely to see widespread exploitation. Intel plans to roll fixes out in two tranches, one this month and a second in August.
Dave Bittner: [00:05:09:19] Researchers at security firm Imperva warn of "Kitty", a cryptominer that specializes in Monero. Kitty exploits the so-called "Drupalgeddon 2.0" remote code execution flaw, which has been patched. Kitty is particularly problematic, SC Magazine reports, in that it compromises web application servers, from whence it goes on to compromise future users of apps running on those servers.
Dave Bittner: [00:05:37:13] Amazon and Google have, as expected, put an end to domain fronting, a feature widely used by services like Open Whisper's Signal to evade internet censorship. Google began the process some weeks ago, pointing out that domain fronting had been an accidental and not a supported feature of their content-delivery system. Amazon shut the option down this week, telling Open Whisper that their use of Amazon's CloudFront would be suspended immediately if Open Whisper's Signal continued using third-party domains without their permission.
Dave Bittner: [00:06:10:22] In domain fronting, an app like Signal is able to obscure a connection's destination. Thus, as far as a Russian, or Chinese, or Qatari, or other state censor is concerned, they're simply seeing a connection to Google or Amazon, not to a prohibited service like Signal. The censors could either block nothing, or they could shut down everything provided by the big content delivery networks, which would be as close to shutting down the internet as makes little difference. The upshot, as the Electronic Frontier Foundation and others put it, is that Amazon and Google have elected, in their business models, to foreclose certain ways of evading censorship.
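For readers curious about the mechanics described here, domain fronting can be sketched in a few lines. This is a minimal illustration only, with hypothetical hostnames (cdn.example.com standing in for the CDN's front domain, signal.example.org for the fronted service): the DNS lookup and the TLS handshake name only the front domain, while the real destination travels inside the encrypted tunnel as an HTTP Host header.

```python
# A minimal sketch of domain fronting. Hostnames are invented examples.
# A censor observing the wire sees only a connection to FRONT_DOMAIN;
# the Host header naming HIDDEN_SERVICE is encrypted inside TLS.

FRONT_DOMAIN = "cdn.example.com"       # what the censor sees (DNS query, TLS SNI)
HIDDEN_SERVICE = "signal.example.org"  # the real destination, hidden inside TLS

def build_fronted_request(path: str) -> bytes:
    """Build the plaintext HTTP request that would travel inside the TLS
    tunnel negotiated with FRONT_DOMAIN. Only the CDN, after decrypting,
    sees the Host header and routes the request to HIDDEN_SERVICE."""
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {HIDDEN_SERVICE}\r\n"
        "Connection: close\r\n"
        "\r\n"
    ).encode()

request = build_fronted_request("/v1/messages")
```

Since the front domain appears nowhere in the inner request, a censor's only options are the two Bittner describes: block nothing, or block the CDN wholesale.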
Dave Bittner: [00:06:51:14] US Cyber Command today was officially elevated to Combatant Command Status, putting it on a par with major military organizations like US Strategic Command. General Paul Nakasone got his fourth star as he assumed command of Cyber Command and duties as Director, National Security Agency. Nakasone replaces Admiral Michael Rogers, who now enters retirement. So, hail and farewell, respectively, General Nakasone and Admiral Rogers.
Dave Bittner: [00:07:22:16] Hackers who don't like the US state of Georgia's proposed anti-hacking law have protested by, wait for it, hacking sites in the Peach State. So, this is arguably better thought out than dim-witted homages to war criminals on an Arizona highway sign, but still, really. The hacktivists aren't alone in thinking the law a bad one. The Man his-self, in the person of big tech companies, is inclined to agree. But there are surely better ways of making a point. To all you young techno-libertarians out there, you say you want a revolution, but if you go hacking some sites in the County of Barrow, you ain't going to make it with anyone anyhow, or that's what some old guys told us, anyway.
Dave Bittner: [00:08:11:08] I'd like to give a shout out to our sponsor BluVector. Visit them at bluvector.io. Have you noticed the use of file-less malware is on the rise? The reason for this is simple. Most organizations aren't prepared to detect it. Last year, BluVector introduced the security market's first analytic specifically designed for file-less malware detection on the network. Selected as a finalist for RSA's 2018 Innovation Sandbox Contest, BluVector Cortex is an AI-driven, sense and response network security platform that makes it possible to accurately and efficiently detect, analyze and contain sophisticated threats. If you're concerned about advanced threats like file-less malware or just want to learn more, visit bluvector.io. That's bluvector.io, and we thank BluVector for sponsoring our show.
Dave Bittner: [00:09:12:01] And joining me once again is Johannes Ullrich. He's from the SANS Technology Institute and he's also the host of the ISC Stormcast podcast. Johannes, welcome back. We had the recent news about hardware flaws like Rowhammer and Spectre, but you wanted to make the point that maybe we need to look into the past to be reminded that some of these things might not be so new.
Johannes Ullrich: [00:09:34:03] Yes, and the reason I am saying that is, you know, being a developer myself, you always assume that hardware is flawless, which is kind of odd, because I know my code is not flawless, so why should the developers that develop hardware be any better in writing code? And that's essentially what they do, even if it doesn't look like code; they design systems which, of course, have flaws. And so I looked a little bit in the history here. How old are these flaws? Now, Spectre and Meltdown were the big hit recently. Turns out, actually, I think it was around 2006, 2008, papers were already being published that essentially talked about this particular flaw. If you have this speculative execution, and code that may not be supposed to be executed based on privilege settings, will it get executed? And then, if you don't clean up right, well, you end up with a privilege escalation vulnerability, exactly what Spectre was about. Then I looked at Rowhammer. It's this vulnerability, a little bit older than Spectre and Meltdown. Essentially what you do is you flip certain bits in memory really fast, and that affects the neighboring bits that you may not have access to, and with this, you can manipulate memory that you're not supposed to be able to manipulate.
Johannes Ullrich: [00:10:57:10] This was even more amazing when it comes to old vulnerabilities. It turned out good old magnetic core memory, which was used back in the '60s and such, had exactly this vulnerability, and it was a well described phenomenon. PDP-11s, an old Digital computer that was used quite a bit, actually had a very specific feature built into the system, where you could calculate a measure of what's called the worst-case noise, which means, if you write certain patterns to memory, you may flip additional bits. So people maybe should look at these old research papers before they design new systems, and not just write these flaws off as a side effect of new systems being too fast or overly clever.
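The disturbance effect Ullrich describes, in modern DRAM and in old core memory alike, can be illustrated with a toy simulation. This is not an exploit, and the geometry and flip probability below are invented purely for illustration: rapid accesses to one row occasionally flip bits in physically adjacent rows the attacker never wrote to.

```python
import random

ROWS, COLS = 8, 16
FLIP_PROBABILITY = 0.3  # invented: chance one access disturbs a neighbor bit

def hammer(memory, row, iterations, rng):
    """Repeatedly 'access' one row; each access may flip a random bit in a
    physically adjacent row. Returns the number of disturbance flips, the
    effect behind Rowhammer (and worst-case noise in old core memory)."""
    flips = 0
    for _ in range(iterations):
        for neighbor in (row - 1, row + 1):
            if 0 <= neighbor < ROWS and rng.random() < FLIP_PROBABILITY:
                bit = rng.randrange(COLS)
                memory[neighbor][bit] ^= 1  # flip a bit we never wrote to
                flips += 1
    return flips

rng = random.Random(42)
memory = [[0] * COLS for _ in range(ROWS)]  # start with all-zero memory
flips = hammer(memory, row=4, iterations=200, rng=rng)
```

The hammered row itself ends up untouched; only its neighbors are corrupted, which is exactly why the attack subverts normal access-control assumptions.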
Dave Bittner: [00:11:55:10] I remember, in the past couple of decades, probably around the Pentium era, there was a lot of publicity about some of the processors having issues with some floating point calculations, where you could ask a PowerPC processor one math question and ask a Pentium processor the same math question, and you wouldn't necessarily get the same answer.
Johannes Ullrich: [00:12:18:03] Correct. Back then, Intel actually did a big recall, and I remember doing it myself. I received this new processor I had to swap for the old one. It was one of these real weird bugs where, if you used one particular number, there was a bug in the processor that would essentially interpret that number differently. Back then, it was a little bit easier to swap CPUs; these were usually desktops with socketed processors and such. Today, even if Intel attempted a recall it would be quite difficult to exchange CPUs as they're mostly soldered in these days, so that really wouldn't work that well. I can remember my old Commodore 64 had a special command. When you sent it, it would physically destroy the computer. So yes, these problems always existed. People just seem to forget that hardware really isn't perfect, and your software should not assume that hardware is perfect.
Dave Bittner: [00:13:17:04] That's a great point worth remembering. Johannes Ullrich, thanks for joining us.
Dave Bittner: [00:13:26:21] Now a moment to tell you about our sponsor, ObserveIT. It's 2018. Traditional data loss prevention tools aren't cutting it anymore. They're too difficult to deploy, too time consuming to maintain and too heavy on the endpoint. They are high maintenance and require endless fine tuning. It's time to take a more modern approach. With ObserveIT, you can detect insider threats, investigate incidents quickly and prevent data loss. With its lightweight agent and out-of-the-box insider threat library, ObserveIT is quick to deploy and far more effective at stopping data from leaving your organization. That's because ObserveIT focuses on user behavior. It's built to detect and respond to insider threats and it's extremely difficult even for the most technical users to bypass. Bring your data loss prevention strategy into the modern era with ObserveIT. Learn more at observeit.com/cyberwire. That's observeit.com/cyberwire, and we thank ObserveIT for sponsoring our show.
Dave Bittner: [00:14:36:16] My guest today is Philip Tully. He's a Principal Data Scientist at ZeroFOX. At the RSA conference this year, he presented on the topic of artificial intelligence and how we may see more adversaries making use of it soon.
Philip Tully: [00:14:50:04] It's been about a decade now that enterprises, security professionals and defenders have been using artificial intelligence in general, or machine learning-based data driven methods, to detect, prevent and remediate attacks on perimeters. More and more we're seeing the advent of these techniques, and they're applied to more things. Classically, it was applied to problems like spam detection in emails. There was a new wave of approaches involving detecting malware, whether it be binary malware or URLs, also, in the phishing domain, just finding malicious links, detecting botnets, network intrusion attempts and, what I do for ZeroFOX, more recently detecting threats on social media, for example. These type of things have been evolving, and more recently you're starting to see, at least in the academic world and in the research realm, several examples of AI or data-driven techniques being leveraged for offensive purposes and for attack automation.
Philip Tully: [00:16:03:19] At the moment, I want to be clear, and there's a lot of hype around this type of thing, from my point of view and where I stand, I haven't seen any credible evidence of an AI or a data-driven technique being waged for an attack in the wild yet.
Dave Bittner: [00:16:17:04] I'm curious, because what I hear people say often is that the attackers are using the most efficient and also least expensive ways to attack people. They phish people because phishing works; they use ransomware because it works. Is it a matter that, using AI and machine learning, there is a cost associated with that, which makes it unattractive to the adversaries?
Philip Tully: [00:16:40:18] Absolutely. This is a fair point, and this is, I think, one of the primary reasons you don't see these attacks waged often, if at all, currently. But there are certain trends, both in the hardware realm, where you're starting to see increased parallelization, and cloud based computing, and easier access to GPUs, and continuation of Moore's Law, and even technologies that are positioned post-Moore's Law, like quantum computing and neuromorphic computing, that are becoming more available. Nowadays, I can log in to AWS and spin up a box, and start to play with machine learning tools within an hour as a non-expert, and this was never possible five, ten years ago. On the software side, you have trends that match this. The rise of deep learning: the previous generation of machine learning models, from, I would say, ten, 15 years ago and even before then, all relied on hand-tuned features. So you'd have to define in advance what the models that you are building should care about.
Philip Tully: [00:17:48:11] Deep learning automates that process away. You don't need to hand-tune features and do feature engineering anymore. On top of this, you have different trends, you have educational resources like Coursera and code sharing via GitHub and Stack Overflow. These type of things lower the bar for entry. You have lots of open source data sets and pre-trained models, and professional-quality open source libraries like TensorFlow that are being released by these big companies, and these are extremely powerful tools. There's a general trend to try to lower the bar, so what we're seeing more is that, beforehand, you would have to be an expert or get a PhD or get a Masters or have some specialized training in this field to practice these techniques. But I expect, if it's not happening already, I expect more and more for these skills to be taught earlier on in education cycles, in college and in high school, and I think it's going to be par for the course, not even five years away, that people will start to use techniques like this on a more regular basis.
Philip Tully: [00:18:54:05] The trends are all pointing towards lowering the bar for entry, and when you think about that in terms of the attacker, lowering the bar for entry and eliminating these technological hurdles is going to speed up their processes and make them more appealing.
Dave Bittner: [00:19:11:13] Where do you suppose we would see the adversaries first turning to this technology? Is there an area that you think it's most likely?
Philip Tully: [00:19:20:00] I've worked on a project before with a colleague, John Seymour, that was concerned with automating spearphishing, and so we built a tool that didn't take us very long, and so this is what got this idea in motion about the ease of applying machine learning on offense. It took us a few months to build this tool, which went out and procured information from people's Twitter timelines. We had a model that was able to generate Tweets at a high level, and we would be able to take information from each individual user's timeline and seed the model with that information. So, if you're posting a lot on your Twitter or your social media about cybersecurity, or the recent vacation you went on, or the recent movie that you just saw that you loved, the model would be seeded with this interest, this hobby or this general interest that you have and that you're posting about.
Philip Tully: [00:20:13:21] The hypothesis was that, if the post that we targeted you with was concerned with that interest and it aligned with the content of your timeline, you'd be much more likely to click on a link that was served up to you via a Tweet, and this was borne out in the data. We ran a simulation where attacks like this were a lot more successful than your run-of-the-mill question/answer attacks, or randomly targeting people with stuff that didn't necessarily match their timeline. You can do this all using a technique that relies on unsupervised learning. This is a sub-method of machine learning that does not need labeled data in order to work, so to speak. You can basically tell a model, "Hey, we have this distribution of data here, we want you to generate a piece of data or a piece of content that appears similar and has the same or similar statistics as this piece of data and this mountain of data we already have."
Philip Tully: [00:21:10:13] Because you don't need to label that data, or associate each piece of data with a label like "malicious" or "benign", you can just go out and scrape or grab a bunch of data, and that's very easy to scale up, train the model up and start to use it in a much shorter amount of time than it takes a defender to spin up a similar model that might be used for defense, because the defender actually has to label each piece of data, "malicious" or "benign" in order to better predict an attack or a non-attack that's incoming.
Dave Bittner: [00:21:43:01] How should we be preparing, then? It's interesting to me that we're still dealing with the human factor here. The bad guys could be using AI to better fool the people. In this arms race of machine versus machine, AI versus AI, is the weak link still the meat in the middle, the humans?
Philip Tully: [00:22:03:03] Yes. I would say the human is always going to be a weak link in this sense. In that example, it's very clear that, especially on a social media based venue like Twitter, it's hard for a human to decipher whether or not a post was generated by a bot or a human. Attackers have always had an advantage, simply because of what's at stake. They only need to win a few times in order to win that battle overall, whereas blue teams or defenders really need to have detection that approaches 100% success. What's different this time around is that, in the cybersecurity domain, you have politics, or you have a little bit more nuance than you do in generic machine learning or generic image recognition and other natural language processing, other high level applications to which machine learning is applied.
Philip Tully: [00:22:56:11] In those realms, and in the core machine learning research field, you have people sharing data, often with each other, researchers sharing data, and this accelerates the field and makes these models and these methods advanced in a shorter amount of time. The position of the cybersecurity field is a little bit different, because sharing data can be either illegal because of contractual obligations you have with your clients, the data can be too sensitive to share because it contains personal information or whatnot. Data is secret sauce, it's intellectual property, so if you have two companies that are developing a similar approach, they're competing with each other, they're not incentivised to share their data. They want to build a more accurate model than their competitor, and so they view it as something, as data, this fundamental thing that gives them a leg up in this fiercely competitive market.
Dave Bittner: [00:23:48:00] That's Philip Tully from ZeroFOX.
Dave Bittner: [00:23:54:12] And that's the CyberWire. Thanks to all of our sponsors for making the CyberWire possible. Especially to our sustaining sponsor Cylance. To find out how Cylance can help protect you through the use of artificial intelligence, visit cylance.com. And thanks to our supporting sponsor, VMware, creators of Workspace ONE intelligence. Learn more at vmware.com.
Dave Bittner: [00:24:16:00] The CyberWire podcast is proudly produced in Maryland out of the start-up studios of DataTribe, where they're co-building the next generation of cybersecurity teams and technology. Our show is produced by Pratt Street Media, with editor John Petrik, social media editor Jennifer Eiben, technical editor Chris Russell, executive editor Peter Kilpe and I'm Dave Bittner. Thanks for listening.