The CyberWire Daily Podcast 2.1.19
Ep 772 | 2.1.19

No more Apple time-out for Facebook and Google. Inauthentic sites taken down. Fancy Bear paws at Washington, again. Malware-serving ads. Amplification DDoS. Data exposures in India.

Transcript

Dave Bittner: [00:00:03] Apple lets Facebook and Google out of time-out. Russia decides it would like access to Apple data because, you know, it's Russian law. Social networks take down large numbers of inauthentic accounts. Fancy Bear is snuffling around Washington again already with some spoofed think tank sites. A shapeshifting campaign afflicts ads. China sees CoAP DDoS attacks. An Aadhaar breach hits an Indian state as the SBI bank recovers from a data exposure incident.

Dave Bittner: [00:00:39] Time to tell you about our sponsor Recorded Future. If you haven't already done so, take a look at Recorded Future's Cyber Daily. We look at it. The CyberWire staff subscribes and consults it daily. The web is rich with indicators and warnings, but it's nearly impossible to collect them by eyeballing the internet yourself, no matter how many analysts you might have on staff. And we're betting that however many you have, you haven't got enough. Recorded Future does the hard work for you by automatically collecting and organizing the entire web to identify new vulnerabilities and emerging threat indicators. Sign up for the Cyber Daily email to get the top trending technical indicators crossing the web - cyber news, targeted industries, threat actors, exploited vulnerabilities, malware, suspicious IP addresses and much more. Subscribe today and stay ahead of the cyberattacks. Go to recordedfuture.com/intel to subscribe for free threat intelligence updates from Recorded Future. That's recordedfuture.com/intel. And we thank Recorded Future for sponsoring our show.

Dave Bittner: [00:01:44] From the CyberWire studios at DataTribe, I'm Dave Bittner with your CyberWire summary for Friday, February 1st, 2019. Happy Friday, everybody. Apple's time-out punishment of Facebook and Google was sharp, but soon over. TechCrunch reports that Apple has restored Facebook's Enterprise Certification, and with it, employee access to internal apps. The publication also notes that Apple has restored Google's Enterprise Certification. Google's employees can again access iOS versions of pre-launch test apps. Google's Screenwise Meter and Facebook Research collected user data in ways Apple deemed violated its terms of use.

Dave Bittner: [00:02:24] The magazine Foreign Policy suggests Russia envies Cupertino's access. Roskomnadzor, Moscow's telecommunications authority, says it expects Apple to comply with a 2014 law requiring data collected on Russian citizens to be stored on Russian servers, where it must be decrypted on demand should the security service require it.

Dave Bittner: [00:02:47] As much as they've struggled and continue to struggle with content moderation, social media platforms are having more success working against bots and people who are not who they claim to be. Facebook this week continued its purge of inauthentic accounts. The social network has taken down more than 700 pages that were being directed from Iran, amplifying Islamic Republic state media content and targeting audiences in the Middle East and South Asia.

Dave Bittner: [00:03:14] Facebook stopped short of calling it an Iranian government operation; patriotic activism is also possible. Twitter has been active against information operations, as well, offering an account of 2018 election influence attempts emanating from Russia, Iran and Venezuela. The company also took down follow-bot services ManageFlitter, Statusbrew and Crowdfire. Twitter found all of these in violation of its automation rules.

Dave Bittner: [00:03:43] Fancy Bear, Russia's GRU, seems to have hit a prominent Washington think tank. Microsoft said Wednesday in a court filing that it had taken down bogus sites spoofing the Center for Strategic and International Studies, or CSIS. CSIS has long studied Russian matters, and Fancy Bear's interest in this particular think tank is unsurprising. Bears know where the honey is. Observers are throwing their hands in the air over this one amid speculation that the operation is battlespace preparation for meddling with U.S. elections.

Dave Bittner: [00:04:17] The 2020 election season starts far sooner than any sane person would like, but this is really early. It suggests to many that deterrence is either not working at all or working only imperfectly. U.S. deterrence has involved naming and shaming, lawfare, sanctions and spooky direct messages to Russian government trolls, but these seem insufficient.

Dave Bittner: [00:04:39] The Foundation for the Defense of Democracies, in a mid-term assessment of the current U.S. administration's security policies, coincidentally notes how difficult it's been to deter Russian hacking and information operations and suggests that if such things continue, the U.S. should respond directly in kind. And if they do, then look to the security of your Nintendo Switch, Mr. Putin.

Dave Bittner: [00:05:04] Researchers at The Media Trust report the discovery of adaptive malware that's hitting Alexa 500 sites. The security firm calls the campaign ShapeShifter-3PC. The Media Trust says the campaign has worked through 44 adtech vendors to afflict visitors to 49 premium publishers that rank among the Alexa 500. As attacks were detected and blocked, the campaign would shift to new ad formats, new delivery channels and so on.

Dave Bittner: [00:05:34] Security firm NETSCOUT reports a wave of CoAP reflection amplification DDoS attacks. CoAP, the Constrained Application Protocol, is a lightweight UDP-based protocol that is, for the most part, used by mobile phones in China, and it's there that the effects of the denial of service attacks have been mostly felt. But CoAP is expected to come into widespread internet of things use, and as it does, the problem can be expected to spread with it.
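
For readers curious about why CoAP makes such an attractive reflector, here is a minimal sketch, in Python, of the arithmetic behind amplification. It sends an ordinary CoAP GET for /.well-known/core - the protocol's standard resource-discovery path - to a device you control and compares request and response sizes; the hostname is a hypothetical placeholder, and a real reflection attack differs in that the attacker spoofs the victim's source address, which this sketch does not do.

```python
# Minimal sketch (assumption: TEST_HOST is a CoAP device you own or a lab
# test server; Python 3 standard library only). It measures how much larger
# a CoAP discovery response is than the request that elicited it.
import socket

TEST_HOST = "coap.example.local"   # hypothetical placeholder - use your own device
COAP_PORT = 5683                   # standard CoAP UDP port

# CoAP header: version 1, type 0 (confirmable), token length 0,
# code 0.01 (GET), message ID 0x1234.
header = bytes([0x40, 0x01, 0x12, 0x34])

# Two Uri-Path options (option number 11): ".well-known" then "core".
# The first byte of each option packs (option delta << 4) | option length.
options = (
    bytes([(11 << 4) | len(b".well-known")]) + b".well-known"   # delta 11
    + bytes([(0 << 4) | len(b"core")]) + b"core"                 # delta 0 (same option)
)

request = header + options

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.settimeout(3)
    sock.sendto(request, (TEST_HOST, COAP_PORT))
    response, _ = sock.recvfrom(65535)

print(f"request: {len(request)} bytes, response: {len(response)} bytes, "
      f"amplification ~{len(response) / len(request):.1f}x")
```

Because CoAP runs over connectionless UDP, nothing in the protocol stops a request with a forged source address from directing that larger response at somebody else, which is the whole reflection problem.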

Dave Bittner: [00:05:58] Another breach has compromised a large number of Aadhaar numbers from India's national identity system - over 100,000. In this case, it wasn't a centralized breach. Instead, the system the state of Jharkhand used to track the work attendance of government employees proved susceptible to scraping. TechCrunch reported that the exposed data, which had apparently been left without password protection since 2014, included names, job titles and partial phone numbers of 166,000 workers.

Dave Bittner: [00:06:29] Bad enough, but unfortunately, the file name on the workers' photos that accompanied these bits of PII was simply the individual's Aadhaar number. The Aadhaar number, which over 90 percent of Indian citizens have, is roughly analogous to an American Social Security number, at least insofar as it picks out a single unique individual.

Dave Bittner: [00:06:51] Breaches of Social Security numbers are bad enough, although with all the breaches of the last 10 years, most Americans have arrived at a kind of learned helplessness with respect to their Social Security numbers. They don't like them being exposed, and there are disadvantages to their compromise. But unfortunately, many - perhaps most - now feel that that particular horse has already fled the barn. And the Social Security number is no longer used as much as it once was to establish identity. It said right on the card that it wasn't to be used for identification purposes, although, of course, inevitably it was.

Dave Bittner: [00:07:27] Aadhaar is a more serious matter. You can use it - or alternatively, your thumbprint - to prove your identity when you register to vote or sign up for some government service, open a bank account or conduct any number of other transactions. The reasons for exposure aren't entirely clear yet, but it seems that Jharkhand left a lot of data flapping in the breeze, just the way the state of Oklahoma recently did stateside, we observe. So don't get cocky, kids.

Dave Bittner: [00:07:56] Another exposure also hit India this week as the State Bank of India, or SBI - government-owned and the biggest bank in the country - left two months of SBI Quick data exposed without so much as the fig leaf of a password to cover its shame. The information was sitting on a server in a Mumbai data center. SBI Quick is a customer-friendly service that lets people who bank with SBI text or phone in questions about their accounts. Naturally, these communications held information better kept confidential - phone numbers, bank balances, recent transactions, whether a check had been cashed, things like that.

Dave Bittner: [00:08:33] None of these, even taken together, amounts to what the dark web black marketeers would call fullz, but they can be damaging enough. One possibility is that even such partial information could be used to target people, particularly people with big bank balances, for social engineering attacks. And there's even an Aadhaar angle here, too. SBI, just a few days earlier, called out the UIDAI - the Unique Identification Authority of India, the government agency that oversees the Aadhaar system - for sloppy data handling practices. So gander, sauce. TechCrunch reports that SBI has now secured the previously open database.

Dave Bittner: [00:09:20] Now, a moment to tell you about our sponsor, ObserveIT. The greatest threat to businesses today isn't the outsider trying to get in. It's the people you trust, the ones who already have the keys - your employees, contractors and privileged users. In fact, a whopping 60 percent of online attacks today are carried out by insiders. Can you afford to ignore this real and growing threat? With ObserveIT, you don't have to. See, most security tools only analyze computer, network or system data. But to stop insider threats, you need to see what users are doing before an incident occurs. ObserveIT combats insider threats by enabling your security team to detect risky activity, investigate in minutes, effectively respond and stop data loss. Want to see it in action for yourself? Try ObserveIT for free - no installation required. Go to observeit.com/cyberwire. That's observeit.com/cyberwire. And we thank ObserveIT for sponsoring our show.

Dave Bittner: [00:10:28] And I'm pleased to be joined once again by Johannes Ullrich. He's the dean of research for the SANS Institute. He's also the host of the ISC StormCast podcast. Johannes, it's great to have you back. Today we wanted to talk about the effectiveness of block lists. What do you have to share with us?

Johannes Ullrich: [00:10:44] Yes. So a block list is something that I'm often asked about with our systems, DShield and the Internet Storm Center. We're collecting a lot of data about IP addresses, and of course, some of that data indicates that IP addresses are not behaving the way they're supposed to - same, of course, for domain names and the like. And what I've found over the last few years is that block lists, the way people typically implement them, are not really all that useful - in particular given the way a lot of web traffic works these days.

Johannes Ullrich: [00:11:22] And we publish a very short block list - just 20 entries for the 20 nastiest networks, if you want to call it that. But even there, we often do see some false positives. And the other problem is that the attacks that you really worry about use very flexible IP addresses. They change their source addresses quite a bit - so there's really not that much use in spending a lot of time and effort implementing block lists.

Dave Bittner: [00:11:52] Now, what about if you're trying to block something like Shodan?

Johannes Ullrich: [00:11:56] Yeah, so Shodan is this search engine that enumerates the internet of things, and we have actually done a test with that recently. One of our STI graduate students did a research paper where he looked at whether or not being listed in Shodan actually makes a difference when it comes to the attack traffic we're seeing. And we didn't really see a correlation there. Now, one thing we did see, however, is that the amount of traffic you're blocking at your firewall that comes from researchers like Shodan is actually quite substantial.

Johannes Ullrich: [00:12:33] It's not a lot of different IP addresses that they're using, but it can be in the 20 to 30 percent range if you're just looking at the number of packets that you're dropping at your firewall that are caused by research scans like Shodan. There are a number of other search engines like that. We also noted that a lot of the published blocklists that you'll find for systems like Shodan are quite incomplete. They use a lot more systems to do their scanning than are actually commonly being published.

Dave Bittner: [00:13:07] So is this a matter of perhaps a blocklist not being the most effective place to use your time and energy?

Johannes Ullrich: [00:13:14] Correct. Like, yes, it blocks some attacks. But are these really the hacks that you worry about? For the most part, what you find in blocklists are things that are sort of these common run-of-the-mill scans. And if you're vulnerable to them, you probably have other problems. The other issue is always the false positive issue. Like, we published, for example, a list of cryptocoin mining pools. And that's sort of a useful list in the sense that cryptocoin miners - well, they're a very common infection tool.

Johannes Ullrich: [00:13:47] And so seeing outbound connections into these cryptocoin mining pools may be an indicator that you are infected. The problem here is that a lot of these pools, for example, hide behind networks like Cloudflare. And once you're blocking Cloudflare IPs, well, you're also blocking thousands of other websites that are associated with Cloudflare. So again, your risk of false positives is rather large.

Johannes Ullrich: [00:14:18] The way I kind of like people to use these lists is - the way I put it is, you know, color your logs - add color to your logs. So instead of blocking, just have tools that add automatic notes to your logs saying, hey, this may be a cryptocoin mining pool. And then you can manually check whether or not this system is infected.
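
Ullrich's "color your logs" suggestion is straightforward to prototype. Here is a minimal sketch, assuming a hypothetical watchlist file of mining-pool IP addresses (one per line) and a firewall log whose fifth space-separated field is the destination IP; neither the file format nor the field position comes from the interview, so adjust both to your own environment.

```python
# Minimal sketch of the "color your logs" idea: annotate rather than block.
# Assumptions (illustrative only): mining_pools.txt lists one IP per line,
# and the firewall log's fifth whitespace-separated field is the destination IP.
import sys

def load_watchlist(path):
    """Read one IP per line, ignoring blank lines and '#' comments."""
    with open(path) as f:
        return {line.strip() for line in f
                if line.strip() and not line.startswith("#")}

def color_logs(log_path, watchlist):
    """Echo each log line, appending a note when the destination IP is on the
    watchlist - a prompt for manual follow-up, not an automatic block."""
    with open(log_path) as f:
        for line in f:
            fields = line.split()
            dst_ip = fields[4] if len(fields) > 4 else ""
            note = "  # NOTE: possible cryptocoin mining pool" if dst_ip in watchlist else ""
            print(line.rstrip() + note)

if __name__ == "__main__":
    # Usage: python color_logs.py firewall.log mining_pools.txt
    color_logs(sys.argv[1], load_watchlist(sys.argv[2]))
```

The point is that nothing gets dropped: a matching line is merely flagged so an analyst can follow up, which sidesteps the Cloudflare-style false-positive problem Ullrich describes.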

Dave Bittner: [00:14:39] All right, it's good advice. Johannes Ullrich, thanks for joining us.

Dave Bittner: [00:14:47] Now I'd like to share some words about our sponsor, Cylance. AI stands for artificial intelligence, of course. But nowadays, it also means "all image" or "anthropomorphized incredibly." There's a serious reality under the hype, but it can be difficult to see through to it. As the experts at Cylance will tell you, AI isn't a self-aware Skynet ready to send in the Terminators. It's a tool that trains on data to develop useful algorithms. And, like all tools, it can be used for good or evil. If you'd like to learn more about how AI is being weaponized and what you can do about it, visit threatvector.cylance.com and check out their report, "Security: Using AI for Evil." That's threatvector.cylance.com. We're happy to say that their products protect our systems here at the CyberWire. And we thank Cylance for sponsoring our show.

Dave Bittner: [00:15:46] My guest today is Daniel Faggella. He's the founder and CEO of Emerj Artificial Intelligence Research, a market research firm focused on the implications of artificial intelligence in business. He believes that the most important ethical considerations of the coming years will concern the creation or expansion of sentience and intelligence in technology.

Dan Faggella: [00:16:08] Generally speaking, AI is seen as kind of the meta-umbrella under which machine learning sits. Now a lot of people will argue that machine learning is the only thing that's actually AI. Today a lot of Ph.D.s - and we interview a lot of them - are of the belief that it sits under the broader umbrella of AI and that there's a lot more vistas to explore under the broader domain of AI. Old-school AI was kind of baking human expertise into a bunch of if-then scenarios to hopefully shake out, you know, some kind of a pachinko-machine decision that a human would make as well.

Dan Faggella: [00:16:44] Machine learning is more hurling a million instances at a bunch of nodes in a neural network to get that network to pick up on patterns and determine what image is cancerous or what tumor images are cancerous or non-cancerous or what pictures have a stop sign or don't have a stop sign, et cetera. So the dynamics are changing. But broadly, in terms of the two terms, those are good ways to understand them.
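
Faggella's contrast between old-school, rule-based AI and machine learning can be made concrete with a toy example. The following sketch, using NumPy and scikit-learn on synthetic data, puts a hand-written if-then rule next to a logistic regression that learns a similar boundary from labeled examples; the "lesion size" feature, its threshold and the data are all invented purely for illustration.

```python
# Toy contrast of the two styles described above, on made-up data.
# Assumptions: NumPy and scikit-learn are installed; the single feature
# ("lesion size") and the 5.0 threshold are illustrative, not medical advice.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Old-school "expert system": a hand-written if-then rule.
def rule_based(lesion_size):
    return 1 if lesion_size > 5.0 else 0   # 1 = flag as suspicious

# Machine learning: let a model infer the boundary from labeled examples.
rng = np.random.default_rng(0)
sizes = rng.uniform(0, 10, size=200).reshape(-1, 1)                # synthetic measurements
labels = (sizes.ravel() + rng.normal(0, 1, 200) > 5).astype(int)   # noisy ground truth

model = LogisticRegression().fit(sizes, labels)

for s in (2.0, 5.5, 9.0):
    print(f"size {s}: rule says {rule_based(s)}, "
          f"model says {model.predict([[s]])[0]}")
```

The rule encodes the expert's threshold directly; the model recovers roughly the same boundary from data, which is the shift Faggella is pointing to.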

Dave Bittner: [00:17:07] Now, one of your focuses is the ethical considerations of these technologies. Where do you see us headed there?

Dan Faggella: [00:17:15] At the highest level, unabashedly, my interest is in sort of the grander transitions of AI in, let's say, the next 30 to 50 years where, I think, we're going to come up with kind of some post-human transition scenarios whereby we have certainly hyper capable and intelligent machines but potentially also exceedingly self-aware machines by maybe, let's say, 2060 or so - and that if we were able to replicate, you know, sentience and legitimate general intelligence in machines, the ethical ramifications of whatever is after people is astronomically important, just like the Earth has a lot more moral weight to it because there's humans here as opposed to, let's say, just amoebas or crickets.

Dan Faggella: [00:17:57] The Earth will have a lot more moral weight when it has astronomically intelligent AI entities. And sort of how we - how the transition beyond humanity occurs, I think, is a - the great concern. But when we speak about these things to business and government leaders, it's a lot more about algorithmic transparency. How do we know these decisions are being made correctly? Responsibility - who's going to be responsible when this machine does something that could harm people or negatively affect people? So it's more about practical applications of individual use cases.

Dave Bittner: [00:18:24] Well, you know, I think back to, I guess, the '80s in the early days. And we had things like - there was a program called ELIZA that would simulate being a therapist for you. And basically, like you said earlier, it was a bunch of if-then things. It would parse your language and just keep on feeding you questions. But, you know, every now and then, it would shoot something back at you that would sort of make you sit up in your seat and go, oh, wow, you just referred to something from earlier in the conversation. And that was - certainly we've come a long way since then. So I guess I'm curious, where do you think we are in the evolutionary pathway towards eventual or, would you say, inevitable sentience?

Dan Faggella: [00:19:04] So we've polled three dozen Ph.D.s at a clip about the emergence of self-awareness in AI on a number of occasions. The most recent bigger poll that we did on that topic had kind of the biggest lump in the bar chart - happened in, like, the 2065, kind of 2060 range. Whenever that day does come, Dave, it is sort of the grand crescendo of moral relevance. So when we do broad polls across a swath of Ph.D.s who've been in this space for, you know, as long, if not longer in some cases, than I've been on the Earth, you know, we see lumps there. You know, the coming 50 years maybe - this is sort of a potentially reasonable supposition.

Dave Bittner: [00:19:43] Pardon, I don't know if this is a naive question. But when that moment comes, will we know?

Dan Faggella: [00:19:49] Yeah. That's not a naive question by any means, Dave. And it's a perfectly reasonable question. I will be frank. I think that it is a screaming shame. I put it way up there on the furthest distal issues with the human condition that we don't firmly understand sentience enough in terms of what it is, what constitutes it, how it emerges. Here's the deal, man. Here's the deal. Things aren't morally relevant unless they're aware of themselves. If you break your computer right now, just shatter it on your knee, that's going to be kind of annoying because someone worked hard to build that, 'cause you're going to have to go somewhere and get a new one but whatever. You just go recycle it.

Dan Faggella: [00:20:24] But if you do that with a dog, you will be fined and fined maybe a lot of money and maybe, you know, be relegated to have to do some therapy or something. If you do that to a child, then you may just go to jail for the rest of your life. And so the more self-aware - the more rich and robust the internal experiences of an entity are, the more moral weight it has. And we don't know how that arises.

Dan Faggella: [00:20:47] So what constitutes things that are morally relevant is predicated on this ephemeral substance of which we have essentially no understanding. That by itself - I just want to cry. And I think we really do have to understand consciousness and sentience itself. And I have some reason to believe that in maybe the coming two decades, we'll chip away a little bit more and more into what it is. But you are right.

Dan Faggella: [00:21:09] We may never really get to that root or far enough to that root, and we may develop self-aware machines that are aware of themselves in ways that we just can't detect because we don't end up chipping away at that core science of what self-awareness is. I think there's nothing more important. It's a tough one. We may get to the AGI and to the self-aware AI before we know how the heck to measure it. And you are darn well right about that. I hope not, but you're right about that.

Dave Bittner: [00:21:32] So what do you suppose the implications are going to be? As these technologies continue to develop and become more sophisticated, how do you see our interactions with them changing?

Dan Faggella: [00:21:42] Stephen Wolfram, the guy behind Wolfram|Alpha, has this interesting hypothesis that there is a potential singularity-like scenario whereby humans wholeheartedly, like, give up on their own volition because they work hand in hand with systems that recommend and coax and prompt them so well. So these systems will get you up on time, will get you feeling good, will prompt you to the right action, will set the right meeting, will recommend the product that is so much better than the one that you would've guessed at randomly.

Dan Faggella: [00:22:14] Like, you're just going to be so much more satisfied with the food it orders, with the movies it suggests, with maybe the movies it creates - builds entirely new programmatically-generated films just for your preferences beyond anything you could consciously ask for but hyper tuned in to your preferences on a bunch of deep levels. And that these systems - like, people may just completely bail on volition because these systems can prompt them and coax them through the world better than they can themselves. And that that's like a potential trajectory for where we're going as a species.

Dan Faggella: [00:22:45] I'm not necessarily going to get dystopian here, but I certainly think that there's a pull in that direction. I mean, famously, you know, Facebook and these folks are kind of, you know, under fire now, for better or for worse, for their ubiquitous influence over, you know, our actions and attention and anxieties and whatever else. And I think that that'll only become more and more embedded. I think we're at least going to be aware of these influences. I think, you know, you see GDPR, and there is going to be some emphasis maybe around children and technology. I could see potential regulation around those things.

Dan Faggella: [00:23:15] But all in all in the coming years, I think we're only going to become more and more embedded until the machines are actually part of our thinking in a physical and literal sense, which would mean chips. And I think that's part of the big grand trajectory - is when that kind of meld occurs. So I kind of see a melding in the metaphorical way that we have it now only increasing, and then an eventual melding all the way into extending our cognition in the very literal embedded-with-the-neurons kind of way in, let's say, 20 years, possibly even a little bit less.

Dave Bittner: [00:23:47] That is Dan Faggella. He is from Emerj Artificial Intelligence Research.

Dave Bittner: [00:23:56] And that's the CyberWire. Thanks to all of our sponsors for making the CyberWire possible, especially our supporting sponsor, ObserveIT, the leading insider threat management platform. Learn more at observeit.com.

Dave Bittner: [00:24:09] The CyberWire podcast is proudly produced in Maryland out of the startup studios of DataTribe, where they're co-building the next generation of cybersecurity teams and technology. Our CyberWire editor is John Petrik, social media editor Jennifer Eiben, technical editor Chris Russell, executive editor Peter Kilpe. And I'm Dave Bittner. Thanks for listening.