Dave Bittner's Son: [00:00:01:02] Dad, we have all sorts of things at Thanksgiving, we have turkey, and mashed potatoes, and stuffing, and gravy, and the whole family comes over. It's really quite a good experience. Why do you keep lying on the CyberWire?
Dave Bittner: [00:00:17:12] I'm sorry, what's the address?
Dave Bittner's Son: [00:00:19:24] Thecyberwire.com/patreon?
Dave Bittner: [00:00:23:19] No, almost, that's backwards.
Dave Bittner's Son: [00:00:24:23] Patreon.com/thecyberwire.
Dave Bittner: [00:00:27:17] That's it. All right, yeah. Extra pie for you.
Dave Bittner's Son: [00:00:30:15] Okay, good.
Dave Bittner: [00:00:35:10] Our podcast team is taking a break this week for Thanksgiving, but don't panic, we've got brand new extended interviews with interesting people lined up for you. And you can get your daily dose of cyber security news on our website, thecyberwire.com, where you can subscribe to our daily news brief and get all the latest cyber security news. Stay with us.
Dave Bittner: [00:00:59:17] A quick note about our sponsors at E8 Security. They understand the difference between a buzzword and a real solution, and they can help you disentangle them too, especially when it comes to machine learning and artificial intelligence. You can get a free white paper that explains these new, but proven technologies at e8security.com/cyberwire. We all know that human talent is as necessary to good security as it is scarce and expensive, but machine learning and artificial intelligence can help your human analysts scale to meet the challenges of today's and tomorrow's threats. They'll help you understand your choices too. Did you know that while supervised machine learning, where a human teaches the machine, might seem the best approach, in fact unsupervised machine learning can show the human something unexpected? Cut through the glare of information overload and move from data to understanding. Check out e8security.com/cyberwire and find out more. And we thank E8 for sponsoring our show.
Dave Bittner: [00:02:06:18] My guest today is Tiffany Li. She's an attorney and Resident Fellow at Yale Law School's Information Society Project. She's an expert on privacy, intellectual property, and law and policy, and her research includes legal issues involving online speech, access to information and Internet freedom. She's also co-author of the paper, Humans Forget, Machines Remember: Artificial Intelligence and the Right to be Forgotten, which will be published soon in the Computer Law & Security Review.
Tiffany Li: [00:02:36:04] The right to be forgotten, generally speaking, is this concept in EU Privacy Regulation. It's this concept that people ought to be able to request that data or information about them be removed or deleted from a website, or, say, a search engine. Now, the right to be forgotten is something that is entrenched in EU Privacy Regulation, but it's not really relevant to US law, at least not yet.
Tiffany Li: [00:03:04:09] So, recently, I co-authored a piece on artificial intelligence and the right to be forgotten. Specifically, we look at whether the right to be forgotten is applicable in artificial intelligence, or even a machine learning context. And if it is, whether it's something we should be looking at doing more or less of, and how we should look at legal standards for right to be forgotten in terms of artificial intelligence.
Dave Bittner: [00:03:29:18] So, take me through that intersection there. How does artificial intelligence intersect with the right to be forgotten?
Tiffany Li: [00:03:36:14] The right to be forgotten is very interesting to me, I think, because it deals a lot with the concept of deleting information and deleting records. With artificial intelligence, or even with advanced machine learning, and here I have to note that I'm not discussing AI in terms of, say, the Terminator Skynet AI. I'm looking at artificial intelligence in terms of advanced machine learning systems that can train themselves and develop new algorithms and new predictive results, based on data that is fed to them, or that they gather based on certain parameters that we program.
Tiffany Li: [00:04:14:00] If we look at this form of AI, and then we consider the right to be forgotten, we get into a few interesting questions that I don't believe are answered in the law right now. First of all, the law never really defines what it means to comply in terms of actually deleting a record. So, from a technological standpoint, deleting something is not as easy as you might think it is. It's not as simple as, for example, dragging a file from your desktop and, you know, throwing it into that little recycling bin icon. Deleting a file, or deleting a data record, can mean a number of things on the technical end. It could simply mean deleting the record of that data point from the system index. It could mean overwriting that data record. It could mean replacing that data record with a null value, and so on, and so forth.
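[Editor's note: the three deletion semantics Li describes can be sketched in a few lines of Python. This is purely an illustration, not code from her paper; the record store and function names are hypothetical.]

```python
# A toy record store: keys act as the "index", dicts as the stored records.
records = {
    "user42": {"name": "Alice", "email": "alice@example.com"},
    "user43": {"name": "Bob", "email": "bob@example.com"},
    "user44": {"name": "Carol", "email": "carol@example.com"},
}

def delete_from_index(store, key):
    """Remove only the index entry; any copies of the data held
    elsewhere (backups, trained models, caches) survive untouched."""
    store.pop(key, None)

def overwrite_record(store, key):
    """Destroy the stored values by overwriting each field in place."""
    if key in store:
        store[key] = {field: "\x00" * 8 for field in store[key]}

def null_out_record(store, key):
    """Replace each field with a null value, keeping the record's
    schema (and the fact that a record existed) intact."""
    if key in store:
        store[key] = {field: None for field in store[key]}

delete_from_index(records, "user42")   # record gone from the index
overwrite_record(records, "user43")    # record present, values destroyed
null_out_record(records, "user44")     # record present, values nulled
```

Each approach leaves a different residue behind, which is exactly why a law that just says "delete" is ambiguous in practice.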
Tiffany Li: [00:05:08:06] There are various ways to actually delete a record, especially in a machine learning or artificial intelligence environment, when you may have a large quantity of data and various ways that the researchers who are using that system want to treat the data. So, the first issue there is that the right to be forgotten as it currently stands in the law, and as it will be interpreted in the 2018 GDPR, does not really address this issue. You don't get a firm definition of what it means to delete information or how to really make this deletion permanent. This is problematic, because the GDPR, and EU privacy regulation in general, requires that basically every tech company that has any reach into the EU, or that reaches EU residents, has to comply with this law.
Tiffany Li: [00:06:02:03] So it's a little difficult, you can imagine, to comply with the law when the law doesn't really make sense, or isn't clear on what could happen. So that issue of deletion not being clear, or not being defined correctly, is I think definitely a problem, and it's a problem that we address in our paper.
Dave Bittner: [00:06:20:03] Now, why do you think that issue hasn't been properly addressed so far? Is it an oversight? Or do people specifically not want to address it?
Tiffany Li: [00:06:30:08] The problem of the right to be forgotten and the GDPR in addressing those new technologies really lies in the sort of gap that we have between technologists and policy makers. There is definitely a need right now for more interdisciplinary research in law and technology. I think what often happens is you get all the technologists and tech company representatives in one room, there they can discuss how to actually create products, how to develop software, how to make tech-forward solutions for problems. In an entirely separate room, not even across the hall, but in a different building sometimes, there you have the policy makers and the lawyers, who are talking about these issues on a policy or legal level, and they're looking at the same issues. They're trying to figure out privacy. They're trying to figure out online speech. They're trying to figure out what sort of future do we want for our communication systems? But they're not in the same room, and that's a problem. It's a problem that isn't specific just to GDPR or the right to be forgotten, it's a problem that we have in really all of tech policy, both here and in the EU, as well as in other nations, too.
Tiffany Li: [00:07:46:02] So, I think the first thing we have to look at when addressing these issues is if we can simply solve them by increasing interdisciplinary research and interdisciplinary collaboration between technologists and lawyers and policy makers.
Dave Bittner: [00:08:00:17] From your point of view, what can we expect to change when GDPR does kick into effect next year?
Tiffany Li: [00:08:29:08] But a lot of what you'll see, I think, will be on the back end. There will be a lot of change, and there has already been a lot of change, I personally know, within a lot of technology companies right now, just preparing for the GDPR. So this means internal policies have been drafted and re-drafted, and edited, and sent to board members. Teams have been trained and re-trained. A lot of this change is happening on the back end.
Dave Bittner: [00:08:55:11] We'll have more from Tiffany Li after this short break.
Dave Bittner: [00:09:03:08] Time to share some news from our sponsor, Cylance. Cylance has integrated its artificially intelligent CylancePROTECT engine into VirusTotal. You'll know VirusTotal as the free online service that analyzes files and URLs to identify viruses, worms, Trojans and the other kinds of badness antivirus engines and website scanners pick up. Well, Cylance has pledged to help VirusTotal in its mission of making the security industry more perceptive, and the Internet a safer place. It's like public health for cyberspace. Free tools and services help keep everyone's risk down. Cylance sees their predictive approach to security as a contribution to the fight against cyber attacks, and they're now fully integrated as one of the analysis engines available in VirusTotal. Visit cylance.com and look at their blog for more on their contribution to our online immune system. And we thank Cylance for sponsoring our show.
Dave Bittner: [00:10:03:02] Are the Europeans taking a leadership role on this, that we expect to then flow through the rest of the world? It's commonly said about, particularly the Americans, that we will always trade security for convenience. And I think the Europeans have a different attitude to privacy than we do.
Tiffany Li: [00:10:26:11] I have a few different things to say about the points you just made. So, first, I definitely don't agree that the US would be willing to trade security for convenience. I think that is way overstating the current state of affairs. Americans do care about privacy, and I would argue that American tech companies do care about privacy too. Of course, they also want to innovate, and they also want to hit their bottom-line goals. But privacy is important within the industry and for consumers.
Tiffany Li: [00:10:58:09] As for your point about the EU possibly leading in privacy generally, I think there are some interesting thoughts on that that you'll hear from people in the US, in the EU and internationally. So, as you can expect, most EU scholars and policy makers and practitioners that I've talked to, who work in technology, believe that the EU is leading the way in privacy, and many of them believe that this is a good thing. That they are kind of raising the bar for the rest of us, making privacy an important goal and a human right that we all have to protect. That is definitely one view, and I can definitely see arguments for that, specifically because EU Privacy Regulation does target literally every company in the entire world that operates in the Internet or technology space. So, sure, in that sense, they're setting a bar and they're setting a standard.
Tiffany Li: [00:11:54:05] There is also, though, I think, another view, which isn't necessarily mutually exclusive with that EU-forward, positive privacy idea. And it's this concept that if the EU is leading in privacy, and the EU is making basically every other company in every other country follow them, you sort of get into what lawyers call a jurisdictional issue. It sort of sounds like the EU is trying to legislate for the entire world. Because, right now, every tech company in the world that has any EU customers, any EU residents, has to follow EU privacy regulation. So, if you believe this is a good thing, sure, fine, that's definitely happening and it'll happen more. But it's a little concerning to me that one country or region can decide the laws for every other country in the world. And I think this is concerning, because if you think about the EU doing something, I would say many of us in what people call the western world might not really care that much. We might think, sure, privacy is great, the EU usually has values that we agree with, this is probably not a huge deal. But, what happens then is that you get this sort of precedent that's set.
Tiffany Li: [00:13:16:09] Now that we know the EU can make law that affects technology companies around the world, what does this mean for other countries? For example, China, or Russia, or countries that have values that may be a little different than what we see in the US or the EU. I think there is a significant danger in seeing this sort of legislative jurisdictional creep. I can definitely see the possibility for some countries to argue that they should be able to do the same thing. That they should be able to create their own laws and have tech companies around the world follow their laws. And there are two problems with this, I think. The first problem is, it's very difficult already for tech companies to comply with laws from basically everywhere around the world. It's hard for a tech company to comply with hate speech laws in Germany, free speech laws in the US, and political speech laws in China. Those are three entirely different paradigms and they have to comply with all of them, technically.
Tiffany Li: [00:14:18:15] If you take this sort of EU Privacy Standard example, and you push it forward, you also get into the issue of companies having to comply with value systems that maybe we don't want them to comply with. I think we take for granted that privacy and free speech and all those great democratic values are universal values, when they are definitely not.
Dave Bittner: [00:14:40:17] Haven't we seen that with Apple? Where, because of how much Apple relies on China, both as a market but also for manufacturing, that Apple has given into some demands from the Chinese government, that perhaps they would rather not have to give into?
Tiffany Li: [00:14:58:13] We've seen that happen. And different countries have created laws that specifically require that to happen. China's cyber security law could potentially affect tech companies outside of China. Russia's data localization law affects tech companies outside of Russia, and requires them to have data centers located within Russia to keep any data on Russian citizens. So, these are just a few of the many examples of ways in which countries are trying to get mostly US based tech companies to comply with their local laws.
Dave Bittner: [00:15:33:15] So, how do you see this playing out? When it comes to policy on things like artificial intelligence, the right to be forgotten and privacy, as different cultures around the world, as different countries with different values interact, what's the push and pull going to be going forward? Do you have any sense for what the natural evolution of this is going to be?
Tiffany Li: [00:15:58:04] I think we're at an interesting time right now. We're at an interesting time, because these tech companies are no longer young and brand new, and they are facing regulation. They're facing regulation in the US, in the EU, internationally, and this sort of conflict of laws, this conflict of the laws of different countries, is affecting them and hitting their bottom line because it's hard to comply. It costs a lot of resources to comply with all these different laws. So what I see moving forward is most likely there will be some sort of work towards more internationally agreed upon standards, or at least regionally agreed upon standards. For example, the APEC region actually does have some sort of privacy guidelines, and they have been in play for many years. I can see something similar, we could have, you know, similar trade agreements. We could have a NAFTA for privacy regulations, for example.
Tiffany Li: [00:16:55:23] Some people even posit that there might be some sort of international law for the Internet, eventually, and I can see that happening. That could solve some problems. I do think, though, that right now we're in this really interesting space, where if you work in technology, or if you work in law or policy related to tech, you really have an opportunity to change how the entire world might see technology in the future. So, it's a huge opportunity, but it's also, of course, filled with many risks and a lot of room for things to possibly go wrong.
Dave Bittner: [00:17:32:01] Our thanks to Tiffany Li for joining us. Again, the research paper she co-authored is Humans Forget, Machines Remember: Artificial Intelligence and the Right to be Forgotten. And you can find it online.
Dave Bittner: [00:17:45:24] And that's the CyberWire. Thanks to all of our sponsors for making the CyberWire possible, especially to our sustaining sponsor, Cylance. To find out how Cylance can help protect you using artificial intelligence, visit cylance.com.
Dave Bittner: [00:17:58:22] The CyberWire Podcast is produced by Pratt Street Media. Our editor is John Petrik, social media editor is Jennifer Eiben, technical editor is Chris Russell, executive editor is Peter Kilpe, and I'm Dave Bittner. If you're celebrating Thanksgiving, we hope it's a good one for you. We're thankful that all of you listen to what we do, and find what we do valuable. On behalf of everyone here at The CyberWire, thanks for listening.