Caveat 5.13.20
Ep 28 | 5.13.20
Privacy is a human right.
Transcript

Jules Polonetsky: More and more, we're dependent on data. More and more, our world is intermediated by technology. 

Dave Bittner: Hello, everyone. And welcome to "Caveat," the CyberWire's law and policy podcast. I'm Dave Bittner. And joining me is my co-host Ben Yelin from the University of Maryland Center for Health and Homeland Security. Hello, Ben. 

Ben Yelin: Hi, Dave. 

Dave Bittner: On this week's show, I've got the story of a facial recognition research project that claims to be able to predict criminality. Ben takes a look at privacy in the face of contact-tracing apps and the coronavirus. And later in the show, my conversation with Jules Polonetsky. He's the CEO of the Future of Privacy Forum. And we're going to be talking about how privacy is better understood as a human right. While this show covers legal topics and Ben is a lawyer, the views expressed do not constitute legal advice. For official legal advice on any of the topics we cover, please contact your attorney. And we'll be right back after a word from our sponsors. 

Dave Bittner: And now some reflections from our sponsors at KnowBe4. What's a policy? We've heard a few definitions. Some say it's an elaborate procedure designed to prevent a recurrence of a single nonrepeatable event. Others say it's a way that the suits play CYA. Still others say it's whatever happens to reside in those binders the consultants left behind them right before they presented their bill. How about a more positive approach? As KnowBe4 can tell you, better policy means better security, and getting the policies right is a big part of security. So is setting them up in ways that your people can actually follow them. We'll hear later in the show about how you might approach policy. 

Dave Bittner: And we are back. Ben, before we get into this week's stories, we've got some follow-up from a listener. This person writes in, and they say this is about the "Caveat" episode "You Don't Own Your Photos." Thanks for another great episode. Well, thank you. The question I have is... 

Ben Yelin: You're welcome. 

Dave Bittner: ...(Laughter) What if the photographer posted only a low-res copy to Instagram and made high-res copies available for a price? And let me back up here. The story we were talking about was about how Instagram's EULA basically states that they have rights to your photos, so if someone then uses Instagram's capabilities to embed your photo in another website, as the photographer, you don't have any control over that. You don't have any rights over that. So I'll continue with this listener's letter here. They continue, would the first person she licensed a high-res copy to be able to distribute that willy-nilly because she had posted a low-res copy on Instagram? In other words, does the Instagram EULA say, I give you this copy of the photo, or, I give any form of this photo? Ben, what do you think? 

Ben Yelin: So first of all, it's a great question. My read of it is according to the EULA, Instagram only retains those intellectual property rights in the photo that you post on Instagram. So any other site can embed it without you, the user, retaining those intellectual property rights. So to answer the question, I think if you posted the inferior low-resolution photo on Instagram, then Instagram would have the right to have that embedded on other sites. They would have no intellectual property rights in the higher-resolution photo that you did not post on their platform. So this might actually be useful for people who are doing professional photography. You know, it's sort of the equivalent to giving a sample at a store, which is a very pre-COVID phenomenon... 

Dave Bittner: (Laughter). 

Ben Yelin: ...Versus giving somebody, you know, the whole birthday cake. You can post a far inferior low-resolution version of the photo. 

Dave Bittner: Maybe it has a big watermark on it. 

Ben Yelin: Exactly, the 200-by-400 version, and be like, look; if you actually want my best version of my own photography, you know, please give me your PayPal information. 

Dave Bittner: Interesting. 

Ben Yelin: So that would be my interpretation. But the point is just that Instagram only has rights to what you post on their site, not any other version of that photo. 

Dave Bittner: It's not the essence of the photo. It is the physical arrangement of those particular bits that you have sent to Instagram itself. 

Ben Yelin: Exactly, 'cause you could see, you know, why there might be a slippery slope there. If, you know, the rights extended to any version of a photo, what if you took successive photos of the same thing and you posted one of them on Instagram? Would Instagram retain rights in the rest of those photographs? You went to a beautiful location in the Grand Canyon and took 20 pictures and posted one of them on Instagram. Does Instagram retain the right to the other 19? I think almost certainly they do not. 

Dave Bittner: OK. All right. Interesting. Well, thanks to our listener for sending that in - a good follow-up question. Let's get to our stories. Ben, why don't you kick things off for us this week? 

Ben Yelin: Sure. So we are, of course, still in the midst of a global pandemic, and it's kind of worth stepping back once in a while to examine the privacy implications of potential solutions to get us out of this pandemic, and that largely relates to applications that do contact tracing. So my article this week comes from The Washington Post. It says "U.S. Gears Up For Privacy Debate as Coronavirus Phone Monitoring Expands Globally." The beginning of the article is an anecdote from a left-leaning individual from Israel who's been a longtime opponent of pervasive government surveillance. And they asked her about an application in Israel that does contact tracing by sending somebody a notification if they come into contact with someone who's tested positive for the coronavirus. And that's done through Bluetooth. And this person's reaction was - you know what? - I'm fine with that. I'm usually very wary of pervasive electronic surveillance, but desperate times call for desperate measures. 

Ben Yelin: And that sort of sets the scene about where we are in the United States. Obviously, the situation has gotten rather grim. We have a lot of cases. We have a lot of fatalities. Apple and Google have developed an application - we talked about that on a previous episode of our podcast - that potentially would allow for this type of contact tracing. It would be voluntary. But you know, it's something that potentially could help us get out of this problem. But there are a couple of limiting factors here. According to some polling, only a minority of individuals in the United States say they would download that voluntary app. And if adoption falls below 60%, then experts estimate that the application would not be effective 'cause it just would not be capturing enough cases. 

Ben Yelin: Not to mention - there are a lot of people who still do not own smartphones in this country. And many of those people are the type of people who are particularly vulnerable to this virus - so older individuals, people with preexisting conditions. So those are a couple of potential pitfalls of this program. And then there are, of course, privacy concerns. There have been some lawmakers on the Republican side in the United States Senate who are introducing privacy legislation related to these potential applications. It would draw limits around personal health, geolocation and proximity data, protecting that information for the users. 

Ben Yelin: Democrats and also some privacy groups have expressed some opposition to this legislation for a couple of reasons. One, it doesn't have any sort of enforcement about people falsely posting that they're COVID-positive, even if they have not actually received a test. That could lead to a lot of different false reports. And then very importantly, it would preempt all state privacy laws, which is particularly dangerous because, as we know and as we've talked about, some states have digital privacy laws that are much stronger than what the federal government has right now. 

Ben Yelin: So that's sort of the lay of the land. The polling data they referenced is from a poll that was taken very recently. And it said that nearly 3 in 5 Americans are unwilling or unable to use a potential contact tracing application. And so we're right at that threshold where it probably would not be effective in the first place. So to answer the article's question, you know, for a number of reasons, I'm not sure we're ready to dive in and have this phone monitoring work in our country as it's worked in other countries across the globe. 

Dave Bittner: Yeah. And that's really a fascinating part of this - isn't it? - that you have other nations who are able to say to their citizens, you will do this. And (laughter) - you will install this app on your phone, or you will allow us to trace you this way. And from a public health point of view, you can imagine how that could help them get on top of this thing in a better way than countries like ours where, because of our values, we say these things have to be opt-in. 

Ben Yelin: Yeah. So you know, there are a couple of ways of looking at this. There's the China approach. And we've talked about this, also. China is not exactly an example of - a shining example of respecting civil liberties and privacy. The government assigns citizens a smartphone code based on their likelihood of exposure, and that dictates their ability to move about the country - at least that's how it worked during the months-long Wuhan lockdown. That's just not going to happen in this country. It's against our political culture. 

Ben Yelin: But then there are other countries - small-D democratic countries - where you are seeing the introduction of this technology. We've seen it in various European countries. This article talks about Israel. It is happening all over the world. And maybe it's because their political culture is different, or because, you know, we are a more diverse country - we're not as homogeneous as, say, some of the northern European social democracies. 

Dave Bittner: Right. 

Ben Yelin: So we might be more prone to this type of skepticism and political conflict. But for whatever reason, you know, we're facing our own obstacles on this, even though other countries have been able to work something out and they've done so effectively. The countries that have introduced any type of contact tracing - but specifically using applications - have done a better job than we have at suppressing the virus. 

Ben Yelin: So you know, I think in an ideal world, we would use every tool at our disposal. And luckily, you know, two of our biggest technology companies have given us a potential tool. But you know, it's important for us to step back and be like, is this really going to work in our country and why not? And I think what this article is getting at is there are a bunch of reasons - people's distrust of the government collecting data, the small percentage of people who don't own smart devices - that would really be limiting factors in this technology helping to eradicate the epidemic. 

Dave Bittner: Yeah. It's really fascinating to think about. I can't help wondering if this is a situation where, if you had a message coming down from the top - you know, a leadership type of thing of saying, hey, everyone, you know this is - you know, looking back to, you know, buy bonds - right? You know... 

Ben Yelin: Yep. Buy your war bonds. Support our - yeah. Exactly. 

Dave Bittner: Right. Yeah, yeah. Install the contact tracing app. It's - you know, take one for the team. For your nation, we need you - to beat this pandemic, we all need to work together and, you know, maybe stretch our comfort zone here. Unfortunately, it doesn't seem like there's a place for that - where that's going to come from given the current situation and the type of leadership we're seeing at the federal level. 

Ben Yelin: Yeah. I mean, that's just not happening. Now, the CDC put out a document that outlines criteria for these digital contact tracing tools. And that's guidance to public health departments, other local entities to determine which of these applications are safe to use that will protect user privacy. So that is coming from the federal government. But you're right that there's no real leadership coming from the top persuading Americans that it is in our best interest for this limited amount of time to do contact tracing. And I don't see it happening. 

Ben Yelin: I mean, you look at something like - something as simple as wearing masks. It is not a panacea. It's not going to solve this epidemic. But there is evidence that if everybody wore a mask we could significantly lower the transmission of the virus. And certain states like ours in Maryland have actually mandated that people wear masks - masks or face coverings, that is - at grocery stores or public transportation. You know, the message from the federal government has been very mixed on that question, that seemingly obvious question. At first, they told us we didn't need face coverings. We should leave them for medical professionals. But that was revised. And at the beginning of April, they told us you should wear them if you can't practice social distancing. But then, you know, we've seen some of our leaders at the federal level sort of explicitly or implicitly downplay the importance of wearing masks. 

Ben Yelin: So, for example, the president at an event yesterday in Arizona went to a mask-making factory but was not wearing a mask himself and has not really spoken, you know, compellingly about the need for all of us to make that small sacrifice. And that's a much lower ask, in my opinion, than telling everybody to download an application that the government is going to use to help track the spread of the virus. So I don't know if, you know, that type of federal leadership would actually work. 
Ben Yelin: You know, as I said, we are distrustful of what our government tells us. And maybe some of that distrust is necessary. But, you know, I'd certainly be interested in seeing them try. We're just not seeing that at the state level, either. So it's not just the federal government that isn't really giving us direction. It's states and localities, as well. 

Dave Bittner: All right. Well, it's an interesting article. And it's certainly something worth pondering. We'll have a link in the show notes, of course. Moving on to my story this week, Ben, every now and then, we get one that is a real head-scratcher. And this, I would say, is one of those. 

Ben Yelin: Fits the bill, yup. 

Dave Bittner: It does. It takes the form of a press release from the Harrisburg University of Science and Technology, which is a real institution of higher learning. 

Ben Yelin: We both Googled it just to make sure. 

Dave Bittner: (Laughter) And when I describe what's going on here, dear listener, you will understand why. The title of their press release is "Facial Recognition Software Predicts Criminality." 

Ben Yelin: So that is the actual full headline. That's not us, you know, shortening the headline to narrow it down to its essence. That's literally what the headline says, which was sort of hard for either of us to believe when we saw it. 

Dave Bittner: I checked this to make sure that it hadn't been released on April 1. I checked to make sure that Harrisburg University wasn't some sort of parody site. I've looked up the names of the professors and the researchers listed in here. They are real people. They have real resumes. They've done real work. Let me read on some of the things that this press release claims. It says a group of Harrisburg University professors and a Ph.D. student have developed automated computer facial recognition software capable of predicting whether someone is likely to be a criminal. With 80% accuracy and with no racial bias, the software can predict if someone is a criminal based solely on a picture of their face. The software is intended to help law enforcement prevent crime. Ben, before I go on... 

(LAUGHTER) 

Ben Yelin: Yeah, I'm just sort of - it's all kind of sinking in for me. I'm trying to process this. 

Dave Bittner: One of the researchers is quoted as saying, "we already know machine learning techniques can outperform humans on a variety of tasks related to facial recognition and emotion detection. This research indicates just how powerful these tools are by showing they can extract minute features in an image that are highly predictive of criminality. By automating the identification of potential threats without bias, our aim is to produce tools for crime prevention, law enforcement and military applications that are less impacted by implicit biases and emotional responses. Our next step is finding strategic partners to advance this mission," to which I submit, good luck with that. 

Ben Yelin: Yes. 

Dave Bittner: (Laughter). OK. So let's just start at some high-level things here. The ability to determine whether or not someone is a criminal by the look of them. Ben (laughter)? 

Ben Yelin: I have never seen any research indicating a correlation between certain facial features and somebody's proclivity for criminality. Even if there were a correlation, I mean, bringing that to law enforcement would be highly problematic because you would, you know, get information on a person not based on their past actions, based on their actual criminality, but you would perhaps put a watchful eye on them because of a facial feature. That seems to me to be almost the definition of bias. And then they assert that they can do this with an 80% level of confidence. I know 80% sounds, you know, high in the grand scheme of things. To secure a conviction in a criminal court, you have to be far greater than 80% sure that a person has actually committed the crime. That's the nature of the beyond a reasonable doubt test. Eighty percent really does not seem like that high of a threshold to me. I would be more impressed if they said something like 99%. And then they said that this is not subject to racial bias, which - you know, they don't really offer any evidence for that claim. The first thing I saw when I saw the headline to this was, this is an absolute nightmare in terms of introducing racial biases. We already know that, just like humans, artificial intelligence systems are racially biased. And I have no reason to believe that this technology, which is identifying particular facial features, wouldn't make, you know, implicit assumptions based on the person's race. So yeah, to say the least, it's highly problematic. And their justification is, well, you know, crime is a problem. So any tool that we can use to help address that problem we're willing to make available to our strategic partners in law enforcement. Yes, crime is a problem. But just like with all problems, there are some solutions that almost supersede the scope of the problem. And that's kind of my interpretation here. 

Dave Bittner: Let me try to play devil's advocate here. And let's walk through a possible analogy. 

Ben Yelin: Sure. 

Dave Bittner: So let's imagine a security person at a department store whose job it is to sit behind the wall of monitors that are monitoring all of the cameras at that department store. And this person is a grizzled, old veteran who's been sitting there for years and years and years. And I would bet that if you asked that person, this person would be able to say that they could tell if they had a problem individual in their store by the look of them, by the way they carried themselves, by the way they were moving around the store, by - in other words, they would be naturally doing some sort of filtering process for who they should watch and who they should not. And I don't know what we think about that (laughter), as to whether or not that's... 

Ben Yelin: Yeah, I'm not sure I like that, frankly. 

Dave Bittner: But I think it's a reality - right? - I think it's a reality, right? 

Ben Yelin: It is, yeah. 

Dave Bittner: I think it's a natural thing for someone to do. Isn't this an automated version of that? 

Ben Yelin: I would say it's not because in that case, you'd be monitoring for certain behavioral tics - you know, if somebody looked nervous, if their hands were clammy, if they were looking around, if they're scoping out security cameras, that could indicate a reasonable level of suspicion that that person would be committing a crime. So it depends on what you mean. You know, if that security guard was saying by the look of the person, my guess is they'd probably be talking more about that person's behavior, not the look on - you know, what the shape of their face looks like. I mean, I think we'd all laugh at a human being who told us, I can tell, you know, that that person is a criminal because of where the wrinkles are located on their face. And if a person... 

Dave Bittner: And the shape of their skull. 

Ben Yelin: Exactly. 

Dave Bittner: (Laughter). 

Ben Yelin: You know, so if a person told me, you know, I'm a security guard at a mall, and I can tell just by looking at a person whether they're going to be a criminal, the first thing that would come to my head is, wow, that person might have some racial biases (laughter). 

Dave Bittner: Right. 

Ben Yelin: And, you know, that's - that would just be such a natural reaction because we are literally judging people by their appearance. That seems to be at a very base level of, you know, the impetus behind racial bias. So I would be very skeptical of a department store security guard who did that to the same extent that I'd be skeptical of any artificial system that could do that. 

Dave Bittner: Also, can we touch on this notion of predicting crime - of interfering with folks before they commit crimes? I mean, there's a fundamental problem with that in our system, right? I mean, we don't - that's not how we do things. 

Ben Yelin: It is not how we do things. So then, you know, the next step is let's say they were able to find some strategic partners who were interested in this technology. How would those partners use this? Would it be for predictive policing, where they would, you know, put more cops in certain neighborhoods? We do have predictive policing technology. Generally, it's based on data. You know, sometimes, the input of that data could be questionable. And it can itself be reflective of biases, but at least it is data. It's not the contours of somebody's face. 

Dave Bittner: What about sex offenders, where someone has to register? Neighbors can be alerted if there is a sex offender in their neighborhood. And to me, I mean, that's a case where someone may have committed a crime. They may have paid their debt to society by either spending time in jail or paying a fine or doing whatever work the judge has said that they need to do. But yet they still have this mark on them because my understanding is that's an area where there's a lot of repeated crimes over the period of someone's life. There's a high likelihood of that. 

Ben Yelin: Yeah. And as you say, that's what the data reflects. The data reflects that people who commit those types of sex crimes are likely to commit them again. I've seen plenty of episodes of "Law & Order: Special Victims Unit" to be well aware of that statistic. 

Dave Bittner: (Laughter). 

Ben Yelin: So at least that's based on somebody's past behavior. Yes, they've paid their debt to society. But we have statistical evidence that people who have been convicted of sex crimes are more likely to commit them in the future. There's some proven link there. If there is a proven link here between an innate quality of an individual, which is the contours of their face, and that person's proclivity to commit crimes, I would need to - I mean, I think all of us would need to sort of look under the hood. How did they compile that data? Because if it's garbage in in terms of the data that's being inputted, it's going to be garbage out. And if there's a false correlation, you're going to get a lot of false positives. And no matter how a law enforcement agency uses this information, that could potentially lead to false arrests and false accusations. So it's just - it's very disturbing to me. I had sort of the same reaction you did when I came across this: Come on. Like, is this real? Could this possibly be real? 

Dave Bittner: What if it is? What if it turns out that these folks have come up with something new and they are able to predict with 80% accuracy whether or not someone's likely to commit a crime? What then? 

Ben Yelin: I mean, that's a great question. I still think that technology would not be worth using because it would put a watchful eye on individuals who have done absolutely nothing wrong. There's no suspicion that they have committed a crime. There's no suspicion that they would commit a crime. It's simply putting a mark of suspicion on somebody based on an innate quality for which they do not have control. And I think from a moral perspective, that's wrong. 

Ben Yelin: I hesitate to use this example, and it's going to sound rather crass. But if you had a statistic saying a certain racial group or a certain subgroup demographic group is more likely to commit crimes and, you know - so the cop on the street will say, well, if there is a such-and-such person in this neighborhood, it's more likely than not that they're a criminal. 

Dave Bittner: Right. 

Ben Yelin: I think we'd all have moral problems with that. So to me, this sort of presents the same types of issues. Even if it were a successful predictive tool, I would have an enormous difficulty, both ethically and from a practical sense, having law enforcement use this technology. And I'm pretty sure most people would feel the same way. 

Dave Bittner: Yeah. It's an interesting one. It'll be one we'll certainly have to keep an eye on to see how they progress here, see if they release more information as they publish and go through peer review - see what happens. 

Ben Yelin: Yeah. Apparently, the research is going to appear in a Springer Nature research book series - "Transactions on Computational Science and Computational Intelligence." I'd certainly like to read that to look under the hood. And I just hope the contours of our faces, Dave, do not indicate that we have a proclivity to commit crimes. 

Dave Bittner: (Laughter). 

Ben Yelin: I'm hopeful. 

Dave Bittner: I know. You and I know we both have that look about us, don't we? 

Ben Yelin: We sure do. We sure do. 

Dave Bittner: Hey, everybody. It's Dave here with a quick interruption and update. Since we recorded this episode, the folks at Harrisburg University have taken down the tweet and the announcement about this program. So we'll be keeping an eye on that. Normally, we have links to the project. And at the moment, they do not exist. So this is a developing story. And of course, we will follow up as things develop. Back to the show. 

Dave Bittner: It's time to move on to our Listener on the Line. 

(SOUNDBITE OF PHONE DIALING) 

Dave Bittner: Our Listener on the Line this week, his name is Paul (ph). And he writes in - he says, I'm a security operations manager in San Antonio, Texas. Have you discussed the Electronic Communications Privacy Act of 1986? In short, law enforcement can ask for any stored information, including emails that reside on a remote server - think cloud or Gmail - that is over 180 days old by sending a request, no warrant needed. The data is considered abandoned under the law and treated like trash left on the side of the road to be picked up by the waste disposal company. 

Dave Bittner: Ben, what's your take here? It's - this is one I think probably isn't on most people's radar. 

Ben Yelin: Yeah. So it's an excellent question. This relates to the 1986 law he references, the Electronic Communications Privacy Act. One of the elements of that law is something called the Stored Communications Act. Now, 1986 - which, not to age myself, that happens to be the year I was born - is a rather long time ago at this point. It's been 34 years. And the law has really not been updated since then. And the person who wrote in this question is correct - that the letter of the law says that any stored communication over 180 days old, law enforcement does not need to obtain a traditional warrant to gain access to that information. That obviously seems very outdated in an age of cloud computing, where most of us retain all of our emails. Google's servers are large. Those of us who... 

Dave Bittner: Right. 

Ben Yelin: ...Use Gmail, you know, we're probably talking about millions of emails dating back, you know, 10 or more years. 

Dave Bittner: Sure. 

Ben Yelin: So that standard seems greatly outdated. And a prominent court, the Sixth Circuit Court of Appeals, a federal court, has agreed with us and said that that standard is outdated. The case was United States v. Warshak. And it was actually decided around the time that the news article this person referenced in their question came out. So it was late 2010. 

Ben Yelin: That case held that a person does have a reasonable expectation of privacy in the content of one's stored communications - of one's emails. That means, according to that case, no matter how long that communication has been stored, the government needs a warrant - a traditional warrant based on probable cause - to extract that email from the service provider. 

Ben Yelin: Now, technically, that case only has applicability in the Sixth Circuit. I will note that most other courts across the country have extended the reasoning of that Warshak case. What Congress has tried to do numerous times is enact something called the Email Privacy Act. That law would take the Warshak case and make it the law of the land. So in every judicial circuit in the country, the government would need to obtain a warrant to obtain somebody's stored communications. The most recent version of this bill actually has passed the House of Representatives in the current session of Congress. It was passed as part of a larger intelligence policy bill. It has not passed the Senate. And this bill has been introduced a number of times. It has pretty broad bipartisan support. I think, hopefully for a lot of us, it's just a matter of time before that Warshak standard is applied nationwide. And I think, you know, the best way to look at it is the government does not need a warrant to obtain routing information. So the metadata - you know, when that email was sent, to which address it was sent. And that's the way it's always been with traditional mail. You don't need a warrant to obtain the addressing or routing information on the outside of an envelope. But the government has always needed a warrant to obtain the information inside of that envelope, the private message that I'm sending you. And so I think that's sort of the rationale of the Warshak case and certainly the rationale that Congress pretty clearly believes in if, you know, this bipartisan support of the Email Privacy Act is any indication. 

Dave Bittner: All right. Interesting stuff. Did you have to look that up, or did you have that off the top of your head? 

Ben Yelin: I looked some of it up. 

Dave Bittner: (Laughter). 

Ben Yelin: I was not sure whether the Email Privacy Act had actually passed. Like, I obviously know about the Warshak case. 

Dave Bittner: Right, right. 

Ben Yelin: I did not know until today that it had actually passed the House this session. 

Dave Bittner: I see. All right, professor (laughter). Well, thanks to our listener Paul for sending that in - great question and really nice to be able to provide some clarity on that. 

Ben Yelin: Absolutely. Thanks, Paul. 

Dave Bittner: Of course, we would love to hear from you. Our "Caveat" call-in number is 410-618-3720. You can leave a message there, and we will answer it on the air. You can also send us an email. It's caveat@thecyberwire.com. Coming up next - my conversation with Jules Polonetsky. He is the CEO of the Future of Privacy Forum. We're going to be talking about how privacy could be better understood as a human right and some of the privacy risks that he thinks are going to be growing in prominence in the days ahead. 

Dave Bittner: But first, a word from our sponsors. And now we return to our sponsor's point about policy. KnowBe4 will tell you that where there are humans cooperating to get work done, there you need a common set of ground rules to ensure that the mission is accomplished but in the right way. That's the role of policy. KnowBe4's deep understanding of the human dimension of security can help you develop the right policies and help you train your people to follow them. But there's always a question of showing that your policies are not only sound but that they're also distributed, posted and implemented. That's where the Policy Management module of their KCM platform comes in. It will enable your organization to automate its policy management workflows in a way that's clear, consistent and effective. Not only that, KCM does the job at half the cost in half the time. It's your policy, after all. Implement it in a user-friendly, frictionless way. Go to kb4.com/kcm and check out their innovative GRC platform. That's kb4.com/kcm. And we thank KnowBe4 for sponsoring our show. 

Dave Bittner: And we are back. I recently had the pleasure of speaking with Jules Polonetsky, CEO of the Future of Privacy Forum. We covered a lot of different topics about privacy, some of the things that are on his radar in the days ahead. Here's my conversation with Jules Polonetsky. 

Jules Polonetsky: I started the Future of Privacy Forum about 11 years ago after serving as the chief privacy officer at AOL, at DoubleClick, you know, part of Google, as a congressional staffer, a state legislator, the consumer affairs commissioner for New York City. During all those roles, I thought there was a gap in the privacy debate. There were trade groups who supported their industry's interests - that's their mission - often had to have, you know, a lot of consensus before moving forward. And there were civil society and advocacy groups who advocated and litigated and criticized because, well, that was their mission. There weren't as many people in the middle who were, I'd say, optimistic about tech and data, really believing that we're getting new services, we're advancing research that can be helpful. But, indeed, you will do bad things, and you will create a scary Orwellian society or discriminate or expose if you don't actually really work hard to figure out what are the safeguards? What are the rules? So our goal when I started the organization was to create a place that focused on what's coming next, what are the new technologies, where we don't have the rules yet. Maybe we don't have a law and it's not clear what's good practice. Maybe we have a law or need a law, but how do you actually do it in a way that is effective that lets you support the socially beneficial things and deter the negative things? And I didn't think that there was enough activity across sectors because nobody has the full truth there. Advocates are smart. They're worried. They're concerned. They may not know the full picture. Industry knows what it knows, very often sits in its place and can miss important issues that other stakeholders have. Government does its best, but government works at a certain pace and can have its challenges. Academics work in a different way and live in their particular disciplines. 
There wasn't, I thought, enough activity that brought all of those players together to argue it out, to learn from each other, to debate, to disagree. So here we are 11 years later. We've got about 200 or so companies involved, typically the chief privacy officers of the world, the senior people. And when I say companies, I certainly mean, you know, the big platforms. I mean startups. I mean retail. I mean making. But I also mean the leaders at cities who are trying to advance smart city projects, do better at delivering services to communities. I'm thinking of ed tech companies providing services to schools but also school districts and state boards of education - pretty much anybody who is interested in how we use data and what are the social, ethical, legal policy implications. How do I follow the laws? What should the laws be? How do I make sure I'm not being creepy? How do I set norms? How do I work with the different stakeholders? So as you can imagine, we're in the midst now of the - I hope the beginning of the end of the COVID crisis. Maybe we're still at the beginning. And the questions we're struggling with now are the questions that privacy commissioners and companies and everybody is struggling with. How can I help? How can I use the data I have - location data, browsing data, ISP data - what can I do to help without making the problem worse by exposing privacy, by violating rules? So that's who we are. 

Dave Bittner: It strikes me that certainly in the time that you've been at this, so over the last 11 years, the status of privacy has really been elevated in the conversation. It's really come to the fore. 

Jules Polonetsky: One thing I would love to say is that it's never really been about privacy. Well, let me qualify that. A little bit of it is about privacy, right? I mean, we don't want to get emails we don't want. We don't want to post the picture and then find that the wrong person saw it or that it got shared. It certainly is to some degree about exposing our information. Do we want others to see it? Do we want companies to have it? But that's a very small picture. The bigger picture - and the Europeans are perhaps more effective in talking about this - you know, this is data protection which is intended to support a wide range of rights and freedoms - speech, freedom of association, equity, discrimination, bias, power, right? Somebody has data, they've got a lot of power. Now, do I trust them to have that power? Maybe I do, right? In a time of crisis, I might want the government to know who is infected and they should be telling me where I should go or not go. I may want them to know where the disease is spreading. And so things that I might otherwise want held very close, I actually may now say, well, wait a second. This impacts, you know, these challenges. So I want the data to go, but, whoa, I don't want it to go everywhere. I don't want it to be used against me. I don't want to be locked up. I want to know that I'm being helped or that society is being helped. 

Jules Polonetsky: So it's true that more and more we are dependent on data. More and more, our world is intermediated by technology. And so it shouldn't be a surprise that the - you know, the issues that have always been issues in the world - the power of government, the power of big companies, how we make decisions about each other - and then lots of new decisions - right? - self-driving cars and filter bubbles about the content we see. So there's certainly a whole host of new ways, but it's an old story, right? It's who's in power? Do I trust them? How do I interact with other people in my community, in my village, in my city? It's more dependent on data than before, but it's the same debates and the same questions. What's happened is that I think what were esoteric questions about cookies and tracking are now, like, who's actually in charge of our democracy? Are the Russians, you know, fooling us with their ads? Are the platforms shaping what we see and believe so that - and we're never going to solve those things, right? I mean, we've been battling those questions for generations. These are things that, like, the Constitution determines. And suddenly, the way a particular technology company structures itself is wired right into that exact question. 

Dave Bittner: Over the course of time, over the course of history, do these things swing back and forth like a pendulum? The balance between individuals' protections and how they view privacy and so forth, are there cycles to these things? 

Jules Polonetsky: Yes and no. So, obviously, let's just take a look at the swing even just during my career. When I started out, I was at DoubleClick and the big concern was cookies and ad banners and personalization. And that was new, and it was concerning who would know what sites you went to. And was it too creepy that ads were tracking you? What should the rules be? And then the ad market crashed, and all the things that these companies wanted to do weren't going to happen or not so quickly. It's all happening today. And then 9/11 came and debating cookies seemed really trivial because now we were debating, why didn't the government know enough to put these pieces together? Like, there was information about people, all right, learning to fly planes. There was information about people's backgrounds and people coming into the country. How stupid is the bureaucracy that these things all exist, but they somehow were not connected? And so we spent a lot of time working through the technical ability to scoop all that up and put authorities in place that would allow that. And then we spent the next number of years reacting to, wait. Did the NSA go too far? Or did they even go further than we thought? Did they interpret their authorities in a way that lets them bulk collect all the information? And then the Snowden revelations, you know, surprised many people with the extent of the way the NSA had interpreted some of these authorities. And was it an intrusion? And then 9/11, although it's certainly always going to be in the memory of many of us who, you know, saw the towers fall and the like - but it's a few years in history. And it's receded from the day-to-day debate. And guess what? The companies are back. And now they're even more essential to our lives. It's not just about ad banners, right? It's about social media, which wasn't as big a factor back then. It's about smart speakers in my home. And so all of a sudden, it's about self-driving cars. 
All of a sudden, biometrics, DNA is being analyzed. And, you know, small commercial companies have detailed insight into my DNA. So all of a sudden, it's back. And it's actually happening. And it's trivial to think of how DoubleClick had cookies. Well, now everybody knows everything about you the minute you're online and is auctioning and competing to bid for your data to try to personalize an ad and get your attention. And so that definitely is a cycle and a swing, I think, back and forth. But I think the issues and the structure - you know, the reason I've been able to do this for 20 years is it's the same set of questions over and over and over again, right? What's being collected? Do people understand? Is it a surprise? Is someone going beyond what is the norm? Is the norm in the industry out of sync with what actual people understand? And are we all shocked when we learn the extent of location tracking? Are there safeguards? Are there limits? Are there restrictions? Is the data deleted? Does it go to the government? Does it go to everybody, where I don't have any control of it? Can I do something about it? What actually is being harmed? Do I only care about whether I have financial harm, identity theft - someone's being hurt - someone's being stalked? Do I care about more ephemeral or more esoteric harms - right? - that I'm in a filter bubble, and someone's deciding all the information I see? Do I care that I have some sense that I'm being watched, and maybe I don't search for things that I'm interested in? So when I say, you know, ephemeral, esoteric, I just mean maybe not immediate physical or financial harm. Are there emotional or intellectual or societal, you know, impacts? Same set of questions. The attention swings for sure. Look. We're debating now the access by government or others to location data, to be able to trace quarantine and to understand the movement of - the spread of the pandemic. 
And I've answered question after question from media and others - are we risking weakening our privacy standards because of this? And my answer is it's not a weakening of any privacy standard. Privacy or data protection always recognizes that there are moral and ethical ways data can and should go. And there have to be safeguards to make sure that that goes as intended. And so there is a new context now. There is an emergency. I'm not losing privacy when I'm at my doctor, and I take off my shirt. But in the middle of the street, I would. When I take that same garment off with my loved one, there's no privacy issue there, right? I want to be naked. 

Dave Bittner: Right. Right. 

Jules Polonetsky: But each of them can even mess that context up, right? If I'm at a nude beach, well, that's not really a privacy problem, right? If my doctor has the door open and someone peeks in or, you know, something inappropriate happens, well, even there that's messed up. And even my loved one could take a picture of me and share it, you know, inappropriately. So context matters. Professor Helen Nissenbaum, probably the leading philosopher in privacy today, an academic, wrote a fabulous book years ago that's become a real standard called "Privacy in Context." And she helps define how what we consider, quote, unquote, "private" is always defined by the individual expectation I have. I gave you the note. I didn't expect you to do anything else with it. You've published it. You handed it on. You violated my expectation. And then societal expectations, right? Because if everyone expects certain things to always be shared, then that's not really something I should expect to be kept private. And that might be different in different cultures, different countries, different relationships, right? 

Dave Bittner: An organization like yours, like the Future of Privacy Forum - how much of the work you do is advocating for one position or another vs. facilitating conversations and bringing everyone to the table? 

Jules Polonetsky: A good chunk of our time we spend, I'd say, educating and helping. And by that, I mean core members, core supporters, core donors of ours are the chief privacy officers of, you know, a wide range of companies. And many of them are very smart and very sophisticated. But if I'm the chief privacy officer of a big bank, I probably don't work on facial recognition issues every day. But you know what? There might be some place where that's going to be used in my thing - maybe to secure my - to let me log into an app, right? I might not be an expert on deidentification. I'm probably a real expert in all the banking laws. I'm not a machine learning scientist, right? I'm a lawyer. I'm a policy person. So everyone needs to understand more deeply these adjacent technologies because your company is using it or others are using it or your partners are using it. Otherwise you can't ask the smart questions. You know, if you're sitting down with a machine-learning expert in your company and they say, hey, we're going to do this - is there any problem? - and you need to know, well, wait. Is there personal information in a model? Or wait. What question should I be asking about bias? And you know, I need to know enough to be thoughtful about what I'm doing. And not just companies - right? If I'm a regulator, what experience do I have? I don't have big teams of machine learning experts. But I'm being asked to make decisions here. Or right now regulators are being asked - hey, what are the rules around location data? Well, guess what. That's pretty complicated. Right? 

Jules Polonetsky: So we are scrambling to pull together, from both experts internally and the fact that we know a lot of the experts, right? Some - we have location companies in our camp. They know what carrier data looks like, how precise it is. They know Wi-Fi data. They know Bluetooth. They know GPS. They know - how do these things get integrated so that when my device says, here's what an app might know about location or here's how we're mapping you, what really goes into that? Because guess what - if you're an epidemiologist, you better understand how accurate or messy or phony or precise the data that is, you know, being handed to you now - hey, this can help you do your job. This can help you spread data. So we do a lot of cross-sector education. 

Jules Polonetsky: Now, does that help when it comes to policy? Yes, right? 'Cause if somebody is saying well, hey, I think there's problems around location - we got to have a regulation. So no one can collect location without express permission. And then we say, yeah. But - so every time you come to a website, you've got an IP address. And guess what. That reveals sometimes some kind of precision around location. But you can't really turn that off. Right? Oh - and by the way, you probably don't want to because it's also used for fraud and for these purposes. You actually perhaps want to define what you mean by precise. And then you probably want to allow, by default, certain uses like this, this, this and this and this. 

Jules Polonetsky: But we're typically not going to be sort of saying - and therefore, here's what you ought to do. Right? We're more likely to say, let's explain how it works - right? - both not just the technology - 'cause you can hire or find technologists and experts. It's hard to actually understand, what's the full flow, right? What happens at the computer, at the phone, at the network, the business models? Who's buying? Who's selling? Who's got what? So we certainly will have some opinions, and we do work on policy and legislation. But we're not the lobbyists. We're not the public policy people as much as trying to provide expertise to the people who are at the table coming to it by trying to actually understand well enough and being able to communicate and explain. So our site at fpf.org is full of sort of infographics about how it works. 

Dave Bittner: All right. Ben, what do you think? 

Ben Yelin: It was a really interesting interview. You know, I had sort of never thought of the privacy issue in the way that he described, where it's not necessarily about privacy, per se. Or it is, you know, in part, but it's more about some of the fundamental rights we hold dear - you know, our ability to participate in democratic society, our freedom of speech, our freedom of association. Privacy laws implicate all of those rights. And I've sort of never heard it articulated the way he articulated it. And that's kind of, you know, the main takeaway of that interview for me. 

Dave Bittner: Are you in agreement with where he's going with that line of thinking? 

Ben Yelin: I am. You know, I think privacy is sort of a nebulous concept. There is obviously some worth to privacy for privacy's sake. But a lot of it is we want to protect our more fundamental rights, our rights to meaningfully participate, to get our ideas out there without government suppressing them or you know, making them unavailable to the general public, as we've seen in authoritarian countries. 

Ben Yelin: It certainly can relate to I don't want the government to see my embarrassing pictures on Facebook, but it's so much more than that. It's about our freedom to communicate with one another in a way that we feel we can do so freely without being interrupted by an overbearing central government. I think that's a very good way of framing the privacy issue beyond what we sort of narrowly think of as I don't want my emails to be read by somebody else, you know? And he talked about that, too. Like, the issues were narrower in the past. We talked about - oh, this website has cookies. You know? 

Ben Yelin: And then when you start to think more broadly, that's when you realize the far broader implications of why we need to protect privacy. And I think, you know, to get people to care about these issues, you sort of have to talk about it in that more broad sense because I think it's - obviously, people in our community care a lot about privacy for privacy's sake. But I think, you know, to all the individuals that we've talked to who say things like - well, I've got nothing to hide. So who cares if my communications are read by a government agency? 

Dave Bittner: Right, right. 

Ben Yelin: I think explaining to them that we're dealing with larger forces here is important, and I'm glad that Jules was able to do that in this interview. 

Dave Bittner: Yeah. Well, our thanks to him for joining us, for taking the time. And, of course, we want to thank all of you for listening. 

Dave Bittner: And we want to thank this week's sponsor, KnowBe4. If you go to kb4.com/kcm, you can check out their innovative GRC platform. That's kb4.com/kcm. Request a demo and see how you can get audits done at half the cost in half the time. Our thanks to the University of Maryland Center for Health and Homeland Security for their participation. You can learn more at mdchhs.com. 

Dave Bittner: The "Caveat" podcast is proudly produced in Maryland at the startup studios of DataTribe, where they're co-building the next generation of cybersecurity teams and technologies. Our coordinating producers are Kelsea Bond and Jennifer Eiben. Our executive editor is Peter Kilpe. I'm Dave Bittner. 

Ben Yelin: And I'm Ben Yelin. 

Dave Bittner: Thanks for listening.