The attempted insurrection, Congressional cybersecurity, and data ethics.
Dennis Hirsch: That's what data ethics is. It is essentially beyond compliance risk mitigation for the algorithmic economy. It's corporate social responsibility, if you will, for the algorithmic economy.
Dave Bittner: Hello, everyone. And welcome to "Caveat," the CyberWire's law and policy podcast. I'm Dave Bittner. And joining me is my co-host, Ben Yelin from the University of Maryland Center for Health and Homeland Security. Hello, Ben.
Ben Yelin: Hello, Dave.
Dave Bittner: On this week's show, we unpack the aftermath of the riot and attack on the U.S. Capitol and the security and privacy issues that have arisen as a result. And later in the show, my conversation with Dr. Dennis Hirsch from the Ohio State University's Risk Institute on his research on data ethics.
Dave Bittner: While this show covers legal topics and Ben is a lawyer, the views expressed do not constitute legal advice. For official legal advice on any of the topics we cover, please contact your attorney.
Dave Bittner: All right, Ben, let's dig in here. Really, (laughter) there's one thing that's top of mind for both of us today, and that is what happened recently at the U.S. Capitol.
Ben Yelin: Yeah. So I don't know if you heard, but there was an attempted insurrection...
Dave Bittner: Yeah.
Ben Yelin: ...About a week ago, as we're recording this. A crowd of probably thousands of people breached Capitol security. We obviously know what the real-world impacts of this event were. Obviously, the most tragic is that five people have died as a result of the attack...
Dave Bittner: Right.
Ben Yelin: ...As a result of this terrorism. About, you know, almost 100 people have been injured, many of them Capitol police officers. This was an attack on our democracy. There is another angle here that - you know, it almost feels weird talking about this because it kind of pales in comparison to the loss of life and the injuries, but that does relate to what we talk about here. And that is the significant cyberthreats that are emerging as a result of this attack.
Dave Bittner: Mmm hmm.
Ben Yelin: So there's this really good article that came out, I think the day after the attack, from Business Insider. And it talked about how the siege on the Capitol by this mob of Donald Trump supporters poses a major threat to our nation's cybersecurity. So not only did they, in the words of this article, breach barriers and smash windows, but they also appear to have gained control of the laptops of members of Congress, some of the hardware, et cetera.
Dave Bittner: Mmm hmm.
Ben Yelin: And that leaves a lot of open questions about congressional cybersecurity. What this article says is that various experts believe drastic measures need to be taken. There is confidential information that surely exists on some of these stolen devices. We know that some devices were stolen because, A, some of the thefts were captured on video and, B, many members of Congress reported that their laptops or other hardware was missing. So for those devices, those are going to have to be completely wiped.
Ben Yelin: In terms of more extreme measures, you know, many experts have talked about - beyond securing existing hardware, you know, potentially wiping all computers in the Capitol since there's been such a serious breach and rebuilding its entire IT infrastructure from scratch. That would not be unprecedented. The IT department at the U.S. Capitol has experience with this because, you know, you have a whole series of new members that come in every two years. Oftentimes, laptops or other hardware is bequeathed from previous members. And so there is some wiping involved there. But, you know, I don't think it's ever been done to this scale.
Ben Yelin: And there are also things that we just don't know. Is it possible that some of these people who breached the Capitol grounds - and we're talking about thousands and thousands of people - are potentially sophisticated cybercriminals? That's an unanswerable question at this point. And the implications are kind of terrifying because law enforcement, including the FBI, has said that it's possible that on these devices was national security-sensitive information. So again, this is a secondary aspect of the story, but I think it's also going to be critically important going forward.
Dave Bittner: Yeah. It's interesting to me because you think about - reading a story like this makes me think about what the folks who are in charge of IT at a place like the U.S. Capitol must go through just by virtue of the types of people they're dealing with - their clients. You know, we've been seeing stories today as we record where, you know, they put up metal detectors...
Ben Yelin: (Laughter) Yes.
Dave Bittner: ...For folks to go through. And some of the members are refusing to go through them. There's - you know, they're going around them. They're saying...
Ben Yelin: Do you know who I am? Yeah.
Dave Bittner: (Laughter) Well, they're setting them off and then refusing to be wanded, you know, to - so I think, unsurprisingly, that speaks to a certain attitude, as you say, of do you know who I am. And I think anybody who's dealt with - I don't know - any sort of self-important executive who doesn't have the time to deal with security things, (laughter) your heart goes out to the folks who have to wrangle this particular herd of cats, trying to keep them safe and secure.
Dave Bittner: And so I suspect - and many of these stories have pointed to the fact - that they would have the types of things in place that you would expect where, for example, they could remotely wipe a laptop that has been either stolen or gone missing. You know, these are basic things that you'd expect today at this sort of level.
Ben Yelin: Right.
Dave Bittner: And it seems as though reporting says that they have those kinds of things in place. But on the flip side, you know, cybersecurity folks will tell you that if someone has possession of your device, quite often, that could be the ball game.
Ben Yelin: Yes, that's very, very bad news.
Dave Bittner: Yeah.
Ben Yelin: Yeah. So, you know, in this article, they mention that some of the cybersecurity experts they talked to suggested a couple of potential solutions. So for the devices themselves, geolocate them. That could give you some information as to whether, you know, you want to try to retrieve it. If it's still in the Capitol - somebody dropped it in a basement - maybe that changes things. If it's at somebody's house in Florida, then...
Dave Bittner: Go knock on their door (laughter).
Ben Yelin: Yeah. Go knock on their door and arrest them, but also completely wipe that clean.
Dave Bittner: Yeah.
Ben Yelin: And, you know, one of the other experts, this guy named Bob Maley, chief security officer of NormShield, said that congressional IT staff will likely have to wipe all federal devices to ensure that they haven't been infected with spyware or have otherwise been compromised.
Dave Bittner: Mmm hmm.
Ben Yelin: And like you said, there is going to be resistance among members of Congress to taking these types of drastic actions, especially as the people's business goes on. Obviously, this might cause an interruption in House and Senate operations, and there's going to be a lot of resistance to that. But the other side of the coin is, you know, if there is spyware now on our federal networks, that's a huge effing (ph) problem, so to speak (laughter).
Dave Bittner: (Laughter).
Ben Yelin: And, you know, I think it's something that kind of strikes me with a little bit of fear.
Dave Bittner: Well, and it's - I mean, it's not going to be an insignificant effort and/or expense to go through this process. As you say, it could certainly slow down or get in the way of our representatives doing the work that we send them to Washington to do. But also, these are our tax dollars spent on cleaning up a mess that shouldn't have had to be cleaned up.
Ben Yelin: Absolutely. And we can only hope that taxpayers will be reimbursed once they take the assets of the criminals who have stormed the Capitol.
Dave Bittner: (Laughter).
Ben Yelin: So I can dream.
Dave Bittner: (Laughter) You can dream, Ben. Yes, exactly.
Ben Yelin: But that's a long-term solution.
Dave Bittner: (Laughter) OK.
Ben Yelin: Yeah. So I just thought this was striking. Again, it's one angle of a story that's still unfolding and is going to be unfolding for a while. But it's certainly a very concerning angle.
Dave Bittner: Yeah, absolutely. Absolutely. A physical security breach that extends into the cyber realm. Yeah, chilling for sure.
Ben Yelin: Absolutely.
Dave Bittner: You know, not 100% unrelated to that and certainly related to the breach of the Capitol, we've been following this story about Parler, which is the online social media platform...
Ben Yelin: Mmm hmm.
Dave Bittner: ...Where a lot of these folks frequented. Parler had advertised itself - has promoted itself as being a place for free speech. So folks who perhaps had been frustrated by being banned on Facebook or Twitter or other places, Parler welcomed them with open arms and said, this is a place where you can talk about all the things you want to talk about and chances are you will not be censored or have your things deleted in any way. Well, we found out that when they said things aren't deleted, they meant it (laughter).
Ben Yelin: Yeah. This story was really something.
Dave Bittner: So this is a story by Andy Greenberg, of course, a wonderful writer over at WIRED. And the title of the article is "An Absurdly Basic Bug Let Anyone Grab All of Parler's Data." So as these things were working their way through the system, we saw Parler being deplatformed in that Google took them off the Play Store, Apple took them off the App Store. And then ultimately, Amazon took them off of the AWS servers...
Ben Yelin: Mmm hmm.
Dave Bittner: ...Which is where - that's Parler's infrastructure, that's where Parler was running. When this was announced, it began a bit of a race against time for researchers - and I want to put a pin in that word because I want to come back to it - to grab the public content of Parler and archive it so that law enforcement and folks who wanted to do research on Parler would have access to the things that were posted publicly, and so that all this data would not be lost.
Dave Bittner: And the folks who went at this were successful. They say they grabbed about 99% of all the things that were publicly accessible on Parler.
Ben Yelin: Whoops. Yeah.
Dave Bittner: Well, another interesting tidbit about this is that evidently, Parler did not strip any of the location data from photos or videos that people uploaded. As our listeners are probably aware, when you take a picture on your smartphone, for example, that photo is tagged with metadata, which is information about the photo. And usually, that includes GPS coordinates.
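For readers of the transcript who want to see how little it takes to read those coordinates back out of an uploaded photo, here is a minimal sketch using the Python Pillow library, assuming the platform has left the EXIF data intact. The file name is hypothetical; this is an illustration of how EXIF GPS metadata is stored, not anything specific to Parler's systems.

from PIL import Image  # Pillow

def gps_from_photo(path):
    exif = Image.open(path).getexif()
    gps = exif.get_ifd(0x8825)  # 0x8825 is the GPSInfo sub-directory; empty if no GPS data
    if not gps:
        return None

    def to_degrees(dms, ref):
        # EXIF stores coordinates as (degrees, minutes, seconds) plus an N/S or E/W reference
        deg = float(dms[0]) + float(dms[1]) / 60 + float(dms[2]) / 3600
        return -deg if ref in ("S", "W") else deg

    lat = to_degrees(gps[2], gps[1])  # tag 2: latitude values; tag 1: 'N' or 'S'
    lon = to_degrees(gps[4], gps[3])  # tag 4: longitude values; tag 3: 'E' or 'W'
    return lat, lon

print(gps_from_photo("uploaded_photo.jpg"))  # hypothetical file name

If the platform strips EXIF on upload, get_ifd returns an empty dictionary and the function returns None; if it doesn't, anyone who downloads the photo can recover where it was taken.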
Ben Yelin: Just if I could jump in on that real quickly - there were pictures going around yesterday of all of the devices within the Capitol where they geolocated Parler users.
Dave Bittner: Mmm hmm.
Ben Yelin: And it was - there were a lot of dots on that map, Dave.
Dave Bittner: (Laughter) Yes, indeed there were. So again, they were able to successfully grab stuff from Parler.
Dave Bittner: Now, when this initially happened, as is the case with these things, there were a lot of rumors circulating around. There were different stories about what was going on. And part of what people were saying was that perhaps these folks were grabbing private messages on Parler, that they had found a way to get admin access, and so they were archiving that stuff as well. The folks who are behind this are saying, no, that's not the case - that all they were doing was grabbing anything that was publicly accessible. They pointed out that Parler's security was laughably weak. They were using - (laughter) the methods they had in place wouldn't have been acceptable in a Computer Security 101 class...
Ben Yelin: Yep.
Dave Bittner: ...According to some of the folks they've interviewed in this article. So all that being said, I want to swing back around to you, Ben, and look at this from a legal point of view because I can't help wondering, does this effort run afoul of the Computer Fraud and Abuse Act?
Ben Yelin: That's a great question. So the Computer Fraud and Abuse Act prohibits unauthorized access to another network. My - and I haven't fully researched this, and I don't think, you know, we could possibly have a definitive answer at this point because we don't really know. My guess is that this would not violate the Computer Fraud and Abuse Act simply because they were scraping public information.
Dave Bittner: In other words, they didn't have to use someone's password to get in. They weren't accessing someone's private account to grab data. This was all stuff that anybody could grab publicly.
Ben Yelin: Could have grabbed - right - using relatively simple means. Yeah. So I don't think this would run afoul of the Computer Fraud and Abuse Act because I'm not sure that this qualifies as unauthorized access if all the information was public-facing. They didn't use anybody else's password. It wasn't a hack. They didn't steal any hardware. So that would be my initial interpretation of this.
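To make the "publicly accessible, no password" point concrete, here is a minimal, hypothetical sketch of fetching a public-facing page with Python's requests library. This is not the tooling the researchers actually used - the episode doesn't detail their method - it only illustrates that archiving public content involves a plain, unauthenticated request rather than a hack. The URL and file name are invented.

import requests

def archive_public_page(url, out_path):
    # A plain, unauthenticated HTTP GET - no password, no exploit, no one else's account.
    resp = requests.get(url, timeout=30)
    if resp.status_code == 200:
        with open(out_path, "wb") as f:
            f.write(resp.content)
        return True
    return False  # e.g., 404 for a missing page; truly restricted content would not return 200

# Hypothetical usage with an invented URL:
archive_public_page("https://example.com/public/post/12345", "post_12345.html")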
Ben Yelin: It's also something where Parler would probably have to be the complaining witness here. And I'm not sure that Parler would want to do that for public relations reasons and because it would reveal kind of their laughably weak cybersecurity protocols. So I significantly doubt we're going to deal with CFAA issues here. I don't want to rule it out 100%. But...
Dave Bittner: Right.
Ben Yelin: ...I have significant doubts.
Dave Bittner: What about the notion of Parler being deplatformed in this way? I mean, Parler is a private company. They were relying on other private companies for their ability to be out there in front of the folks who they were doing business with. There are lots of people out there saying that this is chilling from a First Amendment point of view. Of course - well, I'll let you explain why that may or may not be.
Ben Yelin: (Laughter).
Dave Bittner: But what is your overall take on that side of it?
Ben Yelin: Right. So the First Amendment only applies against government action. It doesn't apply against private companies. The first words of the First Amendment are "Congress shall make no law ... abridging the freedom of speech." So, you know, we can talk about freedom of speech, the constitutional right. When you're dealing with private companies, it does not implicate that right. Now we can talk...
Dave Bittner: Yeah.
Ben Yelin: ...About free speech as a value. And I think that's very different. And there are concerns about deplatforming. If we're giving, you know, a series of young tech executives the power to deplatform large groups of people, including the president of the United States, you know, I think without context, that would be concerning.
Ben Yelin: There actually is very important context here in that on Parler, they were planning an armed insurrection against the United States government. You know, if you look at some of these archived Parler posts, they're really terrifying.
Dave Bittner: Yeah.
Ben Yelin: And when Parler - when Apple went to Parler and said, you have to remove these posts or we're going to take you out of the App Store, it seems that Parler actually refused to do that. So, you know, when you're talking about incitements to violence, you know, that might be a value that supersedes the value of free speech.
Ben Yelin: So I guess my main point would be, you know, there's no legal recourse here under the First Amendment because Amazon Web Services, Apple, Google can put you on their - in their app store or not. That's their right to make that decision.
Dave Bittner: Right.
Ben Yelin: You know, Amazon Web Services is not compelled to host Parler. That's just, you know - that's something that they can decide for themselves. But in terms of the values of free speech, yes, that's implicated here. But I think you have to weigh that against what we're dealing with, which is pretty explicit calls to violence - not just calls to violence, but, you know, planning locations and timing, et cetera, and putting people's lives in danger.
Dave Bittner: And execution of those plans - I mean...
Ben Yelin: Right.
Dave Bittner: ...They did it. They did it.
Ben Yelin: They actually did it. Yeah, this is not hypothetical. So, you know, I think some of these companies are going to have to make some broader decisions about deplatforming. I think Twitter was hit with these charges of hypocrisy because they let, you know, the communist Chinese government tweet really offensive things about the Uighurs. You know, they've let the supreme ayatollah of Iran tweet things that were - could be read as calls to violence.
Ben Yelin: So I think they have to be consistent. But once you develop these policies, which are explicitly related to incitements to violence, and you have an incitement to violence and that violence does happen, I think you have to follow through on those policies if you're the companies here.
Dave Bittner: Yeah. It's fascinating to watch, I mean, this notion of just the - I don't know - community standards, I guess, of saying, we will not do this here; we will not allow this. This is - you are a guest in my house, and if you're going to speak that way or say those things, I bid you good day, sir.
Ben Yelin: Yeah. I mean, and that's really the proper way to view this. You know, it's not the government saying that these people - it's not a content-based restriction on speech. In other words, these people can find a different avenue for the speech as long as it's constitutionally protected.
Ben Yelin: Now, some of the speech, if we're talking about an incitement to imminent lawless action, is not protected under the First Amendment. You can't do that anywhere. Some of the - what has been deplatformed is protected First Amendment speech, but we're not preventing people from saying it.
Ben Yelin: If you want to build your own web services host - you know, web hosting service - and host Parler from there, that's your right to do so. The government is not stopping you. If you want to develop your own app store - so, you know, it can be the app store for free speech applications - then the government's not stopping you from doing that. Now, there are questions because many of these companies, potentially, are monopolies.
Dave Bittner: Right, right.
Ben Yelin: And so that's been part of the criticism. But I really don't think First Amendment issues are properly implicated here.
Dave Bittner: Yeah. All right. Well, again, it's a story by Andy Greenberg over on WIRED. It's a good one. Do check it out. We'll have links to all of our stories in the show notes here.
Dave Bittner: We would love to hear from you. If you have a question for us, you can call us and leave a message. It's 410-628-3720. Or you can send us an email to caveat@thecyberwire.com.
Dave Bittner: Ben, I recently had the pleasure of speaking with Dr. Dennis Hirsch from the Ohio State University's Risk Institute. And we were discussing some of his research on data ethics. Here's my conversation with Dennis Hirsch.
Dennis Hirsch: I'm a privacy law professor, as I mentioned, at the Moritz College of Law and at Capital Law School. I'm also a research fellow at the Risk Institute, which is at the Fisher College of Business at Ohio State. But I've been teaching and writing about privacy law for quite a while now.
Dennis Hirsch: And I started to hear about this idea of data ethics. Companies were talking about doing data ethics. There was starting to be some writing about this. And I wondered, you know, what is this? You know, what is data ethics? What are we talking about here? What's emerging? And I actually put together an interdisciplinary research team at Ohio State to dig into this because, as it turned out, we needed an interdisciplinary group to understand it.
Dennis Hirsch: But here's what we figured out after talking to a variety of companies and spending about two years both through interviews and surveys researching business data ethics. It really starts with a fundamental economic change that we - our society has experienced in the last decade or so, which is the increasing prevalence in business and government - but we're focused here on business - the increasing prevalence of advanced analytics and AI.
Dennis Hirsch: And so more and more companies are using this technology or these technologies in their business operations and their human resources. We're using them at Ohio State in school admissions and advising students. They're being used in HR departments. They're being used, you know, in many different places in our economy and government. That's a fundamental change. And these technologies are being used because they produce significant benefits, social and economic benefits.
Dennis Hirsch: But they also pose important new risks - risks to privacy, risks of manipulation. As we saw with Cambridge Analytica, for example, they were - you know, the controversial thing they were doing was using advanced analytics to motivate, to manipulate voters. Risks of bias and discrimination against protected classes, risks of greater opacity and blackbox decision-making, risks of increasing inequality - these are all really significant issues that are emerging around the use of advanced analytics and AI. And as we talked to these companies, we learned that in their view, privacy law doesn't successfully or adequately address these risks.
Dennis Hirsch: I mean, usually, what companies would do to kind of address risks from their use of data would be to comply with privacy law. But they found that privacy law was not sufficient to protect people and so to protect the companies' own reputations with respect to these technologies.
Dennis Hirsch: And the reasons kind of boil down to two core things. One, the essence of privacy law is to give people notice of the fact that their personal information is being collected and some degree of choice as to whether to allow that collection and allow the use, right? So notice and choice is at the heart of privacy law. But generally, we cannot understand what analytics can learn from our data, what can be inferred from our data.
Dennis Hirsch: We - you know, the customers at Target, in a famous example, may have thought that they were just giving their purchase information - the female customers. But in fact, the company was using that to infer with a great degree of accuracy whether they were pregnant. You know, so people didn't know what they were giving up. And if you don't know what you're giving up, you can't understand what can be inferred from your personal information through the use of advanced analytics. You can't make meaningful choices about it. So privacy law is really not able to protect people from these risks.
Dennis Hirsch: And the other reason being that the risks go well beyond privacy to bias against protected classes, to manipulation, to blackbox decision-making, et cetera. So what the company said is in order to protect individuals, protect society and so protect our own reputations, essentially - because I think that's a core motivation there - we need to go beyond the law. We need to do more than what privacy law requires. We have to go beyond the realm of law to the realm of ethics. And that's how they use the term.
Dennis Hirsch: And so that's what data ethics is, at least as the term is being used in the business world and, you know, among kind of companies that use a lot of data - these are the types of companies we talked to. It is essentially beyond compliance risk mitigation for the algorithmic economy. It's corporate social responsibility, if you will, for the algorithmic economy.
Dennis Hirsch: And, you know, I come to privacy law. Originally, I was a professor of environmental law before I got really interested in this fascinating area of data and privacy. But we've seen beyond compliance behavior in other areas, for example, with respect to greenhouse gases. We're all familiar with companies, you know, trying to achieve zero carbon emissions, for example - right? - even though the law doesn't require it. You know, beyond compliance behavior is something that we have seen in other contexts.
Dennis Hirsch: I think we're seeing it here in the area of advanced analytics and AI because of the significant, very real risks that these technologies create, along with their many benefits, and the fact that the law as yet has not caught up to those risks, is not adequate to address them and protect people. And so for reasons of what I'll call kind of, you know, enlightened self-interest, companies are going beyond what the law requires and entering what they call the realm of ethics.
Dennis Hirsch: And so, you know, some people hear the word ethics, and they think, oh, you know, these companies are trying to align their operations with some moral philosophy, you know, Aristotelian ethics or Kantian ethics or something like that. And there's a bunch of discussion about that in the literature. I don't think that's really what's happening here.
Dennis Hirsch: What's going on is we're seeing a new form of beyond compliance risk mitigation for companies that use advanced analytics and AI. And that's how I understand it, and that's what we heard from our interviews with about 23 leading companies and their lawyers and consultants, and from our survey - we had a sample of about 246 companies; they didn't all respond, but we did a broad survey of companies as well. That's what we're hearing them say they mean when they talk about data ethics.
Dave Bittner: What did you discover in terms of what is motivating these companies? What goes into the decision-making process in terms of - I think of, you know, there's sort of the notion that just because we can doesn't mean we should.
Dennis Hirsch: Right.
Dave Bittner: And, you know, you mentioned environmental laws. And that makes me think of - you know, there are some companies who will go right up to the line and say, we're going to, you know, pollute, or the waste from our factory will be exactly right up to what we're allowed to do. And others will say, no, no, you know, we're going to strive to do much better than that. But what sort of things are motivating these companies to make those decisions as to where they're going to land on that spectrum?
Dennis Hirsch: Yeah, that's a great question. And it's one that we inquired into in our research. I don't think that the companies we talked to who are kind of recognized leaders in this area - and that's who we sought out, the ones who are really being thoughtful and really paying attention to this - I don't think they're representative, necessarily, of all companies that are using advanced analytics and AI. And I'm sure there's a broad continuum, as you were just mentioning, you know, in the environmental area from those who are pushing it right up to the line and doing everything they can without getting enforced against and those who are trying to be more proactive and be seen as responsible actors in this area.
Dennis Hirsch: But you raise a great, great question. If the law does not require data ethics, then why are companies doing it, right? Why are they making, in some cases, significant investments in personnel and technology and other things? I think that it's not idealism, in my view. You know, there are some companies who have corporate values that they take very seriously and that motivate them. Either a founder or CEO has really made those part of the culture and the DNA. But that's not primarily what we saw. I would call it kind of enlightened self-interest in a way that - and I think it breaks down especially along two lines.
Dennis Hirsch: One is the threat of future regulation. And we're seeing regulation in this area in Europe, and we're starting to see proposals for it in the United States, as well. It's emerging here. So if you're a company that's using advanced analytics, perhaps you want to get in front of that and show that you can be a responsible actor, that, you know, others can be responsible actors and that regulation is not needed. So we would call that preempting regulation. That's a very common motivation for self-regulation that we see in other industries and other contexts, including the environmental context. I think that's one dimension of it here.
Dennis Hirsch: Another is shaping regulation - influencing regulation. So if you can show how this is done in a way that works for your business and hopefully convince regulators that it also protects the public, then maybe you can have some influence on how this regulation, you know, takes form. And from the business's perspective, you know, influence it to be both protective and workable with the business models.
Dennis Hirsch: And finally, you know, there is aligning your systems with the regulation that is coming so that when it does come, you can deal with it more cheaply and effectively than your competitors - so kind of a relative advantage. So for all of those reasons - and we heard each of those themes. But again, those are quite common when you think about self-regulation in other fields in other contexts.
Dennis Hirsch: What really stood out here, though, that you don't hear as much in some other contexts was the importance of reputation. And I think reputation is really critical in this area where the business model is premised on using people's personal data. If you are seen as being not trustworthy, you know, people will hold back, or other companies won't share data with you. If you lose your reputation as a good steward of that data, it can really undermine your business.
Dennis Hirsch: And so I think these companies are saying, look. If we are accused - there's a front-page newspaper story of us being, you know, biased, racially biased, for example, in our data analytics or some kind of privacy invasion like the Target story I told a minute ago of, you know, using analytics to infer whether people are pregnant or manipulation like Cambridge Analytica did to infer people's psychological types and then send them manipulative, targeted ads related to voting that speak to their unconscious and manipulate them - those can be very controversial and have a huge impact on the company's reputation for being a trustworthy steward of data.
Dennis Hirsch: So you put it really nicely a minute ago. You said these were the things - you know, companies have to decide, should we do this or not? Here, we're dealing with advanced analytics and AI projects and practices that are legal and are technically feasible. So companies can do them. The question for them is, should we do it? And some companies actually refer to this as the should-we questions. That's what they're dealing with in data ethics. And as they think about these should-we questions, the reputational question comes up very strongly.
Dennis Hirsch: So they worry about reputation with their users. They worry about reputation with their customers. They worry about reputation with regulators. We expected all that. The other area where they worry about reputation that was a little bit surprising to us was their business partners. We had a lawyer from one very large technology company who deals with data ethics say, look. Individuals can't really understand what we're doing with their data. But our business partners, they do due diligence on us. And if they're going to give us data, they need to know that we are going to be a good steward of it because otherwise, it's going to have an effect on them. So we have to be good stewards of data for our business partners.
Dennis Hirsch: The other area that came up that was a little interesting to us - and this was a Silicon Valley firm that we talked to - said, look. To be successful in Silicon Valley, you have to have great ideas, and you have to have capital. But most importantly or equally importantly, you have to have the talent. That is critical to success.
Dennis Hirsch: And, you know, this is data scientists, young people kind of coming out of grad school, coming out of - you know, out of the university. And they have progressive values. And if they think that we are not doing things that are socially acceptable, if they think we're abusing people, if they think that we're being evil, they're going to leave us. And they can leave us. There's so much demand for them, and we will lose the talent. So this is critical to our business for employee retention and reputation among employees.
Dennis Hirsch: So for all of those reasons, you know, reputation, I think, is really a key motivator here. And the companies tend to put it in terms of trust. We want to be trustworthy, and we see being trustworthy as, you know, an essential strategic commodity for us in the algorithmic economy. That's a motivator here even more so than you see in some other areas.
Dave Bittner: You know, you bring up the notion that a lot of what happens behind the scenes with these algorithms is kind of in a black box. And the users - there's no transparency to the users. And it's quite possible that most people wouldn't really understand what was going on...
Dennis Hirsch: That's right.
Dave Bittner: ...If they - if it was explained to them or if it were more transparent. And something we've discussed here on this show is this notion of kind of using, you know, public health as an analogy. And is there a role - do we need an equivalent of, like, the FDA for these algorithms where, before you turn them loose on the public - much in the same way that a pharmaceutical has to be tested - you must first prove to us that there will be no harm done before we as a people allow you to turn it loose on our society? I wonder if you have any thoughts on thinking along those lines.
Dennis Hirsch: We absolutely need that. And frankly, we need a new form, a new paradigm of regulation with respect to data for exactly the reason that you describe. So our paradigm to date - the privacy law paradigm has been to empower individuals, essentially give them notice of the collection and use of their data and let them make choices, right? But if this is so opaque - and sometimes, you know, with machine learning, it's highly opaque to the individuals whose data is being used. It's opaque to the companies themselves. They sometimes don't know how the machine got to where it did, right? So that's kind of double opacity in a way.
Dave Bittner: Right.
Dennis Hirsch: Then kind of just empowering individuals, which is our primary regulatory paradigm in privacy law - it's not all that privacy law is. There's other things as well. But it's at the heart of it - really is not sufficient to protect people.
Dennis Hirsch: And so to protect people - and I do think, you know, there's a role for society, for government to - you know, these are significant risks. And I think there's a role for protection here. We need regulation that draws substantive lines, that draws lines between what's appropriate and what's inappropriate, what's fair and what's unfair, what's acceptable and what's hurtful. And that's a very different approach to regulation of data and data technologies than the privacy law paradigm, which is just give notice, allow some degree of choice and then basically whatever people accept is legitimate. This is saying no, like, we have to think about what our values are, and we have to make some value judgments as a society as to what we think is acceptable and good for people and what we think is not and draw some substantive lines.
Dennis Hirsch: And frankly, that's what companies are starting to do themselves. That's the should-we question, right? Like, is this consistent with our values and what we think the values of society are? And so I've actually written about this new paradigm of regulation. And there are other scholars who are starting to map out different ideas for this as well. It's a fascinating emerging area of law and policy scholarship - trying to think, what should this next step be to protect people in the algorithmic economy where they can't understand how their data is being used? What's the next step? It doesn't replace privacy, but what else is needed?
Dennis Hirsch: But I think we're seeing these companies, you know, in a slightly fumbling way - feeling their way in the dark a bit, you know - trying to map out some of these frameworks for themselves to be seen as trustworthy and to uphold their own values, and for all the reasons that we discussed a couple of minutes ago. But yeah, I think we do need regulation here, and we need a new type of regulation. And we're starting to see it emerge. In fact, some of the privacy bills that are in Congress right now also have elements of this. It's kind of mixed in with the more traditional privacy law approach, but it's in there, and it's emerging.
Dave Bittner: All right, Ben. What do you think?
Ben Yelin: Very interesting conversation. I liked it 'cause it's really a conversation about values and the ethics of private companies. And it's kind of heartening to see that private organizations - maybe for selfish reasons, you know, for their own reputation...
Dave Bittner: (Laughter).
Ben Yelin: ...But maybe not. I mean, maybe just wanting to get out in front of some critical issues, they're setting their own standards for AI and, you know, these analytics. And they're getting out in front of federal regulators and state regulators. And I just think it's promising that we're seeing that from the private sector. So compared to the otherwise very dark topics that we discussed in our...
Dave Bittner: (Laughter).
Ben Yelin: ...Podcast, I thought this was more encouraging than not. It's good to hear that we have people from The Ohio State University who are so dedicated to this new field of the ethics around advanced analytics and AI.
Dave Bittner: Yeah, it's interesting to me that it could be seen as a competitive advantage, you know, that people are going to find value in doing business with companies who are doing the right thing for its own sake.
Ben Yelin: Absolutely. And that's what we're always looking for. I mean, in the absence of federal regulation, which we've talked a million times about, not only does the court system move very slowly; Congress moves very slowly. They're always very reactive on these issues. I think our best hope is the private sector. So if we see that it does give organizations a competitive advantage to factor in some of these ethical concerns, I think it benefits all of us 'cause there's going to be sort of - instead of a race to the bottom, there's going to be a race to the top where companies are encouraged, for the purposes of their bottom line, you know, to really take these issues into consideration.
Dave Bittner: (Laughter) It's an amazing thing to ponder - the notion of corporations trying to out-ethics each other - right?
Ben Yelin: I know. But you know what? It's kind of encouraging. And many of the companies, as he said, really do have values based on, you know, either the values of their CEO or just corporate values. Many of them don't, and they're doing it for selfish reasons...
Dave Bittner: Right.
Ben Yelin: ...Which is good in a way. But, you know, I just think this is going to be a growing field in the cybersecurity realm because I think there are a lot of ethical issues we need to tackle. I think it needs to be multidisciplinary. It's good to have lawyers looking at this, but it's also - we need technologists in the room as well.
Dave Bittner: Yeah.
Ben Yelin: And I think it's - this is just going to be something that we continue to talk about as these technologies develop.
Dave Bittner: Yeah, it's a trend worth supporting (laughter).
Ben Yelin: Yes, absolutely.
Dave Bittner: Well, we thank Dr. Dennis Hirsch from The Ohio State University for joining us.
Dave Bittner: We want to thank all of you for listening.
Dave Bittner: That is our show. The "Caveat" podcast is proudly produced in Maryland at the startup studios of DataTribe, where they're co-building the next generation of cybersecurity teams and technologies. Our coordinating producers are Kelsea Bond and Jennifer Eiben. Our executive editor is Peter Kilpe. I'm Dave Bittner.
Ben Yelin: And I'm Ben Yelin.
Dave Bittner: Thanks for listening.