Ken Dort: I think the really hard core, serious groups are pushing either the CCPA or something slightly even more draconian than the CCPA.
Dave Bittner: Hello, everyone, and welcome to "Caveat," the CyberWire's law and policy podcast. I'm Dave Bittner, and joining me is my co-host Ben Yelin from the University of Maryland Center for Health and Homeland Security. Hello, Ben.
Ben Yelin: Hi, Dave.
Dave Bittner: On this week's show, I've got the story of a judge ruling that powering up a mobile device requires a warrant. Ben takes a look at Twitter adjusting their policies in response to the president. And later in the show, my conversation with Ken Dort - he's a data privacy lawyer at Faegre Drinker. He's got insights on the online data privacy bills that are under consideration in New Jersey. While this show covers legal topics and Ben is a lawyer, the views expressed do not constitute legal advice. For official legal advice on any of the topics we cover, please contact your attorney.
Dave Bittner: All right, Ben, we've got some good stories to share this week. Why don't you kick things off for us?
Ben Yelin: Sure. So I have a pair of stories that relate to the same topic, and I'll start with a New York Times story entitled "Twitter Refutes Inaccuracies in Trump's Tweets for the First Time." So Twitter has taken a lot of heat over the years for the fact that they allow some forms of misinformation if they come from various heads of state. Twitter does have policies in its terms of service, especially related to civic engagement. So any information that would discourage people from voting, that provides false information about an election process or any other civic engagement is generally prohibited, and Twitter can actually delete those tweets.
Dave Bittner: So for example, if I were to put out a tweet that said - hey, everybody, you know, because there's going to be thunderstorms this afternoon, voting has been postponed, no need to go out and vote today...
Ben Yelin: Exactly.
Dave Bittner: ...Twitter would say that's no good. That's a lie, and we're going to take that down.
Ben Yelin: Exactly - or the old trick that, you know, because we're expecting big crowds at the polling places, Republicans vote on Tuesday, Democrats vote on Wednesday. That sort of thing.
Dave Bittner: Ah (laughter), gotcha. Gotcha.
Ben Yelin: Yeah - oldest trick in the book. Yeah. So that's prohibited according to Twitter's terms of service. They've been very cagey as it relates to heads of state, including the president of the United States. So for those of you who've been following the news recently, the president has tweeted extensively about mail-in balloting. So because of the pandemic, more voters than usual are expected to vote by mail. You know, as it is, about a quarter of voters in 2016 voted by mail, including all voters in a number of states such as Washington state, Utah, Colorado. So it's a practice that's already pretty well established in this country. It's just going to be more established this year as people are fearful of going to the polls.
Ben Yelin: And the president has been tweeting about how this is going to lead to massive voter fraud. He specifically tweeted, I think the other day, about how California is going to be sending ballots to people who don't even live in the state. And this really does violate, in a technical sense, Twitter's terms of service. So instead of deplatforming the president or deleting the tweet, Twitter put a little notice under the tweet, and it's a little logo with an exclamation point saying click here to find out true and factual information on mail-in balloting. And it links to the Twitter Moments page, which has sort of a fact check on the president's claims. This is the first time they've done that. It's a pretty bold move for Twitter considering that they've been so reticent in the past to take action against the president. And of course, there was a big blowback. The president was not happy about it...
Dave Bittner: I was going to say, is it really bold, Ben? Is it really bold? Is bold the right word for this? I'm not sure.
Ben Yelin: I don't know if...
Dave Bittner: I mean, it's something. I will take it. But (laughter)...
Ben Yelin: It is something. It's perhaps the bare minimum. But the president wasn't happy. His supporters weren't happy. They were...
Dave Bittner: Sure.
Ben Yelin: ...Quick to say - well, why aren't you fact-checking, you know, this thing that Joe Biden said? - or where are the fact checks saying that the Russian investigation was a hoax? - et cetera, et cetera. And so that led to a presidential tweet the morning that we're recording this where the president says that Republicans feel that social media platforms totally silence conservative voices. We will strongly regulate or close them down before we can ever allow this to happen. So this is a threat backed by the full force of the federal government to close down Twitter.
Ben Yelin: You might actually ask, can the president close down Twitter? Of course, the answer is no. I will say - I should couch that a little bit. There is probably some regulatory mechanism he could use in extraordinary circumstances. But for what he's talking about here, a simple no will suffice. He can't shut down Twitter. And if he did, you know, he would probably be hurt more than anyone because his message would not be able to penetrate in the way it does. So there's that.
Ben Yelin: The thing I'll say from a legal perspective is we've seen some arguments that there's perhaps a First Amendment violation. This violates free speech. Even having an exclamation-point warning under a presidential tweet violates his free speech rights. And that's entirely backwards, to put it mildly. So Twitter can do whatever it wants. It's a private company. It creates its own terms of service. It can ban users. It can censor people. It is not a government entity. So in most circumstances, it's just not liable for First Amendment violations. What would be a First Amendment violation is a government entity, perhaps the president, telling Twitter you can't have these fact checks under tweets that you're labeling false. That would be a First Amendment violation because it would be the government violating Twitter's right to speak on its own platform, which violates both the letter and the spirit of the First Amendment. So that's where we are. The latest Twitter controversy is now crossing over into the legal realm.
Dave Bittner: Yeah. It's interesting to me that you have this bit of tension here because it seems as though, as you pointed out, Twitter and the president both sort of need each other in a way.
Ben Yelin: Yeah.
Dave Bittner: If Twitter's whole point is to - if what keeps them afloat is engagement, well, you know, I would say probably the top 10 people who trigger engagement on their platform, the president certainly has to be in that list. So it would be, you know, against their own interests to remove him from the platform - not to mention, of course, all the political heat they would take for doing so. But it's an interesting dilemma in just that they - everyone has their own interests here and there are these tensions between them.
Ben Yelin: Absolutely. And I think, you know, Trump's certainly aware of that. He constantly reminds Twitter, you know, you are a platform that largely exists because of me - because of my tweets that generate a lot of clicks and engagement. I don't know if that's entirely true, but there's certainly an element of truth in that. As you said...
Dave Bittner: Yeah.
Ben Yelin: ...He does lead to a lot of Twitter engagement. People see more advertising; you know, Twitter makes money. So he's right to a certain extent. And like, you know, I think you said there, he also is very reliant on Twitter. That's his main form of getting his political message out. That's - to me, that - those are - his purest political thoughts are not expressed in speeches or even press conferences. You know, I think the most unfiltered version of the president and the way he reaches his political supporters is through Twitter.
Ben Yelin: So they are really co-dependent on each other, and I think that's why Twitter has been so reticent, to this point, to regulate him. You know, another thing I'll say - and I think this is important context - is there is an action that Twitter did not take this week. So the president has been spreading a rumor that's frankly false that TV host Joe Scarborough was behind the death of one of his former interns back in the early 2000s.
Dave Bittner: Right.
Ben Yelin: This - you know, it's just - it's not the case. This was an accident. There was an autopsy that confirmed it was an accident, and Joe Scarborough was in Washington. I don't need to go through all that.
Dave Bittner: Yeah, it's a horrible rumor and conspiracy theory that doesn't deserve attention.
Ben Yelin: It doesn't. And the widower of this woman who passed away 19 years ago wrote an open letter, published in The New York Times, I believe, to Jack Dorsey, the CEO of Twitter, and said, you need to delete these tweets. They're abusive. They're harmful. They're hurtful to me and my family. And if any other citizen were to tweet something like this, you certainly would delete it, and there shouldn't be a separate standard for the president of the United States. And Jack Dorsey didn't respond directly. But Twitter, through a spokesperson, said no, we're not going to take that down.
Ben Yelin: So, you know, just when we're finally seeing sort of one check against the president's behavior on Twitter, this Scarborough fracas indicates, at least to me, that they're still very reticent to confront Trump on Twitter. And you know, I think we'll keep seeing that going forward. It's just an interesting juxtaposition.
Dave Bittner: How much of this is Twitter relying on this notion that they're a platform - you know, that they're not responsible for the things that people post? And I'm thinking specifically of some of the things that the president has posted about that medicine - the name's escaping me.
Ben Yelin: Hydroxychloroquine? Yeah.
Dave Bittner: Yes. Yes, the medicine. You know, if he's inaccurate about his claims that this medicine would be good for people with coronavirus, and he makes those claims on Twitter, I guess Twitter is counting on the fact that they hold no liability for someone getting sick or dying from following through on those claims.
Ben Yelin: Absolutely. I mean, we've talked about Section 230 of the Communications Decency Act. They would not be liable in that circumstance, and that's a big advantage for them. I mean, they can sit back and say, we are a platform to facilitate free speech. Now, we're going to put some limits on it because, you know, we're a company that wants to maintain a reputation. We don't want to be the proximate cause of, you know, something very harmful happening. But we can - will not be held liable. So I think that probably has a lot to do with it.
Ben Yelin: If for some reason Section 230 were overturned and they would potentially be liable for harm resulting from users' tweets, I think they would be much more aggressive in policing their service. But that's just not the way it is. And you know, to be fair to Twitter, they allow people to report tweets where somebody expresses a desire for violence or self-harm. So - and they will delete those - or things that are really abusive, like doxxing people, posting private information. But yeah, I mean, you're right. If they were to be held liable, I think they'd be much stricter about what can and cannot be posted on their platform.
Dave Bittner: Yeah. It really is fascinating how many aspects there are to this. I mean, it's not easy. And regardless of how you come to this, you know, what your political beliefs are or anything, there's just so many gears in motion with this. It's fascinating to watch it play out.
Ben Yelin: Absolutely. Yeah. And you know, I can completely understand why Twitter would be very reticent to not only, you know, make a head of state unhappy but stand in the way of a head of state spreading a political message. I mean, I think that really would be problematic. And you know, in this case, it might be used against somebody, you know, one doesn't like. But they might use it against somebody one does like in the future, so it might be setting a pretty dangerous precedent. So you're right.
Dave Bittner: Yeah.
Ben Yelin: It's a very complicated and difficult issue.
Dave Bittner: Well - and who would have thought that we'd find ourselves in the situation that a head of state would use Twitter as their primary communications mechanism? It's unintended consequences, right?
Ben Yelin: Exactly. You know, Jack Dorsey probably thought people were just going to be talking about the great meal they ate at a restaurant back - when he started this in 2006.
Dave Bittner: Right. And cat pictures, yeah.
Ben Yelin: Exactly.
Dave Bittner: Here we are.
Ben Yelin: We thought this was going to be all cat photos. It was just going to be a 140-character version of Myspace. But yeah, it is not that.
Dave Bittner: Nope, nope.
Ben Yelin: And it really hasn't been that for a while - I mean, going back to the Arab Spring in 2011, when Twitter was the preferred medium of communication. It's taking its own path.
Dave Bittner: All right. Well, it's fascinating. And of course, we'll keep an eye on it. My story this week, I think, is fascinating. This is written by Kate Cox over at Ars Technica. The title of the article is "Just Turning Your Phone on Qualifies as Searching It, a Court Rules" (ph). So this is - a district judge in the U.S. District Court in Seattle ruled that when the FBI turned on a suspect's phone to look at it, merely activating the phone's lock screen I guess qualified as a search and therefore required a warrant. Unpack this for us, Ben.
Ben Yelin: This is a fascinating case - comes from Washington state. This was a federal district court, but the incident happened in Washington state. A suspect was using a smartphone. He was arrested. Incident to his arrest, an officer doing a search incident to arrest hit the power button to bring up the phone's lock screen. The officer didn't attempt to unlock the phone. The court, in its decision, actually didn't have a problem with what the arresting officer did in this circumstance.
Ben Yelin: But during the investigation, the FBI turned on the phone to take a photograph - a screenshot of the phone's lock screen. All it says in this article is that the lock screen displayed the name Streezy on it. My guess is perhaps that identified him as a suspect, so that potentially could have been incriminating. But they took a photo of the lock screen. And what this court has said is that qualifies as a search for Fourth Amendment purposes and, therefore, would necessitate a warrant. Now, this is fascinating for a number of reasons.
Dave Bittner: (Laughter) Yeah.
Ben Yelin: So just to unpack the history very quickly - up until the 1960s, there was no Fourth Amendment search unless there was some sort of physical intrusion on a person's property or a person's stuff. That standard changed with a case called Katz v. United States, where the new standard was more about a violation of a person's reasonable expectation of privacy, whether there was a physical invasion or not.
Ben Yelin: That was sort of the new and enhanced understanding until 2012, a case called United States v. Jones. A majority in that case said, yes, reasonable expectation of privacy is still a valid standard for determining whether there has been a Fourth Amendment search, but a physical trespass still suffices to qualify as a Fourth Amendment search. So that still - even if a reasonable expectation of privacy is not violated, if there has been a physical trespass, that's still a Fourth Amendment search.
Ben Yelin: And that's the decision, United States v. Jones, that the judge seems to be relying on here. The judge is saying, you know, a person does not have a reasonable expectation of privacy in their lock screen. It's something that's pretty public. I mean, you know, if your phone falls out of your pocket, your lock screen's going to be available. That's something...
Dave Bittner: Right.
Ben Yelin: ...That really anybody could see if you leave it on a table. But the search here was the act of taking that screenshot. That requires physically touching the device and physically pressing, you know, whatever it is - the two buttons that take the screenshot. So this is almost like a 19th-century understanding of the Fourth Amendment, where it comes down to whether there has been a physical trespass on somebody's stuff. Somebody's effects is the legal term of art.
Dave Bittner: Yeah.
Ben Yelin: So it's a fascinating case.
Dave Bittner: What do you suppose the repercussions could be here - because I have to say, I was - I'm surprised by this. I'm left with my eyebrows up thinking, what is going on here? It's interesting, but I can't really feel like I understand it completely.
Ben Yelin: So the implications are that even absent an expectation of privacy - even if it's something that a person is not really trying to conceal to public view, there can be Fourth Amendment protection if there is some type of physical intrusion. And the physical intrusion here is extremely minimal. You know, when we think about physical intrusions in the trespass context and the Fourth Amendment context, you know, you and I would probably picture police officers storming into somebody's house or a strip search or something like that. Here we're talking about pressing two buttons on the side of a device.
Ben Yelin: The reason this line of thinking is so bizarre to me is that there are far more intrusive types of searches that are done without these physical trespasses, so it seems strange to make a Fourth Amendment determination turn on the level of physical penetration into one's device. So we've had cases dealing with cell site location information and getting the contents of one's device incident to arrest, like in Riley v. California. And those are all electronic means of doing surveillance. Here we're talking about what seems like a very minimal act of pressing a couple of buttons. And suddenly, that's the thing that triggers Fourth Amendment protection. It just seems pretty incongruent to me. And that's sort of the problem I've always had with this Jones case.
Ben Yelin: And in that Jones case, Justice Alito, in his concurrence, expressed a similar concern. In that case, it was determined that because an officer physically placed a GPS device on a suspect's car, that counted as a search, regardless of whether that suspect had a reasonable expectation of privacy in his own movements. And what Alito said is it's not the physical act of placing that GPS device on a person's car that brings up privacy concerns. It's the following the guy around using GPS tracking that presents the privacy concerns. And that's sort of the way...
Dave Bittner: Oh, interesting.
Ben Yelin: That's the way I think about these issues. The invasion of privacy, to me, has little to do in these digital cases with the level of physical force applied. So that's sort of my perspective here.
Dave Bittner: So what happens next? This is a district court judge from Seattle. How does this ruling flow through - you know, what is its influence from this point on?
Ben Yelin: So as for this defendant, this conviction is vacated, at least as it relates to the FBI search into the phone. There's going to be a future proceeding to determine what happens with the original police search of the device incident to arrest. That hasn't been adjudicated yet. So we don't yet know where this is going to go in the future. Because it's a district court case, it doesn't exactly set any precedent nationwide, although it could be a persuasive argument for other courts adjudicating similar issues.
Dave Bittner: So it'll - this'll get people's attention.
Ben Yelin: It will. I mean, it's a novel case. It's an interesting case. And you know, I have never seen this type of scenario come up in the past in terms of - relating specifically to the lock screen. And I guess it's sort of rare to me that any slam-dunk evidence would be uncovered simply on a person's lock screen, but here we are. And now we might see more cases like this in the future. I guess the lesson is be careful what's on your lock screen. Mine is a picture of my kids. If that's going to incriminate me in any crime, you know, I'm going to be in a lot of trouble. So just keep the stock photos of the beautiful waterfall or mountain and you'll be safe.
Dave Bittner: Yeah, yeah. Right, right. Don't leave your manifesto for your plans to how you're going to kill your spouse as your background thing for your lock screen, right?
Ben Yelin: Exactly, exactly.
Dave Bittner: It could come back to haunt you (laughter).
Ben Yelin: Or your - you know, your known aliases, you know? Try and...
Dave Bittner: Right (laughter).
Ben Yelin: Keep that hidden behind your lock screen.
Dave Bittner: (Laughter) Right, right, right. All right. Well, those are our stories for this week. It is time to move on to our Listener on the Line.
(SOUND OF PHONE DIALING)
Dave Bittner: Our Listener on the Line this week, his name is Paul (ph), and he writes in with an interesting question. He says, one question I wondered since the Equifax hack - why does a financial institution send any data to any credit bureau? Is this a law, a regulation, a policy or is it written in a contract? When I ask about it, I've received many different answers but nothing definitive. There are many so-called private banks, but how private are they if they send your transaction data to every credit bureau? What if the financial institution offered a service fee not to send data to any credit bureau for those that want privacy? I know some federal agents, law enforcement, public figures and possibly a law professor that would pay to use such a bank. Plus, that would lock the person to that financial institution for loans and other services, so it's a win for the customer and a win for the financial institution. Maybe this already exists. If it does, I have not heard of it.
Dave Bittner: This is an interesting question, and it's something I've wondered about, too. It seems to me - and correct me if I'm wrong here - that this whole sort of reporting to the credit bureaus is kind of a relic that sort of - we've always done it this way, and so we still do. Is that right?
Ben Yelin: Yeah. I think there's a lot of that to this question. And it's an excellent question. So there is something called the Fair Credit Reporting Act, which generally governs financial institutions reporting credit information to the credit bureaus. I will say there is no actual requirement written into that law or any other law saying that financial institutions have to submit information to any credit bureau, let alone the three major credit reporting bureaus.
Ben Yelin: There is a reason that most financial institutions do. It is to their own advantage, even though it's relatively burdensome on them costwise. It's costly to submit that information to the credit reporting bureaus. But these are the same financial institutions that want to make smart loans to their consumers. And in order to have the requisite information to make those loans that are going to help their bottom line, they want information from these credit reporting bureaus. So you know, you can see why that would be a symbiotic relationship.
Ben Yelin: To me, there's nothing that would stop a private bank or an institution from not reporting to credit bureaus. However, this might not be that great of a solution in terms of somebody who wanted to keep their financial arrangements private because there are other laws and regulations that would allow the government or other government entities like the IRS to obtain your financial information - for example, any lawful subpoena.
Ben Yelin: And you know, just be - if your financial institution does not report to credit bureaus, that means they don't report your good credit history or your bad credit history. And that means, you know, if you wanted to buy a house or buy a car at some point in the future and were looking to take out a loan, you know, if you were only banking with private financial institutions that did not report credit information, you'd have no credit record. And it's unlikely that another institution would want to lend to you. But to the question itself, there really are no hard and fast requirements in the Fair Credit Reporting Act or any other federal statute. It's something that financial institutions do on a voluntary basis.
Dave Bittner: Well, I wonder, too - I mean, can you envision us running into a situation where, for example, something like California's privacy law - or one of the many privacy laws that the various states are writing - could come into conflict with this sort of reporting to credit bureaus? But would it be within a consumer's right to go to a bank or go to a credit bureau and say, hey, knock it off. I never gave you my express written permission to share this data?
Ben Yelin: So it would not be, and there's a very specific reason for that. The Fair Credit Reporting Act contains all sorts of requirements as to what financial institutions are allowed to report, the consumer's right to be informed about negative reporting to a credit bureau. And since those are federal regulations, they would supersede state regulations or state laws like the CCPA. The federal government has occupied the field on consumer privacy protection related to credit reporting because they passed the Fair Credit Reporting Act.
Dave Bittner: I see.
Ben Yelin: So I think we'd have a preemption issue here. The federal law would preempt those state laws.
Dave Bittner: All right. Well, good information - and thanks to our listener Paul for sending in the question. It's a good one. We would love to hear from you. We have a call-in number. It is 410-618-3720. That's 410-618-3720. You can call and leave your question. You can also email it to us at firstname.lastname@example.org.
Dave Bittner: Ben, I recently had the pleasure of speaking with Ken Dort. He is a data privacy lawyer at Faegre Drinker. And we spoke about, among other things, some of the data privacy bills that are under consideration in New Jersey. Here's my conversation with Ken Dort.
Ken Dort: It started in California, not so much because California's legislature was leading the charge but because California has a rather idiosyncratic referendum protocol, which allows groups that are really well-organized to get things on the ballot for, you know, popular vote. And that's what happened here. You had some rather well-organized privacy groups that put together the initial version of what was to become the CCPA. And it was much more draconian than what the CCPA actually became. And what happened was in response to that effort, the California legislature moved to put together the legislation to kind of preempt what was going to be on the referendum ballot.
Ken Dort: And that explained - because they did it so fast, that explains why there were so many kind of gaps and inconsistencies in that first enactment and which required all of the various amendments that happened after that. And so that kind of explains why California, in addition to being kind of a leading edge in the privacy area, did it so quickly - because they were kind of, you know, jailed (ph) into action by these privacy groups. But you know, having that in place in conjunction with the GDPR already having come online in May of 2018, you know, all of the states pretty much then started, more or less, seeing the future and deciding to kind of get on the bandwagon, so to speak. And that explains why you see the various states, you know, in the various levels of the action, only a very small handful of which have actually passed legislation just because of the fact that the subject matter is very complicated.
Ken Dort: And California was able to do it because it was at knifepoint...
Dave Bittner: (Laughter).
Ken Dort: ...Because it was either get the CCPA passed or you were going to have this really, you know, hard-edged referendum passed. And so that kind of explains where we're going. Some of the larger states are in the process of at least looking at it - Illinois, New York, a whole variety. They mirror the CCPA in various components. New Jersey's, we'll talk about, I mean, is almost a clone of the CCPA, you know, with some material differences. Particularly, you know, national level companies - if you're large enough, you're clearly doing business in California.
Ken Dort: And so what you've got companies having to do is, one, they have to comply with the CCPA because if - you know, they're in California. But then they have to kind of decide, OK, do we have a bifurcated, you know, consumer system, or are we just going to assume that this is the way of the future and we just basically treat everyone as a Californian? And quite a few companies are going that route. And then what they're going to do is just, as other states come online, they'll just compare and contrast to make sure that they're in compliance with all the states that are relevant to their operations and then go from there.
Dave Bittner: Is there any sort of lobbying effort that's going on? Are there organizations shopping around, you know, prepackaged recommended sets of potential legislation to - as you mentioned, to kind of simplify things so that you have some consistency from state to state?
Ken Dort: Not that I'm aware of. I mean, actually I think the really hard core, serious groups are pushing either the CCPA or something slightly even more draconian than the CCPA, something more comparable to the GDPR. They'd like to see every state pass the same thing. The problem with democratic government is that you might submit a - the same thing to each of the 50 state governments. But more likely than that, the result is going to be something slightly different, you know, across the waterfront, which explains why, you know, they are - these groups are going maybe (ph) to the larger states, just because they have a larger bandwidth. But then they're also trying to push Washington to enact comprehensive federal-level legislation that would preempt states and create a one, GDPR-like, you know, consistent set of laws and regulations that, you know, national companies can look at and don't have to worry about keeping up to speed with 50 different jurisdictions.
Dave Bittner: When you use the term draconian, what are the elements that elicit that description from you?
Ken Dort: Well, I mean draconian - I don't mean that in an opinionated or pejorative sense of the word. I just mean that kind of in the spectrum of what, you know, privacy legislation there is. I mean...
Dave Bittner: I see.
Ken Dort: ...You can have something which is, you know, kind of voluntary, you know, in terms of participation. Or you can have something, you know, on the other end, which would be like the GDPR, which requires that you get consent from the persons from whom you're collecting their information. And you need opt-in. You need rationalizations or legal bases for everything that you're doing with that personal information. Effectively, you're giving the people whose information it is control over how it's going to be processed, used and shared.
Ken Dort: And you can't allow yourself, then, to get kind of ahead of the curve. I mean, if you don't have the appropriate consent in place under that kind of a scheme, your whole setup could fall flat. Under the GDPR, for example, if you don't have one of the legal bases for processing - you know, consent or some kind of legitimate business purpose - you can't go forward. Everything has to be based on permission or a legal basis, and if you don't have it, you can't go forward.
Dave Bittner: I see. Well, let's dig into some of the specifics of what New Jersey is proposing here. What are some of the elements that have caught your eye?
Ken Dort: This effort is a true clone of the CCPA, with some differences. Now, keep in mind that the legislation we're talking about - bill No. 3255, which was introduced only as recently as February 25, I believe - is essentially a first draft. This is what came off the printer of the sponsors, so it really hasn't had an opportunity for any true legislative vetting. As I said, it was introduced on February 25, but nothing more was done procedurally. It was going to be brought back for discussion in front of the New Jersey Assembly's science and technology committee, I think, on March 16. And unfortunately, other events transpired and overtook not only New Jersey but the rest of the country with the COVID-19 emergency.
Ken Dort: And so as it sits right now, the legislation is kind of in abeyance, in recess, until New Jersey's assembly comes back online. Then it depends on how long that takes and, once the assembly is back, what its priorities will be in those first few weeks and months. Having dealt with the crisis for several months, the legislature might not put this bill high on its list of things to look at immediately. So in terms of what the next step will be, timing-wise, it's hard to say.
Dave Bittner: As various states have been proposing and passing their own legislation, do these track with, for lack of a better term, the personalities of the individual states - the priorities, the cultures, as we look state to state?
Ken Dort: I think they do, a little bit. California being one of the most highly progressive states really explains why the CCPA is as people-leaning as it is. It's not quite as comprehensive as, let's say, the GDPR, but that explanation lies a lot with the different cultural makeups of the EU and the United States. And so with these other states, you see them enacting components of the CCPA but not necessarily all of it.
Ken Dort: And also, California and the CCPA - it was the first one out of the chute, so it was going to be, let's say, the experimental one. As we can see, it's been subject to a handful of amendments since its original enactment. And just as with California's breach notification statute, the other states are kind of taking it as a template and massaging it a little here and there to meet their own specific needs and requirements while keeping the general focus intact.
Ken Dort: And that's what you see here with the New Jersey legislation. As I said, it's almost a clone. You've got really the same set of definitions. It applies to businesses that do business in New Jersey, and it has the same three applicability thresholds that the CCPA has: you either have annual revenue over $25 million, or you collect and sell information on over 50,000 people, or you generate at least half of your annual revenue from the collection, selling or sharing of personal information.
Ken Dort: Then it just kind of flows from there. It's got the same comprehensive set of definitions as to what a consumer is. It's basically not just a consumer but any person who's a resident of New Jersey - just as under the CCPA it's any person who's a resident of the state of California. So it's much broader than just consumers. And as for the information in question - its version of personal information is personally identifiable information, and it covers the same broad span that the CCPA does, from basic information like name, address and government IDs to biometrics and basically any kind of information that is likely to lead to the identification of a person. So it's very, very broad.
Ken Dort: It provides people with the same level of rights as the CCPA does. When you're providing your information, you have the right to know what kind of information they're going to be collecting, the purposes for which that information is going to be used, and who they're going to be sharing it with.
Ken Dort: And then, contrary to California - where there's basically an implied opt-in; you don't have to literally opt in, because by giving your information you're implicitly conceding that point, although you do have the opt-out, the "do not sell" button, so to speak - what New Jersey is doing is a little bit the opposite. You have to opt in with respect to the collection of your data, and then you have to opt in separately to give permission for the sharing of your information.
Ken Dort: So with that in place, New Jersey residents - or anyone subject to the statute - also have a right, as under the CCPA, to request each year what types of information businesses have collected about them, what types of information they're sharing with third parties, and for what purposes they're sharing that information. And just as under the CCPA, if a business collects information within a certain frame of reference in its notice - specific terms like "we're going to do A, B and C with your data" or "we're only going to collect this data" - but decides afterwards to change that, it has to provide notice of those changes and then get consent from the people in question.
Ken Dort: So it is very much a parallel, with some differences. And as with the CCPA, consumers have the right to submit requests for the information, and they have a right to submit a request to delete. Just as in California, they have a high level of control over the data.
Dave Bittner: Where do you stand on the potential for there being federal legislation to kind of supersede these state-by-state laws?
Ken Dort: You know, before I answer that - I've hit you with some of the similarities. Let me tell you what some of the differences are, too.
Dave Bittner: OK.
Ken Dort: Just as the CCPA has various exceptions to its application - for example, if you're a business that's already subject to HIPAA or Gramm-Leach-Bliley - those same carve-outs apply. But one thing that is different: under the CCPA, employment information could be treated just like regular consumer information. That received a lot of pushback, and the result, toward the end of 2019 before the CCPA went active, was an amendment that carved employment and HR-related information out of treatment under the CCPA.
Ken Dort: However, the amendment was only passed on the condition that it was good for one year. In other words, unless they make it permanent by the end of this year, that exception will disappear on January 1, 2021. New Jersey has made it clear that it doesn't agree with that approach: it has the exception carved into the actual statute. So if the legislation were passed as is, that exception would be in there with no time frame. Another difference is that the legislation, as currently worded, would go into effect upon enactment. There is no grace period. So it's not as though, if it's passed in June, it waits until some future date - the way the legislation is written, it takes effect upon the governor's signature. That almost certainly will get revised through the legislative process to provide some kind of grace period.
Ken Dort: In particular, there's also a provision that gives the New Jersey attorney general the authority to develop and enact regulations, just like the California attorney general has - but the New Jersey attorney general would have only six months from enactment to do that. It'll be kind of interesting, because we know the attorney general in California was busy getting those regulations developed well before the effective date of the CCPA, and he's still working at it. Those are supposed to be done relatively soon and then go live July 1. So there's a timing issue in the New Jersey legislation that probably will have to get ironed out and that hasn't really been discussed yet.
Ken Dort: Similarly, the penalty provisions are different. Unlike the CCPA, which has its own penalties and fines, the New Jersey legislation basically says that a violation of it will be deemed a violation of New Jersey's Consumer Fraud Act. So the question that arises is exactly how violations will be handled - the risk paradigm is a little uncertain at this point. In addition, there isn't any express mandate for reasonable security protocols like there is in the CCPA, so that's something that may well be added when the legislature comes back.
Ken Dort: So there are many more similarities than there are differences, but there are some differences that, I suppose, may have piqued the interest of the sponsors and may not survive the entire legislative process. We'll have to see.
Dave Bittner: I would like to get your take quickly on this notion of whether or not we might see some superseding federal legislation.
Ken Dort: At this point, I find it rather unlikely, simply because the competing interests between the privacy folks and the business folks - and, frankly, the priorities facing Washington at the moment - probably aren't favorable to overall privacy legislation. I just don't see those interests coming together right now, because the voices on Capitol Hill are so disparate. This kind of legislation requires true bipartisanship that I don't think is there at the moment, so I would not be optimistic about any kind of comprehensive privacy legislation coming out of Congress. You can compare and contrast Congress's effort a few years ago on comprehensive federal breach notification legislation, which didn't get very far either. And that was something which - you know, it's breach notification...
Dave Bittner: Right. Not exactly controversial.
Ken Dort: Right. Nobody's for a breach, but the problem was: who do you notify? How much power do you give the Federal Trade Commission? Most legislation at this level shakes out in the details, and that's where it kept running into problems. And breach notification is much more condensed and straightforward than privacy legislation at the level of the CCPA or the GDPR. So that just makes me pessimistic on this front, at least at this point in time.
Dave Bittner: All right. Ben, what do you think?
Ben Yelin: So very useful information about what's going on in New Jersey. It's another instance where the COVID epidemic really interrupted everything else going on in the digital privacy world - New Jersey wasn't able to see this legislation through the way California was. But it seems like New Jersey's effort is just as ambitious as California's effort to pass data privacy legislation, if not more so.
Ben Yelin: Another interesting thing he noted - and this interests me as somebody who grew up in California - is the role that state ballot propositions played in the drafting of the California Consumer Privacy Act. Because the tech companies were fearful of what those pesky voters might get on the ballot in terms of propositions, I think that was the impetus for legislators to take a more responsible tack - do the background work, do the research and make sure that this was a responsible consumer privacy act that could endure legal challenges.
Ben Yelin: You know, it's really not that hard to get a proposition on the ballot in California. There are all sorts of crazy ones that show up on both state and local ballots. I'm sure our California listeners know that the California Voter Guide is like a 500-page phone book they have to read before they go to the polls. So that's another interesting angle that I hadn't really explored, and I'm glad we got to hear about it in this interview.
Dave Bittner: Well, our thanks to Ken Dort for taking the time to speak with us. That is our show. And of course, we want to thank all of you for listening.
Dave Bittner: Our thanks to the University of Maryland's Center for Health and Homeland Security for their participation. You can learn more at mdchhs.com. The "Caveat" podcast is proudly produced in Maryland at the startup studios of DataTribe, where they're co-building the next generation of cybersecurity teams and technologies. Our coordinating producers are Kelsea Bond and Jennifer Eiben. Our executive editor is Peter Kilpe. I'm Dave Bittner.
Ben Yelin: And I'm Ben Yelin.
Dave Bittner: Thanks for listening.