Caveat | Ep 173 | 5.25.23

It's a generations thing.

Transcript

Mathieu Gorge: What you want to do is you don't want to advertise yourself as the easiest target. And so the way to do that is to put as many obstacles as you can so that the attackers are going to go to the low-hanging fruit.

Dave Bittner: Hello, everyone, and welcome to Caveat, the CyberWire's privacy, surveillance, law, and policy podcast. I'm Dave Bittner, and joining me is my cohost Ben Yelin from the University of Maryland Center for Health and Homeland Security. Hello, Ben.

Ben Yelin: Hello, Dave.

Dave Bittner: Today Ben has the story of the FBI misusing Section 702 authority to surveil Americans. I've got a recap of Congressional testimony about AI security. And later in the show my conversation with Mathieu Gorge from Vigitrust. We're discussing generational attitudes toward privacy. While this show covers legal topics and Ben is a lawyer, the views expressed do not constitute legal advice. For official legal advice on any of the topics we cover, please contact your attorney. All right, Ben. Lots of news this week.

Ben Yelin: It was a big news week, wasn't it?

Dave Bittner: It really was. Why don't you kick things off for us? What have you got here?

Ben Yelin: So my story was first reported by The Washington Post. It is entitled, FBI Misused Surveillance Tool on January 6 Suspects, BLM Arrestees, and Others, and it is by Devlin Barrett in their technology section. So I'll give a little bit of background here. I know we've talked about Section 702 in the past. But, just for some context, Section 702 is part of the Foreign Intelligence Surveillance Act. The FISA Amendments Act, which contains Section 702, was passed in 2008. And this allows for the surveillance of non-US persons reasonably believed to be outside of the United States without a warrant. So we can collect online communications by sending selectors to tech companies. And, generally, we can collect that information. Most Americans don't really care about the surveillance of non-US persons --

Dave Bittner: Right.

Ben Yelin: -- living outside the United States. Unfortunately, Section 702 ends up capturing a lot of US persons' communications because of something called incidental collection. So if a US person is communicating with an overseas target, that conversation is eligible to be collected under Section 702. It goes into a vast database, and that database is searchable. And only in limited circumstances is a warrant required to search the Section 702 database. This has been controversial for a decade now, ever since the Edward Snowden revelations showed how Section 702 data collection worked. It was reauthorized in 2018, and it's next up for reauthorization at the end of this calendar year, 2023. And I have to say its chances of being reauthorized, at least in its current form, appear to have taken a big hit. So the impetus for the story is a declassified FISA court opinion which lists a bunch of deficiencies in the querying process. This opinion was handed down a little more than a year ago, in April 2022. But it was only unsealed last week as we're recording this, meaning it had been kept secret for a year. And this is a frequent pattern with FISA in general, and Section 702 in particular: we only find out about abuses and misuses of the data a year or two down the line because of these classification issues. FISA proceedings are notoriously secretive. They don't want to release the modes and methods of surveillance. So, frequently, the intelligence agencies will release redacted versions of these opinions if it's in the public interest. But we're looking at deficiencies in the program that existed one, two, three years ago. It's not necessarily a window into what's happening now. But that opinion wrote about a bunch of misuses of this database. The opinion alleges that the FBI misused the Section 702 database more than 278,000 times, including against crime victims; January 6 riot suspects; and people arrested at protests after the killing of George Floyd in 2020. Additionally, there was surveillance of 19,000 donors to a congressional candidate. So, obviously, that's improper for a program that's supposed to deal with foreign intelligence. Now, the FBI, under Director Christopher Wray, has recognized that there have been compliance problems with Section 702 in the past. They have guaranteed, or at least have represented, that these compliance problems have been alleviated since this opinion was drafted. And they have some evidence on their side. When they released data on Section 702 searches last year, searches of US persons in the Section 702 database appeared to drop by something like 90 percent. That's preliminary. It's not a final number. But it certainly was encouraging.

Dave Bittner: Interesting.

Ben Yelin: But we've seen a pattern. This has been noted by members of Congress and by close followers of Section 702 and the FBI generally: the FISA court will release an opinion outlining all of these deficiencies. They do this repeatedly. I mean, I can probably cite six or seven of these opinions over the last decade. The FBI claims that it has solved all the problems identified in the previous opinion. And then, two years later, you get another opinion saying, actually, those problems have not been resolved. We are still seeing these types of abuses. So it's one of those classic fool me once, shame on you; fool me twice, shame on me situations where we've been in this position before. And I don't think we can be trusting of FBI Director Wray when he says that all of these compliance issues with Section 702 have been resolved. Really the problem here is we are seeing, if this data is correct, some nearly 300,000 backdoor searches on Americans that have been done outside of Fourth Amendment requirements and outside of warrant procedures. For Section 702 searches, analysts are supposed to list in writing why they think whatever intelligence they have collected has relevance to a foreign intelligence investigation. And, apparently, that was done improperly in many of these circumstances. It was either poorly documented, or the agents made a mistake and had some sort of conjecture about a nexus with foreign intelligence investigations but didn't have the requisite level of proof to actually make the search. So that's definitely very disturbing. I think it bodes very poorly for the prospect of Section 702 being renewed, at least in its current form, this December. It was already in trouble. Both political parties oppose Section 702, largely for different reasons.

Dave Bittner: Right.

Ben Yelin: Republicans and conservatives distrust the FISA process. They believe that the FBI has been weaponized against conservatives. And members on the political left like Senator Ron Wyden believe that incidentally collecting the communications of US persons is a violation of civil liberties, and that it's going to have a chilling effect on First Amendment rights to free speech. All of this would be easy if we could just say, You know what? Scrap the whole program. We've had all these compliance issues; the FISA court's weighed in a million times. But it's not easy, because this is the crown jewel of our foreign intelligence apparatus.

Dave Bittner: Right.

Ben Yelin: Intelligence officials of every single political stripe, from the Trump administration and from the Biden administration, have testified that this authority has stopped acts of terror in this country. And there are supposedly new and creative ways, although these have been redacted, in which Section 702 is being deployed in cyber investigations and espionage investigations. So the Biden administration supports its renewal. The Biden administration opposes reforms that would require the government to seek a warrant every time it searches that Section 702 database. I just don't know how tenable that position is going to be given the record here, given that we have these abuses, and given the fact that this has happened repeatedly over the last decade.

Dave Bittner: I'm trying to understand these numbers. So this article from The Washington Post says this tool was used more than 278,000 times. And it says, in one case, it was used against 19,000 donors to a congressional candidate. So we don't have the details as to what constitutes a -- and I'm going to use air quotes here -- use of the database.

Ben Yelin: Right.

Dave Bittner: So if I have a list of 19,000 donors to a congressional candidate, and I feed that into this tool in an automated way, because that's the only practical way to do it, right. So I've got 19,000 names. And if I feed them into this tool, essentially doing a fishing expedition -- and please let me be clear, I'm totally speculating here, right. But let's say that --

Ben Yelin: You're not actually doing this.

Dave Bittner: No, no. But I guess I'm trying to understand and make sense of this for myself. If I feed those 19,000 names in, and let's just say for argument's sake nothing comes back, right, none of the 19,000 donors had any communications with foreign people of interest, does that count against our 278,000 uses of this tool?

Ben Yelin: So it gets a little complicated because there's this difference between raw Section 702 information and Section 702 information that has been processed. So raw Section 702 information means electronic communications such as emails or other messages that haven't been filtered to first see if they meet the criteria of foreign intelligence information or evidence of a crime.

Dave Bittner: Okay.

Ben Yelin: So when this declassified decision from the FISA court says that the tool itself has been deployed 278,000 times, it is unclear to me whether the FBI received 278,000 entries of raw data or whether the database, once that data had been processed and collected, was searched 278,000 times.

Dave Bittner: Right.

Ben Yelin: So I'm actually unclear as to what those numbers refer to. And I don't think it's clear in these articles. I tried to read through the FISA court decision. But, as you can imagine, much of it is blacked out. So there just aren't a lot of details I can glean from it, whether it's raw data or whether these are individual queries. But I'm sure some of our listeners might have more insight on that, especially those who are in the intelligence community.

Dave Bittner: Yeah. I guess part 2 of my question is to what degree does it matter? In other words, 278,000 unlawful accesses is as good as 10, right? I mean, it still shouldn't happen.

Ben Yelin: Right, right. I mean, it's still violating people's civil liberties. It's still unlawful collection because this is a foreign intelligence program.

Dave Bittner: Right.

Ben Yelin: And it's being used for purposes other than foreign intelligence or, what has also been authorized under the statute, investigations of previously predicated crimes. Clearly, that's not happening here. When we see these numbers, whether it's raw data or processed data -- hundreds of thousands of inquiries into individuals who were involved in First Amendment activity, or even criminal activity that didn't have a nexus to foreign intelligence and wasn't a specific criminal investigation -- I think that certainly presents a lot of concern, even if it were simply ten records. But the fact that we have such a high number gives us even more cause for concern.

Dave Bittner: So the FBI says that they have fixed this. How do we verify that? Is this up to Congress? Is this up to FISA itself? Who's watching the watchmen here?

Ben Yelin: So this program has to be authorized annually. So, presumably, there has been a 2023 FISA court opinion or there soon will be a 2023 FISA court opinion that either authorizes or demands mass changes in collection practices for Section 702. That opinion will probably be released to the public sometime in 2024 or 2025.

Dave Bittner: Right.

Ben Yelin: The relevant committees in Congress are supposed to be apprised of this information. They go into their SCIF, their secure area, and they receive briefings if they're on the Senate Select Committee on Intelligence or the House Intelligence Committee. But it's hard to get that information filtered down to the public for a couple of reasons. One, there are very few lawmakers who understand the ins and outs of Section 702 to the degree that they can speak intelligently on it, to be honest. And, more compellingly, there's just not much that they can release to the public, given that these FISA court opinions are classified. And any public information out there could jeopardize methods of collection that might be important for counterintelligence or counterterrorism purposes. So I think the only practical solution is to use the one leverage point that Congress really has, which is the sunset of this program. You're not presented with this opportunity very often. When a program is authorized for five years and not authorized indefinitely, it means you have an opportunity to demand changes. Whether you want to call this hostage lawmaking or just using your leverage point to try and amend this law to protect civil liberties, I think that's what we're going to see. So I suspect that we'll have a robust debate in the House and Senate, with cross-ideological coalitions, on the future of Section 702. I think we can end up with three distinct outcomes. One would be continuing Section 702 largely in its current form, with these FBI minimization procedures that have apparently alleviated some of this inadvertent collection. That would be Option 1. Given the high-profile nature of some of these articles and criticisms of Section 702, I find that to be pretty unlikely. The second option would be reauthorizing Section 702 with significant reforms. The reform that's been on the table, the one civil liberties advocates have been pressing for, for years, would be a warrant requirement for any search of a US person in that Section 702 database. The administration is against it. Whether the administration is willing to put up a fight and threaten to veto the bill if that provision is in there, I'm not entirely sure. I frankly don't know how strongly they feel about it. And I'm not sure we will know until that legislation comes to the House and Senate floor for consideration. The third option is that the FBI and the intelligence apparatus -- the security state, as some people call it -- have just become so toxic that Congress refuses to reauthorize Section 702. I think there's a decent chance of that happening. I think that is what keeps people in the intelligence community up at night. Given the political opposition to this, they have, I think, a reasonable fear that they're going to lose one of their most valuable counterintelligence tools. So Director Wray and his counterparts at the Department of Justice and other agencies, including the National Security Agency, are going to have to be very persuasive in front of members of Congress, saying, we swear this time we've actually solved the problems. And if quotations from members of Congress are any indication, that's going to be a very difficult task.
I could see the spittle coming out of Ron Wyden's mouth when he was quoted as part of this article, saying that this is a dangerous overreach by US intelligence officials, that there have been shocking abuses of FISA Section 702, and that the American people need to see these abuses before the law is renewed. So I think it's going to be an absolute dogfight. Once we get through this whole debt ceiling thing and figure out the budget side of things, the next legislative battle this year is going to be over this intelligence program. And I'm sure we'll be following it every step of the way.

Dave Bittner: Okay. I'm curious about your take on whether or not it is reasonable to require a warrant for American citizens. I guess it's a two-part question. Why wouldn't you require a warrant anytime a US citizen is involved in any sort of data collection? They're here. They're citizens. They're on US soil.

Ben Yelin: Right, right.

Dave Bittner: Why do they get their Fourth Amendment protections stripped away from them when they're communicating with someone overseas? Terrorism? Because terrorism?

Ben Yelin: Well, that's partly because terrorism. Yes. But don't shoot the messenger. I'm just the person --

Dave Bittner: Yeah.

Ben Yelin: -- analyzing the law here.

Dave Bittner: I'm sorry, Ben. I kind of got a little carried away there.

Ben Yelin: You did. There have been a series of court cases that have applied something called the incidental overhear doctrine to FISA.

Dave Bittner: Yeah.

Ben Yelin: Basically, in the predigital communications age, let's say the FBI got a legal warrant to search person X. And they caught person X saying something incriminating to person Y, and person Y in that conversation also said something incriminating. Courts have held that law enforcement would not need a separate warrant to use that evidence from person Y in a judicial proceeding.

Dave Bittner: Oh.

Ben Yelin: And I think that's the rationalization here: the collection, if it's done for foreign intelligence purposes, is lawful. So if I'm surveilling person X under Section 702, that person presumably -- and I put a lot of emphasis on presumably because, clearly, that's not happening all the time --

Dave Bittner: Right.

Ben Yelin: -- but that person is presumably a non-US person who's overseas, so we don't need a warrant to surveil them. And if a US person just happens to be talking to that person, that is incidental overhear. And, therefore, we don't have to obtain a separate warrant to use that communication in a legal proceeding. That's been the legal justification. But I think the more we see these types of alleged abuses, the less tenable it's going to become, because it's clear that many of the so-called properly predicated searches under Section 702 haven't been properly predicated, and eventually somebody's going to be prosecuted on evidence where Section 702 shouldn't have been used; and that evidence should be suppressed. But even before we get to that case, I just think, based on what we know now, relying on that incidental overhear doctrine isn't an adequate solution to the problem, because I don't think we can say with any level of certainty that the individual searches have been properly predicated. We now have evidence from this article and many others that, very frequently -- however many times it was, whether it was 10,000 or 200,000 --

Dave Bittner: Yeah.

Ben Yelin: -- it's frequent enough that these searches are not properly predicated on foreign intelligence; and, therefore, I don't think we should rely on that in future legal proceedings.

Dave Bittner: Right, right. Sadly, the FBI has not earned our trust on this one.

Ben Yelin: I would say they have not, although I am always open to being persuaded. You know, I think it's encouraging that Director Wray has taken some of these Section 702 compliance issues to heart. We do have preliminary data that perhaps they're doing a better job minimizing US persons' communications in the collection of Section 702 data. And, you know, I think trust but verify is the name of the game here. We can believe that the FBI is acting in good faith. But, as long as we keep receiving these partially redacted FISA opinions from one to two years ago telling us, hey, these problems still exist, I think we just have to keep a very skeptical eye on these surveillance programs.

Dave Bittner: Yeah. And these organizations -- I can imagine Director Wray saying to his staff, you know, Hey, knock it off or we're going to lose this tool altogether, and we don't want that to happen.

Ben Yelin: Yeah. I mean, I think they're probably banging their heads on the desk there, because what they wouldn't give to go back in time and eliminate those improper searches, because they really rely on this intelligence authority. And they really shouldn't have used it to investigate members of BLM protests or January 6 protesters. If there was some reasonable basis to assert that protesters were connected to a foreign adversary, then maybe those searches would have been properly predicated. It seems pretty clear that that was not the case here. At least from what the FISA court opinion says, there wasn't enough evidence in the record showing that there was any foreign intelligence connection --

Dave Bittner: Right.

Ben Yelin: -- with many of these searches. So, yeah. I mean, I think people who work in the intelligence community are probably upset that this has thrown the crown jewel of the intelligence apparatus into question.

Dave Bittner: Yeah.

Ben Yelin: They want to see it successful. They want to see it operationalized in a way that's both effective and legally sound. And, so far, they've yet to find a way to do that. So whether that can happen is still to be determined.

Dave Bittner: Yeah. All right. Well, more to come for sure. We'll keep an eye on that one.

Ben Yelin: Yes. I have a feeling we'll be talking about this, for better or worse, many times prior to the end of this calendar year.

Dave Bittner: Right, right. All right. Well, my story this week also comes from The Washington Post. This article is about the testimony that the CEO of OpenAI gave before Congress. Sam Altman is their CEO. And, of course, OpenAI is the company that makes ChatGPT, the large language model artificial intelligence that has captured the imagination of many folks. So he testified before lawmakers, answered a lot of their questions about AI and where he thinks things are going and where they stand. Ben, have you spent any time looking at this hearing, or was it something that was on your radar?

Ben Yelin: It was kind of on my radar in the background. I was following the story, but I didn't watch the hearing closely. The one thing I will take away, just at the outset, is it wasn't nearly as adversarial as most hearings we tend to watch in front of Congress featuring --

Dave Bittner: Right.

Ben Yelin: -- people who are from Silicon Valley or in the tech industry.

Dave Bittner: Yeah. I noticed that too. And I wonder, what is that about? Why were they almost deferential to him? Is it that they acknowledge that AI is something outside of their area of expertise? I mean, that's never stopped them before.

Ben Yelin: It's never stopped them in the past.

Dave Bittner: Right. So I don't know. It's puzzling.

Ben Yelin: Do you want the cynical view?

Dave Bittner: Yes, please.

Ben Yelin: The cynical view is that this isn't connected to anything that's explicitly partisan. So nobody's going out there trying to get their five minutes on Fox News and MSNBC by yelling at a disfavored witness in front of a committee.

Dave Bittner: I see.

Ben Yelin: So, you know, when it was the previous CEO of Twitter, when it was Jack Dorsey, conservatives were very angry at him for what they saw as improper censorship of conservatives.

Dave Bittner: Right.

Ben Yelin: So they were really mean to him at these hearings. Those would be clipped. They'd be put on Fox News. It would be easier for those lawmakers to raise money.

Dave Bittner: Yeah.

Ben Yelin: I think, luckily for us right now, and I don't know how long this is going to continue, ChatGPT and chatbots just haven't become politically polarized. So there just isn't that type of incentive to go off the rails and yell at somebody at a hearing.

Dave Bittner: Interesting. Give it time. Give it time.

Ben Yelin: Yes. And I know that's very cynical, but I do kind of think that's what's happening here.

Dave Bittner: Okay. Well, that's a really interesting insight. Will Oremus wrote about this hearing and laid out the three-point plan that Altman has for regulating AI. He seems to be very pro-regulation, which I guess we hear from Silicon Valley from time to time. But I often think it's, you know, please don't throw me in the briar patch.

Ben Yelin: Right. It's like we'd like to be regulated but on our terms.

Dave Bittner: Right.

Ben Yelin: It's kind of trying to get ahead of more invasive regulations by recommending your own set of regulations.

Dave Bittner: Yeah.

Ben Yelin: Now, I actually think the regulations he laid out in this hearing are relatively reasonable. They're good ideas. Some of them are those that you and I have expressed in the past.

Dave Bittner: Right.

Ben Yelin: A new government agency charged with licensing large AI models and empowered to revoke those licenses for companies whose models don't comply with government standards --

Dave Bittner: Right.

Ben Yelin: -- I mean, that would be a pretty radical thing for the government to do, to set up an oversight agency for this emerging technology. But that would be a pretty strong form of regulation.

Dave Bittner: Yeah.

Ben Yelin: Creating safety standards for AI models, including evaluations of their dangerous capabilities. I know you've talked about that.

Dave Bittner: Right. An FDA for AI.

Ben Yelin: An FDA for AI. They would have to pass certain tests for safety -- and I'm quoting here -- such as whether they could self-replicate and exfiltrate into the wild, that is, go rogue and start acting on their own. Cue the horror movie.

Dave Bittner: Right. Cyberdyne Systems goes self-aware.

Ben Yelin: Exactly, which does seem to be happening a little bit. So I think there's a basis for that fear. And then requiring independent audits from independent experts. I think all of that is good. I think there's recognition within the industry that we're moving very fast here. The technology is developing faster than we're able to control it and that, if we don't get ahead of this, there are going to be really serious consequences. And once this cat is out of the bag, it's going to be impossible to just turn around and institute post-hoc regulations.

Dave Bittner: Right.

Ben Yelin: So, I mean, I take these recommendations in good faith. And I think it's encouraging that Congress is willing to hear him out on this, and they're willing to have this productive conversation.

Dave Bittner: Yeah. I think it's interesting. This article points out that absent from Altman's proposals is requiring AI models to offer transparency into their training data or prohibiting them from being trained on artists' copyrighted works, as some legislators have suggested.

Ben Yelin: Yeah. Those are two separate issues. I think the transparency into the training data, I understand why the companies would be against that because it's proprietary.

Dave Bittner: Right.

Ben Yelin: And they don't want to tip off their competitors or future competitors. I don't necessarily think they're trying to hide anything. I just think any company wouldn't want to share their secret sauce. So, you know, I'm not sure how realistic it would be to include that in regulations.

Dave Bittner: Yeah.

Ben Yelin: The separate issue on being trained on artists' copyrighted works I think is something that lawmakers really could potentially resolve. I mean, we've been adjudicating copyright disputes throughout our country's history. Congress is explicitly empowered to do so in Article I, Section 8 of the Constitution.

Dave Bittner: Yeah.

Ben Yelin: And there should be a way to protect the intellectual property rights of creative works while allowing this technology to flourish. So I would be more bullish on that type of regulation than I would on transparency, even though I think there certainly would be upsides to transparency into some of the training data.

Dave Bittner: Yeah. I'll tell you, I'm not sure how I feel about prohibitions on training these things on copyrighted works. And the basis of my argument is this: let's say, Ben, you're an artist, and I hire you to paint something. And I say to you, Listen.

Ben Yelin: I apologize in advance.

Dave Bittner: Say, Listen, Ben. I really want this painted in the style of Renoir.

Ben Yelin: Right.

Dave Bittner: And you say, Ah. No problem. And you go to your local museum, and you walk around in the gallery, spending time soaking in --

Ben Yelin: Right.

Dave Bittner: -- the great artistry of Renoir. And then you go back to your studio, and you paint something in that style. Now, putting aside the fact that Renoir's paintings are probably in the public domain now, certain --

Ben Yelin: Right. Put that issue aside. Yeah.

Dave Bittner: Yeah. Where does that leave us? I mean, you haven't violated copyright by doing something in the style of someone else, and I think an argument could be made that, with AI, that's what they're doing.

Ben Yelin: Right.

Dave Bittner: I think the problem that we're having with it is that we have never had to contend with this kind of scale or velocity. And maybe that's enough of a difference to be the difference. But I'm wary of it.

Ben Yelin: Yeah. I'm wary of it too. I mean, I think that's a very legitimate concern. You can have something that's inspired by the creative works of somebody else or in the style of those works. That shouldn't constitute a copyright violation. I guess I'm just saying that this is something that Congress, at least theoretically, has the ability to figure out a way to regulate so that it does the least amount of damage. You're protecting people's intellectual property rights while also not overly constraining these platforms. I don't know the best way to do that. I just feel like there is a way to do it that would be equitable for everybody.

Dave Bittner: One of the other articles I saw here on the Post was from Cristiano Lima, who was making the point that some lawmakers are wary of concentration in the AI market, that we're going to end up with a handful of the usual suspects, the big tech companies -- the Microsofts of the world, the Googles, the Apples, the Amazons -- having all the power here. What resonates with me about this is, a couple of weeks ago, we were at the RSA Conference. And I was --

Ben Yelin: Using the royal we there. I didn't get my free trip back to San Francisco.

Dave Bittner: Sorry.

Dave Bittner: Sorry, listeners. Maybe next year, Ben. Maybe next year. So we, and by we I mean me, was at the RSA Conference. And going into the conference, I was really curious to see how things were going to shake out when it came to using this particular flavor of AI, these large language models, specifically in cybersecurity and at cybersecurity companies. And one of the things I came away with was the big companies, the Microsofts, the Googles, they are all in on this.

Ben Yelin: Right.

Dave Bittner: And, because of that, it's extraordinarily hard for the smaller players to stay out.

Ben Yelin: Right.

Dave Bittner: And I hadn't really thought that through before I got to RSA and sort of had it laid bare. And I witnessed the way things were flowing, right.

Ben Yelin: Right. This is the future. And if you're a smaller company, you're going to want to glom onto this because whatever functionality your company provides is going to be hitched to this type of language model one way or another.

Dave Bittner: Right, right.

Ben Yelin: Yeah.

Dave Bittner: Whether we're ready for it or not, this is the way the industry is going.

Ben Yelin: Yeah. I thought it was really interesting at the hearing when Altman, the OpenAI chief, said that he thinks there's going to be a relatively small number of providers just because it's hard to develop something like this at scale. But he said that that could actually be a benefit because it'd be easier to regulate if there are only a few players. To put it mildly, I'm very skeptical of that sort of argument because he would personally benefit from fewer players in the marketplace.

Dave Bittner: Right. Does that work with pharmaceutical companies?

Ben Yelin: I don't think so.

Dave Bittner: Okay.

Ben Yelin: I don't think so.

Dave Bittner: Can you imagine? Oh, my gosh, Ben. I just had a horrible, horrible flash forward of every time we turn on the TV seeing ads for different AIs.

Ben Yelin: Oh, it's coming. It's definitely coming.

Dave Bittner: Like the way we see pharma ads, you know. Like, Ask your doctor about -- ask your tech provider about using Microsoft AI or about Google AI.

Ben Yelin: If your AI has hallucinations that last more than four hours --

Dave Bittner: Right.

Ben Yelin: -- yeah, please --

Dave Bittner: -- contact tech support.

Ben Yelin: Exactly. Exactly.

Dave Bittner: Oh, my gosh.

Ben Yelin: It's going to be a dark era. It also seems like -- I don't know if it feels this way to you, but every day we have a new story about an innovative use of OpenAI or a similar platform that just kind of blows my mind. I mean, for me, I focus a lot on academia, and now it's mastering master's-level, doctorate-level courses. And the ability to create artwork through artificial intelligence --

Dave Bittner: Right.

Ben Yelin: -- that can depict things that hadn't previously been depictable, if that makes sense. Like certain emotions that people aren't able to properly articulate, when you put them into this type of language model, they're able to spit something out that represents that type of emotion.

Dave Bittner: Wow.

Ben Yelin: Like loneliness or something. The one I saw today which is just wild is that there was an AI-generated image of the Pentagon exploding.

Dave Bittner: Oh, yes. I saw that.

Ben Yelin: -- a massive fire at the Pentagon. It was all generated by AI. And it was tweeted out by the verified account, supposedly, of a news source. Of course, anybody can get a verified account on Twitter these days by paying $8 and having their phone number verified somehow.

Dave Bittner: Right.

Ben Yelin: And this was not actually a news source. It was a false story. And the Pentagon had to actually put out a statement saying, Guys, we're not actually on fire.

Dave Bittner: Yeah.

Ben Yelin: But I just feel like something like this is happening every single day now. It is moving -- I know I'm being maybe overly reflective here, but it's just -- it's moving very quickly.

Dave Bittner: It really is. And I'm not sure how our legislative system reacts to this because it seems to me -- and you're certainly much more of a scholar than I am when it comes to this sort of thing -- but it seems to me like a slow, deliberative pace had always been considered a feature.

Ben Yelin: Yeah. Yes. And it's certainly not a feature when you have to potentially regulate something that's moving at the speed of lightning. And we've seen that in our legal system and with Congress. My one hope is that, if this does not become another political issue that's unduly polarized, there's something that some scholars have referred to as the secret Congress. Basically, things members of Congress work on that don't pack a political punch. It's not something that you're going to see on cable news.

Dave Bittner: Right.

Ben Yelin: But they're just kind of obscure and nerdy policy issues that Congress just resolves with nobody noticing. And I'm kind of hoping this turns into one of those issues that can get through the secret Congress. But I'm dubious.

Dave Bittner: I think it's way too flashy for that.

Ben Yelin: I do too.

Dave Bittner: But here's hoping.

Ben Yelin: I do too. I think you're right and I'm wrong, but we can be hopeful.

Dave Bittner: Yeah, yeah. All right. Well, we will have links to these stories in the show notes. And, of course, we would love to hear from you. If there's something you'd like us to consider for the show, you can email us. It's caveat@thecyberwire.com.

And I recently had the pleasure of speaking with Mathieu Gorge. He is from a company called Vigitrust. And our conversation centers on generational attitudes toward privacy. Here's my conversation with Mathieu Gorge.

Mathieu Gorge: So I think that everybody, throughout their lives, has had different levels of privacy concerns and privacy levels. If I look at myself, for instance, I distinctly remember when I was a child playing around at a restaurant with my sister and other kids, the parents having some wine in another area of the restaurant, and nobody really cared about security or privacy. Then I remember being in primary school back in the '80s and starting to work with computers like the Thomson MO5 and TO7, for those old enough to remember those. And then going to university in Northern Ireland in the mid-1990s and seeing huge labs with 200 computers all connected to the internet, and hearing for the first time the notion of getting lost on the internet, from one page to another to another. And if you look at that evolution throughout all those years, my feelings on privacy have evolved. And my feelings about security have evolved as well, long before I went into the security industry. So when I look at life today, where I have kids that are less than 10 years old and I have my mother who is nearly in her mid-80s, both generations are using the internet very differently, with very different risks. The risk for kids is that they'll be bullied or groomed or assaulted online, whereas the risk for the older generation is that they'll lose money. And so I think it's well worth recognizing that we all have different views on privacy and on security levels. And there's also, of course, the cultural differences from one continent to another or from one country to another.

Dave Bittner: You know, kind of like what you described, my wife and I have had many conversations about our own kids, about how they view privacy and how they share things online to a different degree than we would be comfortable with. And one of the things that we talk about is how we try not to be overly judgmental, in that there might not be good or bad or better or worse. Sometimes things are just different. How much do you think that applies to how we approach this?

Mathieu Gorge: Yeah. I mean, I see teenagers. You know, they tend to share their passwords with their friends. They don't really understand the value of passwords, because their view is that they want to share as much as possible. So what's the point in having a password, because, in any event, they're going to share what they want to share. And they're probably going to overshare anyway, on Snapchat, on Instagram, or whatever. So they have a different view of it. My view, to put it bluntly, is that a password is like your underwear. You don't share that with anyone. It's one of those things that is just for you. And I have been groomed, so to speak, by the industry to have different passwords for different types of applications. Like, for instance, my passwords for banking would be extremely different from my passwords for an airline or for some sort of a loyalty card. They're all very strong passwords. But certainly from a banking perspective, where money or health data is concerned, I would tend to not only use the password but also look at the two-factor authentication or multifactor authentication that is, generally speaking, included free of charge. But very few people turn it on. So I think there's an education problem here as well, not just for younger and older generations but for everyone: when you start using a new application or a new system, don't just use the password as your only obstacle to malicious parties coming into your data or your systems. Sometimes you have different levels of authentication that you can turn on, and it's all included in the solution that you've just purchased.

Dave Bittner: You know, I'm of the age where, when I was a child growing up, I remember when seatbelt laws started to come into play here in the United States. And there were a lot of people who were against that; they felt it was an imposition on their freedom and so on and so forth. But these days, it's hard to imagine getting in a car and not buckling up your seatbelt. And that's because it was a requirement; you could get fined if you didn't do it. I'm wondering if we need similar things for password protection online, for security and privacy. Should some of these things be compulsory so that people don't have so much of a choice about protecting themselves?

Mathieu Gorge: Well, certainly in the banking world, you need to have multifactor authentication. And, in fact, I was setting up access to new accounts for myself yesterday. And I nearly lost the will to live because of the various levels of identification that I needed to provide from the phone, from a different device, and then back onto the computer, and then using a fob. But, I mean, the reality is, if it's a pain for me, it would be a pain for an attacker. And even if the attacker is very skilled, they will probably go after the low-hanging fruit rather than going after a system that has three or four different levels of security. It's always the balance between convenience and security. But think of it this way. If you look at the enterprise world, where we need to have compliance as well as security, the analogy would be that you're driving a motorcycle, and you've got a helmet that is super light and only covers your skull. That is compliant in some countries. And, specifically, in some states in the US, you can actually drive a very powerful motorbike wearing that. Are you compliant? Yes. Are you safe if there's an accident? Absolutely not. So that would be the same analogy: are you compliant? Like, I can force you to have a password. But if you're going to share that password with everyone, well, that's not safe. So there's definitely an education process here. And I think that needs to come from the top. Look at the initiatives from the DHS in the US around Cybersecurity Awareness Month, where they did a very good job this year at promoting multifactor authentication, promoting security, and being vigilant and so on. And then you look at ENISA, the European Network and Information Security Agency, that did the same the same month. Everybody knows that it's important, right? So if you choose not to use the levels of protection that are made available to you free of charge, and you don't actually need to be a technical wizard to use them, then you're putting your data at risk. And you'd be surprised how many people in the business world do not use the multifactor authentication on LinkedIn, for instance. It's there. It's available free of charge. But very few people turn it on; I don't think they're aware of it. Because it's certainly there. It's free. It's not really cumbersome. Even if you log in from a different device, you just get a one-time password by text, and off you go. What you want to do is you don't want to advertise yourself as the easiest target. And so the way to do that is to put as many obstacles as you can so that the attackers are going to go to the low-hanging fruit. Can we break security, even if you have all of the various levels? Sure. At some stage, we can. There's no 100 percent security. Are they going to try and do that? No. They've got easier targets to go after.
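An aside for the technically curious: authenticator-app codes like the ones Gorge describes are commonly generated with the time-based one-time password (TOTP) scheme from RFC 6238, built on the HMAC-based scheme in RFC 4226 (SMS codes, by contrast, are usually random values generated server-side). Below is a minimal Python sketch of the TOTP algorithm with its usual defaults; the base32 secret is a hypothetical demo value, and real deployments should use a vetted library such as pyotp rather than hand-rolled crypto.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Generate a time-based one-time password (RFC 6238, SHA-1 defaults)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of 30-second windows since the Unix epoch.
    counter = int(time.time()) // interval
    msg = struct.pack(">Q", counter)  # 8-byte big-endian counter (RFC 4226)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    # Dynamic truncation: pick 4 bytes at an offset given by the last nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Hypothetical demo secret; a real secret is provisioned by the service,
# typically via a QR code scanned into the authenticator app.
print(totp("JBSWY3DPEHPK3PXP"))
```

Because both the app and the server derive the same six-digit code from the shared secret and the current 30-second window, the code works offline and expires quickly, which is what makes it a cheap extra obstacle of the kind Gorge recommends.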

Dave Bittner: How much do you think this should be the responsibility of the user versus the provider? I'm thinking about my parents, as well, who are older, in their 80s. And a lot of this is really challenging for them. The complexity -- they have a hard time dealing with it.

Mathieu Gorge: So you don't want security to become an age problem or a social problem. In other words, you don't want it to become a dividing factor where some people get left out of major opportunities to have an easier life, to monitor their blood pressure using an app that's connected to some sort of a watch or whatever. What you really don't want to do is alienate some people because of the technology, because the technology actually adds value. In terms of health, it definitely helps reduce incidents. And, therefore, it will help reduce costs and free up emergency services for other emergencies. So you need to be careful there. If you put the onus on the user to have super strong security, but your target market is 80-plus years old, then, I suppose as a vendor, my opinion would be that you need to make sure that you've got a very simple helpline that can help them do that. If you just leave them there and they end up not using the solution because of security, you're then defeating the purpose.

Dave Bittner: Where do you suppose we're headed with this? Is the future one where we no longer use passwords and usernames? Where do you suppose we're going to end up?

Mathieu Gorge: Well, I mean, biometric authentication is easy at any age because it's basically something you are, and you carry it with you. And so there's no training here. You just say, Hey, give us your fingerprint, or we'll look at your eyes. That's the way we will identify you. And it's reasonably secure, and it's easy to deploy. But the technology itself, from a vendor perspective, is not necessarily super cost-effective just yet. But my guess is that, in order to simplify everything, it's going to be based on something you are and something you have, as opposed to something you know and something you have, which is what we have now with the password and, say, a fob. So if you look at design thinking and all of the science fiction stuff that we see at the moment, you don't see anyone in any of those science fiction movies putting in a password. It's all based on their movement or the way their body reacts or the way they look or whatever. And the challenge with that, of course, is that it allows you to do social monitoring, in a way, where you're actually monitoring the physical activity of people, and their body becomes their ID. So, either way, there's going to be a trade-off. And there's going to be issues with privacy because, whilst it might make it super easy for me to authenticate on an application, do I really want the vendor to know if my health is doing better or worse? I don't want them to upsell me. You know, all of those things need to be taken into account. Now, of course, comes the idea of AI on top of that, where AI might be able to solve some of those problems with regards to more elaborate ways to authenticate. But I see AI as a double-edged sword, because it can be used to facilitate access to various systems and to secure them, but it's also being used by the bad actors to essentially launch more complicated attacks. So, again, that's an area that I would watch very carefully.

Dave Bittner: What about some of the existing technology? I'm thinking of things like Face ID on iOS devices, where there's something I have -- I have my mobile device -- and then it scans me to see if I am who I say I am and then can pass on that verification even to a third party, to say, Yes, this is the person they say they are. How close does that get us to where we need to be?

Mathieu Gorge: It's a step in the right direction. But, I mean, obviously, it means that you agree tacitly that the provider is going to have a picture of your face and, actually, will have a copy of your ID, so to speak. So, again, who really reads the terms and conditions when we click on next, next, next in order to get access to the system? Very few people do that. And, generally speaking, there are a lot of clauses in the contract that say, Hey, if anything was to happen to that ID, our liability is very restricted, and so on. And you accept that it's very restricted. So, from a technical perspective, it's definitely going in the right direction. From a legal perspective, it's a bit of a minefield right now. But I do think that you're going to see more and more that, you know, your fingerprints, your eyes, your face, parts of your body, and/or how your body reacts will become your ID moving forward.

Dave Bittner: Ben, what do you think?

Ben Yelin: I thought it was really fascinating. I mean, I hadn't really thought of privacy from a generational perspective and how generations have different expectations, or thought about a future without passwords.

Dave Bittner: Yeah.

Ben Yelin: I mean, I think that's definitely happening. I don't think our kids are going to have to remember passwords. I think it's going to be biometric identifiers.

Dave Bittner: Right.

Ben Yelin: It's just a really interesting window into this world. So I thought it was a really good interview.

Dave Bittner: Yeah. Well, thank you. And our thanks to Mathieu Gorge from Vigitrust for joining us. We appreciate him taking the time.

And that's our show. We want to thank all of you for listening. We'd love to know what you think of this podcast. You can write us an email at cyberwire@n2k.com. Your feedback helps us ensure we're delivering the information and insights that help keep you a step ahead in the rapidly changing world of cybersecurity. We're privileged that N2K and podcasts like Caveat are part of the daily intelligence routine of many of the most influential leaders and operators in the public and private sector, as well as the critical security teams supporting the Fortune 500 and many of the world's preeminent intelligence and law enforcement agencies. N2K's strategic workforce intelligence optimizes the value of your biggest investment: your people. We make you smarter about your team while making your team smarter. Learn more at n2k.com. Our senior producer is Jennifer Eiben. The show is edited by Elliot Peltzman. Our executive editor is Peter Kilpe. I'm Dave Bittner.

Ben Yelin: And I'm Ben Yelin.

Dave Bittner: Thanks for listening.