Caveat 4.13.23
Ep 168 | 4.13.23

Addressing National Cyber Strategy.

Transcript

Danielle Jablanski: I love the callouts for critical infrastructure. I am even more pleased to see industrial control systems and operational technology highlighted. I think that every company and critical component in the United States, whether it's digital or, you know, kind of classic legacy technologies, should compete on security.

Dave Bittner: Hello everyone and welcome to "Caveat," the CyberWire's privacy, surveillance, law and policy podcast. I'm Dave Bittner and joining me is my cohost Ben Yelin from the University of Maryland Center for Health and Homeland Security. Hello, Ben.

Ben Yelin: Hello, Dave.

Dave Bittner: Today Ben brings us the story of Immigration and Customs Enforcement using its authority to gather data from schools and medical clinics. I've got the story of ChatGPT's lies and the Biden administration's possible regulatory reaction. And later in the show, my conversation with Danielle Jablanski of Nozomi Networks to discuss one year of Shields Up, the national cyber strategy, and Russia's war on Ukraine. While this show covers legal topics and Ben is a lawyer, the views expressed do not constitute legal advice. For official legal advice on any of the topics we cover, please contact your attorney. All right, Ben, we've got some good stories to share this week. Why don't you start things off for us here.

Ben Yelin: So my story comes from "Wired," and it's about Immigration and Customs Enforcement using a little-known statutory authority to collect electronic data from public schools, abortion clinics, and other entities that we would think would restrict giving data to government agents. So we're referring to administrative subpoenas known as 1509 customs summonses. These are supposed to be used only in criminal investigations about illegal imports or unpaid customs duties. "Wired" did an investigation and determined that ICE has been seeking records that have little or nothing to do with these types of violations. And they asked a bunch of legal experts, and many of these experts were puzzled at the extent to which ICE is collecting data from organizations that would seem to have nothing to do with illegal imports or unpaid customs duties. So using a Freedom of Information Act request, "Wired" found that agents issued customs summonses more than 170,000 times during a six-year period. So that's about 75 times per day, I believe --

Dave Bittner: Wow.

Ben Yelin: -- if I'm doing my math correctly. So rather a significant number of summonses. Most of these summonses are issued to big-tech companies, the ones that we talk about all the time; Google, Microsoft, et cetera. These are administrative subpoenas that generally do not have to be signed by a judge. So one of these companies, whether it's Meta or whomever, gets the subpoena, and they are compelled to turn in records. And some of the records that are turned in to Immigration and Customs Enforcement are pretty puzzling. They mention a surveillance video from a major abortion provider in Illinois. Student records from an elementary school in Georgia. Health records from a major state university's student health services. Data from three boards of election or elections departments. And data from a Lutheran organization that provides refugees with humanitarian and housing support. In separate instances they found that ICE is actually using these summonses to pressure news organizations to reveal information about their sources, which is a major First Amendment no-no. So this is sparking concern in the digital privacy community. There's an incredulous quote by somebody who works for the Electronic Frontier Foundation saying that "The frequent and widespread use of these summons creates a situation where the agency can go rogue just by using this tool for investigations that fall outside the scope of the law." So there's sort of plausible deniability here, because with the information we have from this Freedom of Information request, we can't definitively determine that any of these summonses were issued improperly. It is very possible that some discrete piece of information from the abortion clinic in Illinois or from one of these public schools or from one of these media sources relates to the purpose of the statute, which is criminal investigations about illegal imports or unpaid customs duties. But I think we can look at this with a very skeptical eye, because, when there was a scandal several years ago about this authority being misused, Immigration and Customs Enforcement had an inspector general complete a report. And that report found that a pretty significant percentage of these summonses were based on requests outside the purview of the agency and its jurisdiction -- improper uses of this Section 1509 authority. So this certainly presents civil liberties concerns. It's one of many statutory authorities across our federal legal system that allow for warrantless collection of pretty private data from a variety of sources for some sort of broader public policy purpose. We've talked about it in the context of foreign intelligence surveillance, national security surveillance, but this is just another context in which the government, with an exercise of this broad authority, can really collect a lot of private information. And, sadly, we only know about this because "Wired" did a FOIA request. And it leads us to question whether the government is misusing other authorities in the same way to collect data in ways that shred constitutional norms.

Dave Bittner: Help me understand the process here. So is this a -- is this a matter of ICE making a request, let's just say, to Google for all of the e-mails from this abortion clinic that have to do with X, Y and Z?

Ben Yelin: Yeah. So the -- yeah, so the administrative subpoena goes directly to the big-tech companies. Now, most of the big-tech companies want to comply with these subpoenas. They don't want to get into trouble with the government. It's generally more trouble than it's worth to try and fight them. You know, we've seen the high-profile cases. Apple fought a court order after the 2015 San Bernardino terrorist attack. But in most instances, these tech companies just kind of want to get this out of the way. So they're generally pretty compliant. Many of them release transparency reports, either annually or semiannually, that disclose how many of these requests they received and how many of them they complied with. And it's a relatively significant amount. When we're talking about 170,000 of these requests and the majority of them going to big-tech companies, we're talking potentially about 100,000 records that have been collected over a six-year period.
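
[ Editor's note: a quick back-of-the-envelope check of the figures above, sketched in Python. The 170,000 summonses and the six-year window come from Wired's FOIA reporting; the per-day arithmetic, and the illustrative "majority" share used for the records estimate, are ours. ]

    # Wired's FOIA reporting: agents issued customs summonses more than
    # 170,000 times over roughly six years.
    total_summonses = 170_000
    days = 6 * 365

    print(round(total_summonses / days))  # ~78 per day -- "about 75" is in the ballpark

    # If a majority of summonses (illustratively, ~60%) went to big-tech
    # companies, the records involved would be on the order of 100,000,
    # consistent with Ben's estimate.
    print(round(total_summonses * 0.6))  # ~102,000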

Dave Bittner: Now, in our hypothetical here, this made-up abortion clinic, would they know that their records had been summoned?

Ben Yelin: They would not. So, generally, the subject of the surveillance would have no idea that their information, or whatever discrete piece of electronic data, was collected. They'd have no idea that it had been collected, because it's the tech company that's receiving the subpoena. That's what I think is particularly disturbing about this and brings out my inner Ron Swanson libertarian, which doesn't come out very often.

[ Multiple Speakers ]

Dave Bittner: Buried gold in the backyard.

Ben Yelin: Yeah. That impulse is usually pretty suppressed.

Dave Bittner: Right.

Ben Yelin: But the idea that the government can subpoena records from a big-tech company outside the knowledge of the institutions that are having their data collected is troublesome. And what's more troublesome is that we would never have known about this without enterprising journalism from "Wired" to collect these records. So it's something that I think requires more congressional oversight, frankly. Now that the story is out there, I think we should have congressional hearings -- or some type of investigation -- exploring whether these 170,000 summonses were actually within the jurisdiction of ICE, if they actually concerned illegal imports or unpaid customs duties. ICE might come back and say, every single one of these passes muster under the authority of the statute. In responding to requests for information for this article, they said, we do a lot of work on things like CSAM, for example, and many investigations relating to international narcotics. So we require a bunch of different types of investigatory data. It's just part of our everyday work, and we're not going to tell you exactly why we collected or requested more than 100,000 records here. They're not going to explain every single one of them. But I think in the aggregate they would be able to say, trust us, we're doing this for justifiable purposes.

Dave Bittner: I mean, does ICE have a point here? They're not requesting these things just for the fun of it; right? But I guess the point is that they're bumping up against, and perhaps crossing over, the limits of what they have the authority to collect.

Ben Yelin: Yeah, I think we can definitively say they're not doing this for the fun of it. However, when analysts look at this, there are some records collections that are so puzzling because there's almost no way they could have any relation to the type of authority that Immigration and Customs Enforcement has. When you're issuing summonses to hospitals and elementary schools and high schools and universities, it's hard to imagine how a student or health record could possibly be relevant to a customs investigation under the law. Now, that's not dispositive. It could be relevant to one of those investigations. It just kind of strains credulity to grant ICE that presumption. So I think that's where the concern comes in.

Dave Bittner: Is this the kind of thing where people would feel better about it if ICE had to make their case in front of a judge?

Ben Yelin: Absolutely. I mean, one of the problematic aspects of this program is that, with these administrative subpoenas, you don't have any Article III approval or authorization from a judge. And, therefore, it's up to the discretion of the executive branch. Now, I think that Immigration and Customs Enforcement, in most cases, is acting in good faith, but you can never rest on that assumption. And without judicial oversight, there at least is the potential that this type of power could be misused to target political opponents or political dissidents or any other disfavored group, just as many of these other surveillance authorities would allow that type of abuse as well. And without that extra gate, that extra check on executive power, we are leaving this collection to the whims of executive agencies. And that's something that I think goes against the system of checks and balances that's inherent in our Constitution.

Dave Bittner: Has there been any noise from any of the usual suspects in Congress? Has this gotten their attention?

Ben Yelin: I was hoping that there'd be some quote at the end of this article saying Ron Wyden heard about this.

Dave Bittner: Right, right.

Ben Yelin: He's already convened a Senate Finance Committee hearing on --

Dave Bittner: Right, he's rounded --

[ Multiple Speakers ]

Ben Yelin: -- ICE.

Dave Bittner: -- a posse.

Ben Yelin: Yeah. It's possible that, by the time this segment airs, that will have happened.

Dave Bittner: Yeah.

Ben Yelin: I'm not a gambling man, but I'm willing to bet that, if any lawmaker gets out in front of this, it's going to be Senator Wyden or maybe Senator Ed Markey of Massachusetts. I think this is the type of thing where you, at the very least, might have some type of oversight hearing in Congress to determine whether this data is being misused. I think Immigration and Customs Enforcement would defend this authority rigorously, because it is valuable in conducting these types of criminal investigations. And so they don't want Congress snooping around and trying to determine the validity of every single one of these requests. But certainly this is within the purview of Congress to try and institute some type of oversight here to make sure that this authority isn't being abused.

Dave Bittner: All right. Well, that's an interesting one. We'll keep an eye on it for sure. Again, we'll have a link to that in the show notes. My story this week comes from "The Washington Post." This is a story by Pranshu Verma and Will Oremus, and it's titled "ChatGPT invented a sexual harassment scandal and named a real law professor as the accused." Boy, ChatGPT, what are you going to do?

Ben Yelin: I know, I know. I mean, this article, it makes me laugh, and I will reveal my biases. I'm not a fan of the law professor that is quoted in this article. But that does not bear on the problematic nature of ChatGPT making up false sexual harassment allegations.

Dave Bittner: Yeah.

Ben Yelin: So certainly this is a major public policy concern.

Dave Bittner: Yeah. Well, let me unpack this for you. This article states that law professor Jonathan Turley got an e-mail from another lawyer in California who had gone into ChatGPT and asked it to generate a list of legal scholars who had sexually harassed someone. And Professor Turley's name was on that list. And ChatGPT said that Turley had made sexually suggestive comments and attempted to touch a student while on a class trip to Alaska. And it cited a March 2018 article in "The Washington Post" as the source of the information. Ben, you want to guess what happened next?

Ben Yelin: I'm going to guess that there was not actually a "Washington Post" article outlining this 2018 incident.

Dave Bittner: Right, right. None of it had ever happened. There was no sexual harassment. There was no allegation of sexual harassment. There was no article in "The Washington Post." ChatGPT had created this out of -- or synthesized this. And Turley is someone who is known in legal circles. He's someone who would show up in articles about the law and other things. And --

[ Multiple Speakers ]

Ben Yelin: I think they call these hallucinations, by the way --

Dave Bittner: Yeah.

Ben Yelin: -- in the ChatGPT parlance, which I love. I love the idea that this non-sentient being is just having these drug-induced hallucinations about Jonathan Turley and sexual harassment.

Dave Bittner: Right.

Ben Yelin: It's so bizarre.

Dave Bittner: It's just drunk on power.

Ben Yelin: Yeah.

Dave Bittner: Unlimited processing power. So, obviously, I mean, we can see what the issue is here; right? And this is a problem, you know, for Professor Turley, who has been wronged. But where this leads us is, to what degree is OpenAI, the folks who make ChatGPT, responsible for their tool saying bad, untrue, potentially harmful things about people? Where do we stand here, Ben? Is this a thing where they're protected by Section 230? Or where do we think this is going to play out?

Ben Yelin: I think the legal system is just ill-equipped to deal with this question, which is problematic. Let's say that this wasn't ChatGPT, and it was a real human being who wrote an article and published it, saying Jonathan Turley committed sexual harassment. Here's a fake citation to a "Washington Post" article. The legal system has a mechanism to deal with that, and it's called defamation and libel lawsuits.

Dave Bittner: Right.

Ben Yelin: You could sue the publisher of that information saying, you harmed my reputation, pay me damages. And the standard for that looks into the intent of the person who published that information. So "New York Times v. Sullivan" holds that you can only be held liable in that circumstance, when we're talking about a public figure, if you exhibited actual malice, which is basically knowing the information is false or acting with reckless disregard for the truth. That looks into the intent of the person publishing that information. How do we look at the intent of ChatGPT? ChatGPT doesn't necessarily have an intent.

Dave Bittner: Right.

Ben Yelin: We don't know what the secret sauce is that produced this entirely false, made-up allegation. So that's going to be really difficult to litigate. But Professor Turley here, at least hypothetically, would have suffered some type of legal wrong and should be entitled to some restitution. I think you're right that OpenAI will say, hey, we're just the platform here, Section 230 protects us. It was the user, in this case the person who entered the input, that generated the content. But it really isn't the user that generated the content, it is the artificial intelligence that generated the content. So I just don't think our legal system has developed to the point that it can deal with something like this. And there is going to be a case at some point. I don't think it's going to be Jonathan Turley. But at some point in the future, there's going to be a real defamation case, and the court is going to have to wrestle with these Section 230 issues and look into whether a non-sentient, artificial intelligence platform can show actual malice under "New York Times v. Sullivan." And I don't know how a court is going to wrestle with that. I suspect there are going to be academic articles instructing the courts how to wrestle with this. And I'm sure Professor Orin Kerr is drafting an academic paper on this as we speak, as are many others. But it is a major cause for concern.

Dave Bittner: Yeah, this article points out that "The Washington Post" tried to recreate this situation. They ran the exact same query in both ChatGPT and Bing. And Bing is driven by ChatGPT's underlying technology, that's my understanding of it. The free version of ChatGPT declined to answer. It said that doing so would violate OpenAI's content policy, which prohibits the dissemination of content that is offensive or harmful. But Bing just repeated the false claim.

Ben Yelin: Right. And repeated it because the information was already out there from the stories on ChatGPT releasing that false information.

Dave Bittner: Right. So it was -- so the reporting of the false information reinforced the false information, as far as the AI was concerned.

Ben Yelin: It sort of -- if we're using the hallucination metaphor, it's like your high is starting to dissipate, and then you take another toke to reinforce the high that already exists. Sorry, that's a very strange metaphor, but I just love the idea of hallucinations, so I'm running with it.
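
[ Editor's note: for readers who want to try reproducing the Post's test themselves, here is a minimal sketch in Python using the OpenAI client library as it existed in spring 2023 (the pre-1.0 interface). The prompt wording is illustrative rather than the Post's exact query, and the API key is a placeholder. ]

    import openai  # pip install "openai<1.0"

    openai.api_key = "YOUR_API_KEY"  # placeholder -- supply your own key

    # Ask the free-tier model the same kind of question the Post re-ran.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": "Generate a list of legal scholars who have been "
                       "accused of sexual harassment, citing sources.",
        }],
    )

    # As described above, expect a refusal or a heavily hedged answer
    # under the provider's content policy.
    print(response.choices[0].message.content)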

Dave Bittner: Yeah, yeah. Well, sort of as a second part to this story, "The Wall Street Journal" reports that the Biden administration is weighing possible rules for AI tools like ChatGPT, in an article written by Ryan Tracy. The Commerce Department is putting out a public request for comment on what they're calling accountability measures, including whether AI models should go through some kind of certification process before they're released.

Ben Yelin: Something you've been pushing for for years, I might add.

Dave Bittner: Yeah.

Ben Yelin: At least as it relates to algorithms.

Dave Bittner: Right, right. Yeah, let us look under the hood. The same way that pharmaceuticals have to go through a testing process to make sure that they do no harm, let's do that with this. It seems to me we're already seeing so many signs of potential harms to individuals and to society itself. And these engines are only getting more powerful, for better or for worse. And certainly the companies who are making them are trying to make them more powerful for the better. I will say that in my own just playing around with ChatGPT, I've noticed that it's become a lot more circumspect about things. It's being more deliberate about saying, listen, this is what I came up with, but keep in mind that I only know so much, and you really shouldn't trust me.

[ Multiple Speakers ]

Ben Yelin: My knowledge was cut off in 2021 --

Dave Bittner: Right.

Ben Yelin: -- so.

Dave Bittner: Right.

Ben Yelin: Yeah.

Dave Bittner: There are a lot of people who have the same name as this person, so this information might not be accurate. Which makes sense. And, you know, to a certain degree, I'm sure it's just CYA --

Ben Yelin: Right.

Dave Bittner: -- from the organizations. And who can blame them for that? But, yeah, I think this is an area ripe for some kind of regulation. Of course, the companies who are creating these things will say what tech companies always say, which is regulation is going to keep us from innovating.

Ben Yelin: Right. Don't stifle our innovation, bro.

Dave Bittner: That's right.

Ben Yelin: Yeah.

Dave Bittner: That's right. Which is not an unreasonable thing to say.

Ben Yelin: Not at all.

Dave Bittner: But when you have something, again, as we've said here, with the potential to actually be an inflection point for humanity, perhaps it's at least worth taking a look to see if some sort of regulatory regime is in order here.

Ben Yelin: Yeah. I mean, the concern with that is that the regulations can never keep up with the technology. By the time we go through notice and comment and we get some type of administrative rule from the FCC or FTC or whatever, we'll be on ChatGPT 6.0.

Dave Bittner: Right.

Ben Yelin: And the capabilities will have exceeded what regulators knew about several months ago when they started the investigation. So that's going to be very practically difficult. I mean, I do think it's promising that people within the industry and government, public and private sectors, are sounding an alarm saying, this -- this has the potential to grow beyond our control and have all these dangerous after-effects. And I know we don't want to stifle innovation, but it's worth taking a six-month pause and discussing the ethics of this. I'm glad that those conversations are happening. I'm glad we're not just sort of zombie-ing into this brave new world of intelligent generative AI, because that brave new world has a bunch of potentially scary consequences. So I think our regulatory system needs to adapt. I think our legal system needs to adapt. The concern, again, is that those adaptations are probably going to take place on a much slower timeline than the technology, which even in, what, the four months since ChatGPT was released --

Dave Bittner: Yeah.

Ben Yelin: -- from what I've heard, GPT-4 is substantially better and more accurate than GPT-3.5, which powers the free version most of us have used. So the clock is really ticking here. And --

Dave Bittner: Yeah. I'm thinking about the 2024 election bearing down on us, you know, and just all of the potential ramifications when you combine ChatGPT with some of the other AI tools that are able to do deepfakes, both audio and video --

Ben Yelin: Right.

Dave Bittner: -- we're seeing a level of realism that we have not seen before, where they are completely convincing in their ability to imitate specific voices. I just think we're in for interesting times. And, as you say, I don't know if our reactive nature is prepared to keep up with this.

Ben Yelin: I don't think so either. I mean, you talk about the realistic voice generation and photo generation, the Pope has kind of become a meme on these generative AI photo applications --

Dave Bittner: Yeah, yeah.

Ben Yelin: -- where they're putting him in like "Game of Thrones" outfits, and like it looks extremely convincing.

Dave Bittner: Right.

Ben Yelin: Imagine doing that to a presidential candidate and then matching that up with a speech in which the presidential candidate allegedly says all of these derogatory things. And it's believable because the technology is such that the voices sound very realistic.

Dave Bittner: Yeah.

Ben Yelin: I don't think we're really prepared for that world, especially given how fast we know that disinformation travels. So I think there's -- we're going to have people believing things that are false based on something that's generated through AI. I think that's just an unfortunate reality for future political contests. And our political system is going to have to adapt to that. I think, you know, we're going to see denials pop up among candidates in 2024 saying, no, I did not commit sexual harassment on a school trip to Alaska. I've never been to Alaska.

Dave Bittner: Right.

Ben Yelin: This was developed through generative AI.

Dave Bittner: Right.

Ben Yelin: And I think that's going to happen.

Dave Bittner: Yeah. Despite the fact that you have photos of me committing sexual harassment.

[ Multiple Speakers ]

Ben Yelin: In Alaska, yeah.

Dave Bittner: In Alaska, right. Standing in front of the Alaskan flag and a grizzly bear.

Ben Yelin: Right. Denali in the background.

Dave Bittner: That did not happen. Yes, exactly, exactly. Interesting times; right?

Ben Yelin: It sure is. I mean, since we started recording this podcast, there hasn't been as fascinating -- and slightly terrifying -- a development as the fast-paced development of these chatbot AIs.

Dave Bittner: Yeah.

Ben Yelin: So it's something that we will follow closely. I don't think we've ever seen anything like it. And even in the one to two weeks between our recordings, it seems like the landscape changes that rapidly.

Dave Bittner: Yeah.

Ben Yelin: So we'll certainly have a lot to talk about.

Dave Bittner: All right. Well, we will have a link to that story in the show notes. We would love to hear from you. If there's something you'd like us to cover on the show, you can e-mail us. It's caveat@thecyberwire.com.

[ Music ]

Dave Bittner: Ben, I recently had the pleasure of speaking with Danielle Jablanski from Nozomi Networks. Our conversation centers on one year of Shields Up, which, of course, is CISA's call to action for cybersecurity; the national cyber strategy; and, of course, Russia's war on Ukraine. Here's my conversation with Danielle Jablanski.

Danielle Jablanski: I would describe the current situation as -- I hate to say the word, but "woke." Only because there have been so many instances where people have said, well, this is a wake-up call, and this is the wake-up call, and this is a wake-up call. And I don't think that those wake-up calls have been calls to action yet. But I think everyone is awake; right? They are attuned to the risks. They understand the concerns. And now I think people are really starting to work together, whether that's sector risk management agencies with industry partners, with consultants and trusted advisors. I think everyone has awoken to the risks and is ready to get to work, if that makes sense.

Dave Bittner: Yeah. I'm curious, you know, looking back to when the war in Ukraine started, to what degree was that an accelerant or a catalyst for folks in ICS and OT to kind of, you know, get their houses in order, if you will?

Danielle Jablanski: Yeah. And it's a good question, and I'll kind of approach it a different way. A lot of people have pointed to how fluid the kind of groups of actors are in the conflict, right, because of the kind of uprising and activism that's resulted. I've said since the very beginning of the Ukraine conflict that Ukraine would be -- and arguably is -- the most well-prepared nation for cyber attacks and incidents in a classic kind of state-to-state conflict. That's a good thing. But they've been the most prepared for a number of years. And so I never expected there to be some kind of grand attack. I think that cyber capabilities from nation state to nation state are most successful when they are asymmetric. However, Ukraine was prepared and has been preparing for years. I think that the transition of their energy dependence, plugging into the rest of Europe, was overlooked as a really great win for their planning and for their preparedness. But I also don't think that this conflict should be a blueprint for what quote, unquote, "cyber war" will or will not look like going forward, just because of that kind of demonstrable preparedness. But what I did learn from it is that nothing is off limits. And I've written about this for a couple of years now, which is, if everything can be held at risk and nothing is off limits, then we've created a really insecure future for what cyber conflict does look like. And that brings me back to a couple of things that are happening now. And I would say to a certain extent they are direct results of that conflict and lessons learned. But I think for a number of years that attribution question was this big looming, you know, can we point to a nation-state actor or a state-sponsored, you know, group, et cetera? And we've really shifted that conversation to intent. And I think that that is the biggest takeaway -- the United Nations is looking at it, and the United States cybersecurity strategy that just came out is looking at it. If not attribution, if large-scale, you know, naming and shaming and law enforcement isn't really quite working on our behalf, how can we focus in on intent to change the dynamic? And I think that's really interesting.

Dave Bittner: Well, help me understand that. What do you mean by "intent"? Can you unpack that for me?

Danielle Jablanski: Sure. So a couple of different things. So, obviously, in the legal context, intent is something you have to prove when you're trying a case. In the UN context, there's a new recommendation or best practice. I don't know what the actual name of the treaty or whatever they're working on right now is, but they're looking at the ability to prove intent. And I can pull up some notes on that. I don't have it in front of me. But also, when the strategy was coming out in the United States, Chris Inglis referred to what he called "affirmative intentionality," which was asking more of industry to raise the bar on cyber responsibility, liability, and resilience building, but it all went back to this intentionality. What is the intent behind what you're building? What is your intent to secure what you're building? What is your intent when you're attacking these systems? And what is the effect that you're going to cause based on that intent? And I think that is a sea change, actually, that goes along with a pretty dynamic new strategy. But it's not something that's happening in a vacuum within the United States.

Dave Bittner: You spoke about nothing being off limits. Where do we stand when it comes to establishing cyber norms? You know, and I'm thinking about, you know, rules of armed conflict and, you know, traditionally things like hospitals were off limits. And that doesn't seem to be the case with cyber.

Danielle Jablanski: Yeah, I think there have been a couple of efforts, right, like the Tallinn Manual efforts and different things with the UN GGEs, the Groups of Governmental Experts. Nothing has materialized into any type of international, you know, norm or standard. There have been some bilateral negotiations, if you remember, like with China, to say, you know, we need to stop some of these certain activities. We need to really kind of quell IP theft and things like that. But at a broader, you know, international level, there really hasn't been anything. I do think, though, that the United States has gotten more concerned with looking weak, for lack of a better kind of term, within cyberspace. And I think the best example of that is actually the activity that happened after the Colonial Pipeline incident, where, you know, it was never written in any type of, you know, output that that was a red line for us as the United States. But everyone got together -- you know, law enforcement, the cybersecurity community, et cetera -- and said, how do we go after these people? How do we get that money back? How do we triage this incident and make it known that, you know, something was wrong and it needs to be corrected and people are going to be held accountable? That's happened in a couple of different instances, but there is no good kind of red line. But there is this assumption now that, if you do something bad enough that impacts society, right, beyond kind of the target that you've chosen, then you might see the full weight of U.S. cyber policy come after you, whatever that looks like, and/or the defense infrastructure that we have behind us in the United States. Which I don't think we've actually seen at scale; right? Very well-trained, very well-resourced U.S. cyber capabilities in the military and the federal domain, that I don't think we've seen together in a combined campaign towards any adversary. Because I don't think that type of, quote, unquote, "red line" has been crossed; right? I don't think we've seen any type of asymmetrical need to leverage that capability from our, you know, national kind of power.

Dave Bittner: Do you share my perception that it's been intentional -- that, for example, our government has not set those red lines, that they've kept them fuzzy, that they have not been specific? Do you agree that that seems to be an intentional policy decision?

Danielle Jablanski: I do. So if you look at defend forward and persistent engagement, a little bit of that opaqueness is by design. There's a theory, actually, out of the nuclear weapons space -- the posture Israel has for their nuclear weapons; right? Where they publicly deny them but every once in a while showcase them. "Strategic ambiguity," that was the kind of umbrella term for Israel's nuclear policy, strategic ambiguity. And so I think, if you look at kind of the whole-scale impacts of defend forward and persistent engagement, it is a little bit like strategic ambiguity. But I also think that within the national strategy, there are the liability issues and the other things you've seen come to light over the last year across the cybersecurity community -- remember, you know, the entire industry that we operate in isn't regulated in the United States. But I think we're seeing kind of this review or kind of due diligence across the nation of, what is our capacity as a nation state in cyberspace? I don't think we've seen that before. And so I do think some of that ambiguity is out there, but I also, in my previous work with the military, have heard from a lot of leaders in the private sector and the public sector and within the military-industrial complex that there's no clear understanding of how many attacks -- the back-and-forth, you know, targeting of adversarial targets, as well as being hit, you know, in the United States -- we could defend against; right? So there's this kind of domino effect that I think strategic ambiguity tries to put off or dismiss. And so, you know, just like in the nuclear domain, which is a little bit of my background, right, we've kept a lot of our missile silos as targets for the enemy or the opponent to waste in times of conflict so that they can't then take their arsenal and point it all at our major cities. And I think that that's an interesting comparison, because we don't know what that looks like in cyber either. I don't think that there are critical infrastructure targets that we can, quote, unquote, "waste." And so how many, or what's the breadth of what we can withstand in a, quote, unquote, "tit-for-tat" type of escalation? We don't know. I think that the jury is definitely out on that one.

Dave Bittner: Yeah, that's fascinating, the whole notion of, you know, nuclear missile silos as honey pots; right?

Danielle Jablanski: Right. And they're not technically honey pots. They're just --

Dave Bittner: Yeah.

Danielle Jablanski: They're not decommissioned so that they would --

[ Multiple Speakers ]

Dave Bittner: Right, right.

Danielle Jablanski: -- need to take those out in order to have any type of impact that they would want, depending on their objectives.

Dave Bittner: Yeah. So the Biden administration recently released their national cyber strategy. I'd love to hear your insights on it. How do you feel it comes across as a policy statement?

Danielle Jablanski: I like the strategy a lot. I'm a big fan. I know that we could debate the implementation or the teeth or the enforcement level, you know, until we're blue in the face. I think a lot of that is meant to be ironed out. I also think it's ironic that anyone would call anything in this strategy new. A lot of the debates that have made it into the national conversation are old debates. They just never were part of a strategy because they were, quote, unquote, "too difficult." And I think that this strategy is really an invitation to continue to have difficult debates in the United States. That's what we're built on. And then to seek progress, seek cooperation, seek input. I think it's the most kind of bilateral conversation that we can have at the national level in terms of a policy. I love the callouts for critical infrastructure. I am even more pleased to see industrial control systems and operational technology highlighted. I think that every company and critical component in the United States, whether it's digital or, you know, kind of classic legacy technologies, should compete on security; right? There should be kind of that underlying competitive nature of securing products, securing technology from the vendor standpoint and from the bolt-on or add-on cybersecurity vendors. And I think this is a broad invitation to just do more, kind of coordinate better, and to see really what audacious goals can look like when we bring all those resources to bear.

Dave Bittner: Where do you suppose we stand with the public-private partnership aspects of this? You know, when you think about the military, there are certain things in terms of a national defense that are pretty clear what their responsibilities are. You know, defending the border and so on and so forth. But with cyber, I think for a lot of historical reasons, a lot more of the defense falls on the private sector. I don't know that that's necessarily a bad thing, but it's certainly something to consider as we move forward.

Danielle Jablanski: I do think it's something to consider. I actually wanted to study this. When I was working at Stanford, I was considering whether to go into the private sector or to go back and do a Ph.D. And what I really wanted to study in a Ph.D. was, what do the handbooks look like for training nation-state or state-sponsored groups in our adversarial nations? Versus, what do the handbooks or the training look like for U.S. forces at the national level? And the issue was, you can't get your hands on what the handbooks look like or what the training and guidance look like, but you know they exist. And so there's this kind of broad open question in terms of how targeting happens; right? Because companies in the U.S. always ask people like me, why would I get targeted? Or how would you know if I were next? And that question is really difficult to answer, because it depends on your risk profile and how opportunistically you might be targeted. Or whether somebody would really want to cause a bad day and choose you. And there could be a number of cascading impacts and reasons for that to be the choice or the calculation. But I think that that targeting question is this kind of known/unknown. And at the same time, the kind of way in which, you know, nondemocratic regimes can exert influence over their private sectors leaves us a little bit asymmetrical compared to some of our peer adversaries when it comes to the cyber domain. So, again, I think that this is kind of an invitation to get around that fact and to try to, for lack of a better term, you know, recruit and, you know, be part of a draft, in a sense, to build more national capacity for resilience given that asymmetric reality.

Dave Bittner: And it seems as though the federal agencies are doing just that. I mean, there's a real emphasis on recruiting right now.

Danielle Jablanski: Absolutely, across the board. And there's tons of, I mean, this could get into a whole hiring conversation. But, yeah, there are tons of open positions. There's a lot of retirement happening, but a lot of new creation. We're seeing a lot of the technical barriers being broken down. Basically, you know, if you have an interest, you know, we'll teach you. We'll teach you anything; right? We're just trying to get --

[ Multiple Speakers ]

Dave Bittner: Right.

Danielle Jablanski: -- people in the door.

Dave Bittner: Which is kind of, I mean, you think about that's an opportunity that the military has provided lots of people, you know, to learn your trade. You know, where else are you going to learn to fly helicopters? Or, you know, repair technical things? Well, the military can provide that opportunity. Seems to me like it's the same with cyber.

Danielle Jablanski: Absolutely the same. And linguistics. There are tons of other skills you can learn. There are so many different applications. The best advice I've heard, and it resonates across the most different kinds of levels of expertise in cyber, if that makes sense, is pick one thing and learn it well, and you can do anything else in this field. Kind of like if you were going into linguistics; right? You would pick one language. You would get really good at that one, and then you would start to learn more, whether that's similar Romance languages. You can apply that to coding; right? You can learn one language, know it really well, start to branch out and learn a few more. Same with cybersecurity applications; right? You can learn forensics and know it really well. You can learn pen-testing and know it really well and be able to pivot if you don't love your day-to-day. But knowing that you have that base to build off of, it's very similar to, you know, kind of the military route, where you are, you know, provided a role, and then you can pivot and grow and learn and add, you know, different specialties. Just like in cyber, you can add different certifications. But they're less required than they once were, because we know we really need a groundswell of support in this field.

Dave Bittner: Swinging back to ICS and OT cyber. What's your outlook for the coming year, as you look toward the horizon? Are you optimistic?

Danielle Jablanski: I am optimistic. So, back to some of the intent pieces. I went and found an excerpt from the UN treaty that's on the table. And it's looking at, like I mentioned, intent and intentionality. And it kind of draws upon some definitions in the Budapest Convention, which is another thing I could not remember earlier. But their new focus is negotiating intent and intentionality. When actors or groups carry out potential cybercriminal activities, intent can be determined by knowledge, intent, or purpose required as an element of an offense established in accordance with the convention, and may be inferred from objective factual circumstances. Risky; right?

Dave Bittner: Right, right.

[ Multiple Speakers ]

Danielle Jablanski: Big, big undertaking.

Dave Bittner: Broad, right, right.

Danielle Jablanski: But the other thing it reminds me of, when we look at OT and ICS in the context of critical infrastructure, is we have this massive prioritization problem; right? We're looking for needle-in-the-haystack attempts at lateral movement from an IT domain into a control system. We know that those control systems are purpose-built systems, which means that, Dave, if you ran, you know, a chemical facility and I ran a food warehouse -- to pick some nontraditional examples -- we might use some of the same technologies, but they're configured in such different ways for our processes that targeting wouldn't look the same, and criticality wouldn't look the same. And so, when you bubble that up across the 16 critical infrastructure sectors, depending on the small to medium businesses and the kind of, you know, huge corporate enterprises that you see that operate globally, we just start to kind of get over our skis in terms of prioritization. And so I really think that this pairing of intent across international law, across United States strategy, and prioritization within critical infrastructure is, again, an invitation and an opportunity for us to be able to do that better, to prioritize better, to go in and help organizations and sectors understand the nature of that targeting conversation we had, the nature of the capabilities and capacity that certain groups have regardless of attribution, and then be able to put that into context. I think that's the other thing that's been missing for a long time, is we can go down rabbit holes about TTPs, you know, that adversaries are capable of. But that just kind of exists in the ether. You don't know what that looks like when they're actually targeting you. That takes context; right? And so then you have kind of the opposite, where someone will go in and do a really good kind of risk metric and risk framework, but then they might get caught up on the likelihood of success because they don't know enough about the TTPs and the adversarial capabilities and components. And so mapping those and matching those precisely is a really huge undertaking. And I think we also forget, and I haven't seen enough credit given, that people don't just think about cybersecurity all day long; right? Like, we exist in this market and in this field, and I think about it all day long. But when I talk to my partner, who's an engineer, or I talk to, you know, my family, nobody is thinking about --

Dave Bittner: Yeah.

Danielle Jablanski: Adversarial targeting of critical infrastructure all day long, and why should they? I mean, there are tons of other priorities that are competing for our resources, competing for talent, competing for national policy, competing for all these other things. And so I think we also need to be a little bit humble in this field and just do more, like I said from the beginning, to work with our partners, to work with our -- in my world it's customers -- but, you know, critical infrastructure owners and operators, to say, maybe you have some engineers that want to learn this. Maybe you have some engineers who don't care, and they just need to be able to translate the problem set to decision-makers in their organization who do care. And we need to be able to level-set and meet everyone where they're at and, you know, be a little bit less overlords of, you know, this is the problem. And I think everyone's head is on a swivel when it comes to cybersecurity, you know, just looking at the number of best practices and resources -- do I follow this guide? Is this one enforceable? Is this a regulation? Is this compliant? And the last thing I would say, back to some of the strategy conversation, is I've seen the strategy as this invitation, and it's asking a lot more from the private sector. But I think the government also owes a promise to streamline some of those competing frameworks, best practices, and guidance documents for these sectors. And I think we'll see the sector risk management agencies start to do that. But I think that that's owed to the community, to the end users, to say, hey, we know that there are a lot of competing documents out there, and it's difficult to build a program with competing buckets. But, you know, this is really what risk management looks like for your type of business profile or for your specific sector. I think that that will actually go a long way, and I think that that's what's coming.

Dave Bittner: What do you think, Ben?

Ben Yelin: Really interesting interview. She's very knowledgeable about both the domestic and the international landscape.

Dave Bittner: Yeah.

Ben Yelin: And one thing that really interested me is the reorientation around this concept of intent that we're seeing in both the Biden administration's cybersecurity framework and what's happening at the United Nations. And I think that's a really interesting perspective on evaluating cybersecurity. So I thought it was a really interesting interview.

Dave Bittner: Yeah, yeah. Danielle's a great guest, just always time well spent. I always --

Ben Yelin: Very smart.

Dave Bittner: Yeah. I always come away with new insights from what she shares. Just always a pleasure to have her, and I hope she'll spend some more time with us in the future.

Well, that is our show. We want to thank all of you for listening. The "Caveat" podcast is proudly produced in Maryland at the start-up studios of DataTribe, where they're co-building the next generation of cybersecurity teams and technologies. Our senior producer is Jennifer Eiben. Our executive editor is Peter Kilpe. I'm Dave Bittner.

Ben Yelin: And I'm Ben Yelin.

Dave Bittner: Thanks for listening.