Caveat 6.22.23
Ep 177 | 6.22.23

The power of global networks.


Eric Wenger: There's a tendency to say, "Let's not spend new money on old things." Our perspective is that that is not a sustainable path forward.

Dave Bittner: Hello, everyone, and welcome to "Caveat," the CyberWire's privacy, surveillance, law, and policy podcast. I'm Dave Bittner, and joining me is my co-host, Ben Yelin from the University of Maryland Center for Health and Homeland Security. Hello, Ben.

Ben Yelin: Hello, Dave.

Dave Bittner: Today, Ben looks at a new case involving cell phone searches at the border. I've got the story of the EU taking on regulation of AI. And later in the show, my conversation with Eric Wenger. He is Cisco's Senior Director for Technology Policy and Government Global Affairs. We're discussing the issue of global network resilience. While this show covers legal topics and Ben is a lawyer, the views expressed do not constitute legal advice. For official legal advice on any of the topics we cover, please contact your attorney.

Dave Bittner: All right, Ben, we've got some good stories to share this week. Why don't you kick things off for us here?

Ben Yelin: So I'm always looking for interesting appeals court cases, and once again, it was Professor Orin Kerr who alerted me to one over Twitter. This comes from the Fifth Circuit Court of Appeals, which is in the southern part of the United States, largely based in the state of Texas, and this case concerns a search of a cell phone at the border. It is a federal case, United States v. Castillo. So what's interesting here is there's been a divide among courts about the Fourth Amendment standard for searches at the border. The Fourth Amendment prohibits unreasonable searches and seizures.

Dave Bittner: Right.

Ben Yelin: There has been a longstanding exception under what's called a "Special Needs Doctrine," basically something that's distinct from normal law enforcement operations, that we have a policy interest in protecting the border that goes above and beyond apprehending criminals, and therefore the standard should be lessened. We've talked about a number of other cases from different circuits where courts have come to differing views about the extent of that border search exception.

Dave Bittner: Right. Just recently, we were talking about this.

Ben Yelin: Yeah, I mean, it was literally a couple of weeks ago.

Dave Bittner: Yeah.

Ben Yelin: We had a case that went the other way from the way this one went. But there's really a divide as to how broad these searches can be at the border in the absence of a warrant. These are warrantless searches, which is what makes them particularly controversial. So what happened here is, based on a suspicion, law enforcement at the border conducted a manual search of a cell phone. By conducting that manual search, they discovered indications that the person had child pornographic images on his device, and with that information gleaned from the manual search, they went ahead and conducted a forensic search, and through that forensic search, they got all different types of evidence, really a treasure trove of prohibited documents. He was arrested and was convicted. He is challenging the sufficiency of the evidence against him, saying that this was an overbroad warrantless search at the border. What's interesting about this case is I think there's a split among different circuits about the standard for border searches. The Fifth Circuit has not really weighed in on that growing dispute, and just quoting from the opinion here, the judge in this case, James Ho, says that in some circuits the governing standard depends on the extent of the search. The circuits are divided over whether reasonable suspicion is required for a forensic search of a cell phone at the border, but every circuit to have addressed the issue has agreed that no individualized suspicion is required for the government to undertake a manual border search of a cell phone. So the issue here is that the court is holding that they don't need a warrant to conduct a manual search, but that manual search led to evidence that led to a forensic search. So you have a forensic search, which is quite intrusive, without a warrant just because you were able to establish through a manual search that there is this significant amount of evidence.
And I'm not sure other circuits are going to be comfortable with tossing aside the forensic searches you see here. The court is basically saying, "We don't need to weigh in on the matter of what type of suspicion is required for forensic searches because here we have a manual search." Well, sort of. They did have a manual search, but eventually that led to a forensic search, and we know from Riley v. California, the Supreme Court holds the cell phone up as kind of a pillar of our personal privacy. It has all of our information on it. The government generally needs a warrant to search your cell phone, manual or forensic, when one is not at the border.

Dave Bittner: Right.

Ben Yelin: So the fact that we have a warrantless forensic search here I think is puzzling and is something that we might see attacked in other circuits.

Dave Bittner: Help me understand the difference, like why does it matter if the search is manual or forensic? A search is a search, but they're saying, no, not -- there's a difference.

Ben Yelin: There is a difference. I think with a manual search, it's limited and confined in terms of time and scope. So there's just a natural bandwidth that customs and border agents have to do manual searches of cell phones, and because of that limitation, I think it's reasonable that when somebody is coming into the country, you can manually check stuff for contraband. A forensic search is far more involved and it's something that wouldn't have been possible in the pre-cell phone era, to get a search that was that intrusive. So I think what a forensic search is, is it goes far beyond just one officer or multiple officers unlocking a device, looking around, snooping around for contraband, prohibited files, etc., and goes into a really complex search through the bowels of somebody's phone to find that contraband in a way that, you know, computers can do this better than humans do, in other words, so they'll be better at finding that type of prohibited material. So I think that's really the concern here, is that maybe you can have reasonable suspicion which is below the standard of probable cause to conduct that manual search, and that's fine, but if that leads to a forensic search because the manual search has turned up some type of evidence, then, in effect, you are having a warrantless forensic search of somebody's cell phone at the border.

Dave Bittner: Right.

Ben Yelin: And I think that kind of goes against the spirit of a lot of the Supreme Court's Fourth Amendment cases, Riley v. California being one of them, and certainly Carpenter being another one.

Dave Bittner: So if your manual search brings up this evidence, don't you pretty much have a slam dunk to get your warrant?

Ben Yelin: Right. So that's what's -- another thing that's confusing here is they could have just gone to some type of magistrate judge after they did that manual search to obtain the warrant.

Dave Bittner: Right.

Ben Yelin: That's not really addressed in the case, interestingly. I'll read what they said here about the forensic search. All we need to decide this case is to adopt the consensus view of our sister circuits and hold that the government can conduct manual cell phone searches at the border without individualized suspicion. After all, the manual cell phone search here produced evidence of child pornography, so if that search is valid, it's hard to see how that would not justify the subsequent forensic searches for evidence of child pornography, and even the criminal defendant in this case is not complaining otherwise. The criminal defendant is saying that his Fourth Amendment rights were violated by both the manual as well as the forensic searches, but he is not claiming that the forensic search was invalid even if the court finds that the manual search was valid, if that makes sense. So yeah, I think the criminal defendant here is really out of luck based on what they found on this manual search, which was not even individualized. They didn't establish any type of probable cause and, yeah, I think there was an opportunity for law enforcement here to seek a warrant for the forensic search once they had evidence from the manual search, and the fact that they did not, I think, could certainly cut against reasonableness from a Fourth Amendment perspective.

Dave Bittner: So is this one more brick in that wall, heading toward the Supreme Court?

Ben Yelin: Yeah, I mean, I think it is. I think we now have -- we've seen enough federal circuit courts weigh in on this issue that the Supreme Court is going to have to weigh in eventually. I keep looking for a border search, like a digital border search, forensic border search case to make its way to the Supreme Court. There have been a few potential cases the Supreme Court has declined to take up, but now that we have this divide among circuits and I think what is, at this point, a really -- a really unclear prevailing standard where you have these distinctions with the manual search and the forensic search, and you have a split among federal circuits that I think this is definitely an issue that is ripe for the United States Supreme Court, and I suspect that if it's not this case, it's going to be a case like this where there's fruit of the poisonous tree, at least alleged by a criminal defendant, maybe one type of legal search led to another warrantless search that violates somebody's Fourth Amendment rights. I think we could see a very similar case make it to the Supreme Court and they'd have to decide these issues.

Dave Bittner: I'm just trying to understand, you know, you know me, I love my analogies to try to understand things.

Ben Yelin: Sometimes they're great.

Dave Bittner: Yeah. Sometimes they're not.

Ben Yelin: All right, all right.

Dave Bittner: That's right, that's right. And I'm trying to, you know, compare this to a search of my home, right? So the standard would be, listen, it's okay if we send in a couple of police officers to just look around, but if we send a robot in who can, you know, can just go through everything with a fine-toothed comb, then that's different, right? Like --

Ben Yelin: Yeah, I mean --

Dave Bittner: Like, for me, like, what -- how is it okay at all? How is the manual search -- and I guess it's because it's at the border they're saying that the manual search is okay. It's a lower level --

Ben Yelin: Right.

Dave Bittner: I'm trying to split the difference here, and I guess I'm having a hard time seeing the difference between a motivated law enforcement professional, you know, would know where -- would know where to look and what to look for and all that sort of thing, a well-trained law enforcement professional, and I don't have a problem with that, but as you and I talk about over and over again, get a warrant.

Ben Yelin: Right.

Dave Bittner: Get the warrant.

Ben Yelin: Right. So I'm just going to play devil's advocate here.

Dave Bittner: Please.

Ben Yelin: I know you love your terrible metaphors. I love playing devil's advocate. I think the judges see this as kind of a plain-view doctrine corollary where let's say you go into a house to conduct a manual search with a couple of officers --

Dave Bittner: Right.

Ben Yelin: And find a single image of child pornography strewn across the floor.

Dave Bittner: Right.

Ben Yelin: That might justify you to bring in the full-on SWAT team or whatever. I don't know what -- I don't know what kind of law enforcement force comes in for a child pornography case.

Dave Bittner: Sure, sure.

Ben Yelin: But the equivalent. You bring in the full --

Dave Bittner: These days, who knows?

Ben Yelin: Yeah. You bring in the full brigade into the house because you've already established that one image exists.

Dave Bittner: Right.

Ben Yelin: You have probable cause on its face, prima facie, to use the Latin term.

Dave Bittner: Yeah.

Ben Yelin: That there's going to be additional evidence there, so you bring in the full force of the law.

Dave Bittner: Right.

Ben Yelin: I'm not sure if that adequately captures the extent of a forensic search, though. I think, like, having a bunch of robots come into your house who can open drawers and, you know, scan closets and scan the insides of somebody's walls, I think that's kind of the better comparison.

Dave Bittner: Yeah. I mean, so again, you know, a police officer comes to someone's front door, knocks on the door, the person opens the door, and behind the person is -- stuck to the refrigerator with a magnet is an objectionable image that the police officer can see from the front door.

Ben Yelin: Right.

Dave Bittner: Now we got a reason to go further, but I would say now we have the -- now we have the justification for our warrant.

Ben Yelin: Right. I think that probably should have been the case, but I think from the perspective of law enforcement, it's a time and resources issue. You already know that there's contraband here. That means you've established probable cause.

Dave Bittner: Right.

Ben Yelin: It would be cumbersome, especially with the backlog of cases that we have at the border, to get this in front of a judge. It would be great if they could get it in front of the judge, but from their perspective, they want to be able to prosecute this case in a court of law.

Dave Bittner: Yeah.

Ben Yelin: And they realize that they probably will be able to do that without going and getting a warrant for this additional material because, if this makes sense, they have already established probable cause that that material exists.

Dave Bittner: Right, right. So the Fourth Amendment applies unless we're too busy.

Ben Yelin: Right, yeah, that seems to be the excuse of the day these days.

Dave Bittner: Okay.

Ben Yelin: Yeah, I mean --

Dave Bittner: We're too busy and we're underfunded, so --

Ben Yelin: Yeah, it's just so hard to get in front of a judge. I get it. I mean, you have the evidence, so it might seem cumbersome at that point to be like, do we have to put together a whole file to get in front of a magistrate judge?

Dave Bittner: When we know we're going to get --

Ben Yelin: We know we're going to find it, so why, you know, why are we putting the effort to do that?

Dave Bittner: Let's stay practical and, yeah, sort of meanwhile here in the real world, this is what we need to do.

Ben Yelin: Yeah, I mean, I think the Supreme Court might see it differently based on what they said in Carpenter where they focused on the intrusiveness of the search itself, so its breadth, its depth, its revealing nature, you know, all of those -- all of those things certainly qualify when we're talking about forensic searches of devices, and so that's where I think the Supreme Court might depart, if we think that they're going to adhere to their own Carpenter precedent, which is also doubtful because that was a five/four decision and we have new justices. It's been five years since that decision was handed down and we have three new justices.

Dave Bittner: Right.

Ben Yelin: So there's no guarantee that the current Supreme Court would see Carpenter the same way.

Dave Bittner: Yeah. Good times, good times.

Ben Yelin: Yeah. So there's just not a lot of clarity in this area of the law.

Dave Bittner: No, there isn't, there isn't, but I guess that's, I mean, this is just the way it works, right?

Ben Yelin: I would highly recommend just not bringing over contraband items on a device if you are crossing the border.

Dave Bittner: That's true.

Ben Yelin: That would be my recommendation.

Dave Bittner: Right. Crooks are stupid. That's why they're crooks.

Ben Yelin: Exactly.

Dave Bittner: Okay, all right. Well, how is this going to play out, then? Where does this go from here? I mean, is this a done deal, or are we likely to see an appeal here? What's next?

Ben Yelin: So this was a three-judge panel of the Fifth Circuit. I'll note that the three judges here are all, I believe, Republican appointees, including some big names in the legal world, Leslie Southwick being one of them. James Ho was a Trump appointee who was highly touted by the political right. So I would suspect that the defendant will appeal this and try and get this in front of the Supreme Court with a writ of certiorari, and most litigants are not successful at accomplishing that goal, but maybe this litigant will be different. The litigant could also petition for a hearing en banc from the full panel of the Fifth Circuit Court of Appeals. I think that is a futile effort because the full panel of the Fifth Circuit Court of Appeals is only slightly to the left of Attila the Hun in terms of political orientation.

Dave Bittner: Okay, so they're likely to track along what's already been done.

Ben Yelin: Exactly.

Dave Bittner: Yeah.

Ben Yelin: Exactly. So the Supreme Court is the avenue. This defendant will probably file a writ of certiorari and we'll see what the Supreme Court says.

Dave Bittner: Wow. All right, well, keep an eye on that one.

Ben Yelin: For sure.

Dave Bittner: Fascinating, yeah. All right, well, my story this week has seen a lot of coverage in a lot of different places. I happen to be pointing to an article here from the Atlantic Council, and this is basically about how the European Parliament, which is the legislative branch of the European Union, passed a draft law last week that's going to restrict and add transparency requirements to the use of artificial intelligence in the EU. First of all, I think it's important to point out that things work differently in the EU than they do here in the States, so the fact that this law was passed doesn't mean that it's been put into effect. It means that it goes for debate now.

Ben Yelin: Right.

Dave Bittner: Right?

Ben Yelin: I -- not to go all poli-sci, but there's the European Commission, which has different authority than the European Parliament, and they have to accede to the law, so hopefully they understand it. That's all I can say.

Dave Bittner: Right. So this article from the Atlantic Council draws on a bunch of different experts in this area for commentary, so it's really interesting to see different points of view. There's one that I'll highlight here. They point out, they say there are numerous significant aspects of this law, but there are two and a half that really stand out. The first is establishing a risk-based policy where lawmakers identify certain uses as presenting unacceptable risk, for example, social scoring, behavioral manipulation of certain groups, and biometric identification by groups including police. Second, generative AI systems would be regulated and required to disclose any copyrighted data used to train the generative model, and any AI-generated content would need to carry a notice or label that it was created with AI. It's also interesting that what's included as guidance for parliament is to ensure that AI systems are overseen by people, are safe, transparent, traceable, non-discriminatory, and environmentally friendly. So there's a lot going on here. I think, at a high level, once again, we see the EU taking the lead on a thorny privacy issue, right?

Ben Yelin: Right, as they've done many times in the past.

Dave Bittner: Yeah.

Ben Yelin: And I can't quite account for, I mean, I could try to account, but there are just differences in the way our federal government works, polarization, partisanship, inertia, that just isn't as prevalent in either state legislatures or the European Parliament, but it is absolutely noteworthy that Europe, and not even a state within the United States, is taking the first bite of the apple of this. I'll note that there has been debate and some proposed regulation at the state level dealing with AI, but this is the first that really tackles it in a comprehensive way in the post-iterative AI ChatGPT era, so it's certainly significant in that respect.

Dave Bittner: I should point out that the quote I just read was from Steven Tiell, who is a non-resident Senior Fellow with the Atlantic Council's Geotech Center. So this goes in for debate and certainly there's going to be some folks in law enforcement and perhaps industry who take issue with this, but --

Ben Yelin: Right. The industry seems particularly unhappy. OpenAI, the maker of ChatGPT, basically said, "If this proposal becomes law, it's going to be very hard for us to operate in Europe," which, you know, they always say that type of thing as a threat, and sometimes the European Parliament takes that threat seriously, sometimes they don't, but it certainly would put a kibosh on a lot of the functionality of ChatGPT. It would make iterative AI just far more cumbersome to produce, especially with the requirement that there be a warning label on it saying that this image or this text was generated by AI. My other issue with this is, and I just haven't really done the research on it.

Dave Bittner: Yeah.

Ben Yelin: The risk-based approach. I know this is something that's been talked about at the state level, that you evaluate how much regulation there needs to be based on the use of the AI itself, so certain things that entail higher risks, like things that might sway voters to influence elections, or things that suggest posts, photos, and videos that people see on social networks, those types of high-risk, high-stakes uses of AI would merit greater regulation. I guess my issue with it, and I'm wondering your thoughts on it, is unless we can have a really dynamic process where we're constantly identifying what these high-risk areas are, it just runs the risk of being outdated relatively quickly because it's hard to see two or three years down the line what a high-risk use of AI is going to be. It perhaps could be something that's just not on our radar right now.

Dave Bittner: Yeah.

Ben Yelin: So defining that in a statute or a regulation just might end up being limiting. That would be my first interpretation and first criticism of it. Yeah, I think it certainly holds promise. I like that you're taking a scalpel and not a butcher knife at AI because there are some uses that are non-controversial and don't present a high risk, and there are some that are. I think it instinctively makes sense to go after those areas where there is a high risk. I just think it's something that's hard to define. Probably many different people in good faith have different definitions of what that high risk would be, so that would be something that certainly stood out to me in this regulation.

Dave Bittner: Yeah, I mean, is it like that old chestnut of, you know, pornography, "I know it when I see it," but to define it is challenging?

Ben Yelin: Yeah, and that, which came from a famous Supreme Court case, is pretty discredited because that's a poor way of defining something in the eyes of the law.

Dave Bittner: Right.

Ben Yelin: It is something that's kind of subjective, and I just think that there could be something that presents itself several years down the line that is high risk that hasn't been identified in EU regulation and then we're either confined in the statute or we're just constantly changing the definition, which would be confusing, particularly for compliance. I mean, when you have these big U.S. companies already saying, "I don't know about this regulation. We might have to pull ChatGPT from the European Union because it's too complicated. It's too burdensome," imagine having this uncertainty about what qualifies as high risk. That's just going to make the poor attorneys sitting there, twiddling their thumbs, trying to figure out what is legal and what is illegal, so I think that certainly presents an element of risk for the industry.

Dave Bittner: Are we doomed to be reactive with something like this? I mean, obviously, this is an attempt to be proactive, but is the -- does the definition have to be formed by -- on a case-by-case basis?

Ben Yelin: Yeah, I mean, by nature, it is always reactive because we have to understand the threat before we try and regulate that threat or legislate on that threat.

Dave Bittner: Right.

Ben Yelin: So I get it from that perspective. This is as forward-looking as any legislative body could be. I mean, we're only about six months into what I would consider the new wave of AI.

Dave Bittner: Yeah.

Ben Yelin: And so the fact that the European Union has advanced this type of regulation already is pretty impressive. I mean, that's not usually the timeline we see either in our legislatures here in the United States or in our legal system. It usually takes much longer. But yeah, I mean, it is going to end up being reactive just because, you know, we're not talking about titans of industry or visionaries here in the European Parliament who are always looking for the next best thing, the next greatest thing in tech. These are career bureaucrats. Those are at least the ones who are going to be making the decision for the European Commission, so I think their baseline knowledge of any of these issues is relatively limited, so by nature, it's going to have to be reactive to what the controversy is.

Dave Bittner: Right.

Ben Yelin: And that's just one of the problems of trying to regulate this stuff in the first place.

Dave Bittner: And we don't know where it's going to go. It's such early days, as you mentioned. We don't know how this is going to play out. We don't know the areas of our day-to-day lives that it could affect. You look at any of the other big online social media phenomena, the Facebooks, the Twitters, any of those, YouTube, you know, who could have predicted --

Ben Yelin: Right.

Dave Bittner: The influence they would have on the world, on society? Imagine trying to regulate them early on, and I guess, you know, there is -- or there has been this notion of staying hands-off in the early days of new technologies.

Ben Yelin: Right.

Dave Bittner: And I think perhaps this is a recognition -- or do you think that this is a recognition from the European Parliament that, looking at some of the other hands-off approaches, well, that didn't go so well.

Ben Yelin: Yeah, things get out of control quickly. I mean, when we're talking about AI, we're talking about potentially severe consequences if we don't get a handle on this. Unchecked dissemination of false information, you know, I know it from the academic perspective, so cutting against the legitimacy of academic endeavors by people cheating on tests, etc.

Dave Bittner: Yeah.

Ben Yelin: Potentially displacing a lot of workers, not to mention implications on defense, national security, so this is really, really important stuff, and we can't afford -- and I think this is what the European Union would say, we just can't afford to wait on this to see how the technology develops. As you've said, they have tried that method in the past and things get way out of hand before the regulators can come in and try to address the problem.

Dave Bittner: Right.

Ben Yelin: So I think it's really both worthwhile and noteworthy that they're getting out in front of this.

Dave Bittner: Yeah.

Ben Yelin: And I suspect that this is going to be ratified and put into law by the end of this calendar year.

Dave Bittner: Right, right. They are definitely on a fast track here, and this article points out also that it very much tracks the way GDPR did in terms of the fines. There is a big stick that they have here for compliance. You know, some of these fines are up to 7% of annual global revenue.

Ben Yelin: Yeah, it's the same GDPR approach.

Dave Bittner: Yeah.

Ben Yelin: I think it's worked decently well with the GDPR. It's enough of a disincentive that it's at least going to make these companies worry. It's also going to make them lobby because there's enough money at stake for them that they want to have their hand in forming these regulations and defeating regulations that they think are going to hurt the industry.

Dave Bittner: Right.

Ben Yelin: So it's wise on the part of the European Union to shoot high on these regulatory fines because then you can kind of force industry to come to the table --

Dave Bittner: Yeah.

Ben Yelin: And actually engage in some of these issues.

Dave Bittner: Yeah. All right. Well, it's fascinating. We'll see how this one plays out, like, you know, if they're aiming for the end of the year to settle this, that's -- that is a fast track, for sure.

Ben Yelin: It's certainly there.

Dave Bittner: Oh, if we could do such things here.

Ben Yelin: And it certainly is, but who knows what the world of AI will look like in November/December of 2023.

Dave Bittner: Yeah.

Ben Yelin: It's moved so quickly.

Dave Bittner: That's true.

Ben Yelin: Yeah.

Dave Bittner: Yeah, absolutely. All right. Well, we will have a link to that story in the show notes, and of course, we would love to hear from you if there is something you would like us to discuss on the show. You can email us. It's

Ben, I recently had the pleasure of speaking with Eric Wenger. He is Cisco's Senior Director for Technology Policy and Government Global Affairs. We are discussing the issue of global network resilience, and particularly as devices age and extend past their useful life, what are the best ways to handle that? Really fascinating conversation. Here's Eric Wenger.

Eric Wenger: First off, I would say that we have made a dramatic change in how we use our networks, and some of this was accelerated by the experience that we had during the past few years, but the trends were already in flight, and we have pivoted in a way that we are moving from fixed perimeter networks. You can imagine students or workers sitting at desks with desktop computers that are wired to a network, and then they access data and applications internally, and when they go and need an internet-based resource, they cross through a firewall which allows for the traffic to be examined, and we have seen a dramatic change where people are using personal devices, unsecured internet connections, sometimes open Wi-Fi to access data and applications in the cloud without ever touching the perimeter of the network. So that is a real change and it presents new challenges, obviously, for security professionals to figure out how best to address those kinds of challenges. In addition, we're seeing an aging of the infrastructure, the technology that we have relied upon for years, decades, essentially sitting in a closet without being touched, is presenting new security challenges that require new attention.

Dave Bittner: Yeah. You know, I think that's a really fascinating reality that we face these days. You know, there's that old saying that, "If it ain't broke, don't fix it," but as new vulnerabilities are discovered with old hardware, I guess that adage just hasn't stood the test of time.

Eric Wenger: Right. So the technology environment itself is changing. People are layering new services and capabilities on top of existing infrastructure. We have threat actors that are adapting their behaviors to what they see work, and we have the dynamic nature of the threat environment itself. So when you put these three things together, you can say that the technology, as you said, Dave, if it ain't broke, don't fix it, that suggests that you should just leave it alone, it seems to be working, and let's not touch it. But the environment that we're operating in from a threat perspective requires us to actually have some fixed and focused attention, investment of resources. And so it may seem that that device in the closet, again, should be left alone, and if I make any decisions about changing it, updating it, securing the configurations, patching it, then I potentially introduce new risk because any change to the environment could cause it to potentially fail and I don't want to do that. And I also am taking something that has a completely amortized cost and then introducing new ongoing operating expenses associated with that, and so there's a tendency to say, "Let's not spend new money on old things." Our perspective is that that is not a sustainable path forward.

Dave Bittner: What are the options that folks have available to them, then? What sort of things can be done in order to come at this problem?

Eric Wenger: One thing that we see is that the adoption of cloud-based resources can provide some significant security benefits in this regard, especially if somebody else might be better positioned to manage the security risk than you. If you look at the Hafnium attack from a couple of years back, you had servers being maintained on premises -- in this case, it was Exchange servers, but it could be any kind of on-premises resource -- where no effort was being made to patch and secure those servers. The simple act of taking some of those things that are running inside of an organization and lifting them into the hands of a cloud provider, who actually has expertise and dedicated resources to maintain security, may significantly improve the posture of any organization. So I think that's one thing to think about: do I have the resources to maintain and secure this on premises, and what should I actually turn over to somebody else, relying on their expertise instead?

Dave Bittner: What about the hardware that I have on site still? You know, how do I -- how do I go about the proper care and feeding of that stuff?

Eric Wenger: Well, the first thing I think you need to do is to figure out what you have. If you don't know what technology is running inside of your environment, then it is essentially impossible to make sure that you're securing it properly. So conducting an in-depth audit of the technology that's running inside your network and understanding what you have is clearly step one. The second step is to assess which of the technology I have is end-of-life or end-of-support, where there's no further patching available for it. Because if I'm running technology inside my network and it's exposed to the outside world -- to that dynamic threat environment that we talked about, to the adaptable threat actors -- and there's no way for me to continually update that technology on a going-forward basis, then I have to think about: can I isolate or segment it in a way that puts security resources capable of addressing dynamic threats in between that device and the outside world? And if not, then I really should give serious consideration to taking it out of service.
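[Editor's note: the audit-then-assess process described above can be sketched as a short script. This is a minimal illustration, not a Cisco tool; the inventory data and hostnames are hypothetical stand-ins for what a real network discovery scan would produce.]

```python
from datetime import date

# Hypothetical inventory -- in practice this would come from a network
# discovery scan and vendor end-of-support bulletins, not a hard-coded list.
INVENTORY = [
    {"host": "mail-01", "product": "Exchange 2010", "end_of_support": date(2020, 10, 13)},
    {"host": "edge-rtr", "product": "IOS router", "end_of_support": date(2027, 1, 31)},
    {"host": "imaging-pc", "product": "Windows 7 workstation", "end_of_support": date(2020, 1, 14)},
]

def triage(inventory, today=None):
    """Step one: know what you have. Step two: flag anything past
    end-of-support, because no further patches are coming for it."""
    today = today or date.today()
    flagged = []
    for item in inventory:
        if item["end_of_support"] < today:
            # Candidate for isolation/segmentation, or removal from service.
            flagged.append(item["host"])
    return flagged

print(triage(INVENTORY, today=date(2023, 6, 22)))  # -> ['mail-01', 'imaging-pc']
```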

Dave Bittner: Is there a point of diminishing returns with old hardware? I mean, is it -- is it reasonable to say as an organization, you know, we will have no servers that are older than x number of years just as a matter of policy?

Eric Wenger: I don't think that there's a fixed timeline that you put on the use of technology. The question really is, is there a plan in place to make sure that it can continue to be used in a safe way? So for example, there is medical equipment that you see inside of doctors' offices that is tied to generic computing equipment -- laptops and desktops, things like that. They oftentimes run commercial, consumer-grade operating systems, sometimes Windows 95, Windows 7, whatever it is.

Dave Bittner: Right, right.

Eric Wenger: And it may be the case that that equipment can't really be replaced with newer operating systems -- and this was actually something we discussed back and forth via correspondence on an earlier show.

Dave Bittner: Yeah.

Eric Wenger: There may be times where you say, "I need to patch this in order to make sure that it continues to operate in a dynamic threat environment." For some of that technology you may just say, "I'm going to take it offline and use it in a mode where it is not connected to anything else." The question first is, do I know what I have, and do I have a plan for dealing with it? If I look at it and say, "This is critical. I have to have it, and I can operate it in a mode that is disconnected," then maybe I have a reasonable plan for continuing to use it after it's no longer supported. But if it's connected to a dynamic threat environment, that's really the question you have to ask yourself: "Can I safely continue to use this technology in the mode that it's currently connected?" And if there is no plan for support, then that's the line that I would say you shouldn't cross.
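[Editor's note: the decision process Wenger walks through -- do I know what I have, is it still supported, and if not, can I isolate it -- can be summarized as a small decision function. The dispositions below are an illustrative sketch of that reasoning, not an official framework.]

```python
def disposition(known: bool, supported: bool, can_isolate: bool) -> str:
    """Apply the questions in order: know what you have, check for a
    support plan, and if none exists, see whether the device can be
    segmented away from the dynamic threat environment."""
    if not known:
        return "audit first"          # you can't secure what you can't see
    if supported:
        return "keep patching"        # a support plan exists: stay on it
    if can_isolate:
        return "isolate and monitor"  # e.g. run it offline or segmented
    return "take out of service"      # unsupported and exposed: retire it

print(disposition(known=True, supported=False, can_isolate=True))
# -> isolate and monitor
```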

Dave Bittner: What about putting something in between that device and the open internet, then? You know, if you have something that's a -- has critical functionality within your organization but you're concerned about its security, is that a viable solution as well, to put up some walls, some barriers between it and the rest of the world?

Eric Wenger: I mean, I think that that's the question you have to ask yourself. The tendency that humans have is to connect things, and so anything that can be connected will be connected. If you look at, for instance, the attacks on the Natanz uranium enrichment facility in Iran, it's fascinating, because those centrifuges were designed to be operated in an offline mode, and the attacker's software was so effective that, using sneakernet, it was actually able to get inside of the boundaries of this closed facility and then come back out again to the internet. So the assumption that something can be effectively disconnected from the internet is something that needs to be thought about very carefully. But assuming that you understand the risk, and you know that you're operating a technology that cannot be patched, and you've decided that it is being operated in an offline mode, or with some intermediate technology on the network that provides augmented protection to step in where the device can't protect itself, then that may be a reasonable decision. If you can't do those things, then, again, you need to say to yourself, "This is something that I potentially should take out of service," and replace it with something that is currently supported. So again, I don't think the answer is a fixed period of time. The question is, "Do I know what I have? Do I have a plan to protect it? And if I can't adequately protect it, and I can't segment it or isolate it, then I need to move on."
And even though it may seem cheaper to continue to operate that technology -- again, it's just sort of sitting in a closet, nobody's touching it, it doesn't cost me anything -- there may be a technical debt that's accruing off the books. It may seem like there's no cost associated with that technology because I'm not spending anything on a regular basis, but I have this shadow cost that is accruing, and eventually I'm going to have to pay that debt.

Dave Bittner: And to what degree is that -- the key to making your case to the powers that be within an organization? I mean, do you come at this from a risk assessment point of view to your board of directors, for example?

Eric Wenger: I think we all have a role to play here, right? The developers of the technology have a role to explain for what period of time they support the technology that they have supplied, to make it easy for customers to understand where there are known exploited vulnerabilities, what secure configurations and mitigations are available, how to identify technology that needs patching, and to make it as easy as possible to patch. The operators of the technology, again, need to understand that they have a shared role in managing risk; you can't just set and forget technology that is exposed to a dynamic threat environment and expect that it's going to be cost-free forever, even if you're not expending money on a regular basis. And then the government, I think, has an important role here, too: to model the kinds of behaviors that it hopes to see from regulated entities, to share information about risk, and then to provide the right kinds of incentives so that those who are building technologies and those who are deploying them are helping to manage those risks in a reasonable way. Because this is a shared environment, after all -- my neighbor's house can't be on fire without me being at some risk as well.

Dave Bittner: Where do you suppose we're headed here? I mean, is it your perception that word is getting out about this, that these are policies that folks are adopting?

Eric Wenger: We think what happened on April 18th was an important development, and you did a special advisory about this on your show. There was a joint advisory that came out from the U.S. government -- CISA, FBI, NSA -- in concert with the U.K.'s National Cyber Security Centre, and it highlighted the existence of known exploited vulnerabilities. In this case, it was some poorly managed Cisco technology, but it could happen with any of our competitors or peer companies. And so what we're trying to say is, we could each take turns having technology that we put out years ago, and that is no longer being patched by its users, continue to generate headlines, and we could point fingers at each other. But instead, we think it's actually more important to gather everybody around the table and start having a conversation that involves the developers of the technologies, the users of the technologies, and the government, to figure out how we identify what is out there, what patches can be deployed, and frankly, what technology is beyond its useful life, can no longer be secured, and needs to be taken out of service. That's part of the reason that we're trying to get out here and talk about this today: to make sure that we are pulling all the stakeholders together and having an important conversation about how to manage this risk and how to work together.

Dave Bittner: Ben, what do you think?

Ben Yelin: Yeah, I mean, this is something that's certainly outside my wheelhouse of expertise, so it was just a really interesting interview. We all know that there are risks to legacy devices and legacy networks, but to think from a really macro perspective about how this is going to affect the private sector and governments, and how we can address it -- I thought it was really interesting.

Dave Bittner: Yeah, and we appreciate Eric Wenger taking the time for us. Really interesting insights there.

That is our show. We want to thank all of you for listening. We'd love to know what you think of this podcast. You can write us an email at We're privileged that N2K and podcasts like "Caveat" are part of the daily intelligence routine of many of the most influential leaders and operators in the public and private sector, as well as the critical security teams supporting the Fortune 500 and many of the world's preeminent intelligence and law enforcement agencies. N2K's strategic workforce intelligence optimizes the value of your biggest investment, your people. We make you smarter about your team while making your team smarter. Learn more at Our senior producer is Jennifer Eiben. The show is edited by Elliott Peltzman. Our executive editor is Peter Kilpe. I'm Dave Bittner.

Ben Yelin: And I'm Ben Yelin.

Dave Bittner: Thanks for listening.