
The AI policy divide.
Dave Bittner: Hello, everyone; and welcome to Caveat, N2K CyberWire's Privacy, Surveillance, Law and Policy Podcast. I'm Dave Bittner. And joining me is my cohost, Ben Yellen, from the University of Maryland Center for Health and Homeland Security. Hey, there, Ben.
Ben Yellen: Hello, Dave.
Dave Bittner: And on today's show, Ben and I are pleased to welcome back my N2K CyberWire colleague, Ethan Cook, the author of The Caveat Newsletter. Ethan, welcome back.
Ethan Cook: Thank you for having me back.
Dave Bittner: We're going to take a deep dive, as they say, into AI policy today. A quick reminder that, while this show covers legal topics and Ben is a lawyer, the views expressed do not constitute legal advice. For official legal advice on any of the topics we cover, please contact your attorney. All right. We are back. And let me start off by saying that, Ethan, you have taken a really extensive look into AI regulations from coast to coast here in the US.
Ethan Cook: Yep. There's a lot.
Dave Bittner: So I guess part of me thanks you for having done it. Part of me thinks, better you than me because I'm sure --
Ethan Cook: I'll second that. Yeah.
Dave Bittner: -- parts of it were a slog. But, I mean, before we dig into the details here, what's kind of the overarching impression that you have, having gone through this exercise?
Ethan Cook: Yeah. I think there's a couple of ways you can look at the overarching view of AI legislation across America, but the way I would characterize it is patchwork. With there being no real substantial federal legislation to guide states and companies on how to behave, each state has taken it upon itself -- or not, in some instances -- to regulate AI, or what it feels is relevant, to varying degrees. That's created this kind of weird framework where some states are really restrictive, or attempting to be really restrictive, on what can and cannot be done; other states are focusing on very specific issues that may be relevant to them; and other states are choosing to really not regulate at all and kind of just let it go crazy.
Dave Bittner: Are the states, I guess, behaving in the way you would expect them to? In other words, I would think -- I would imagine that California comes at this in a different way than Texas does.
Ethan Cook: Yeah, yeah. Absolutely. One of the most unsurprising ones that I saw was Tennessee passing -- or, I should say, updating -- the Elvis Act, which basically prevents anyone from copying anyone's image, likeness, voice, etc. using AI to create deep fakes. That came in response to concerns from their music industry, which makes a lot of logical sense for that state. And then you have other states like Colorado, or even Texas to a degree, passing privacy laws or things along those lines regulating what can and cannot be done from an AI regulatory perspective, and some of those have been really comprehensive in certain capacities. But the biggest thing that I've noticed is, while most states have tried to pass some form of legislation to handle this constantly evolving topic -- it feels like it changes almost daily -- there hasn't really been a concerted, conscious effort by any one state to really hit everything. It's kind of like, we'll try to focus on deep fakes. Or we're trying to focus on individual professions, or privacy, rather than the whole thing at once.
Ben Yellen: Yeah. I mean, I've seen that here in Maryland, working on AI-related legislation at the General Assembly. It's been that same patchwork approach. There has been legislation on deceptive deep fakes. There's been legislation proposed on election-related deep fakes. We have this separate scenario here that we're trying to develop legislation on, for the use of deep fakes that leads to legal harms such as defamation. And then there are these sort of industry-specific regulations for industries that present a substantial risk, so critical infrastructure, healthcare, that sort of thing. I think that's really interesting. But, kind of along the lines of what you're saying, it lacks a certain level of uniformity. On the one hand, I understand why states are reluctant to put together broad AI policy through legislation. We've seen the natural limits on that; and I know we'll talk about this in the California example, where the California legislature put together a pretty heavy-handed bill regulating artificial intelligence, and it was vetoed by Governor Newsom, partially for being too heavy-handed. I know he had specific nitpicks. And I think it's incumbent upon states to start with some kind of common values, a very low common denominator. For me, and I'm curious if you agree with this, that's governance. It's not necessarily what decisions need to be made vis-à-vis AI regulation but who makes those decisions. How do we set up processes -- and you can start with the use of AI in government, where state and local governments come up with processes. Who decides whether a certain AI tool can be used to achieve a certain objective, and where does the oversight come from? What authority has that oversight body been given? I think those are questions we can start to answer now, before we have this kind of patchwork of regulations in different sectors.
Ethan Cook: Yeah. I would agree that governance is almost critical to establish instantaneously. Going through a lot of different bills and different topics that people tried to regulate AI with, one thing that always stuck out in my mind was, how do you verify that these things are accurate, right? How do you verify that these impact assessments are being conducted the way they should be conducted? How do you verify all these things? That boils down to really strong governance and making sure that not only are companies doing what they say they're doing but also that we have a way to verify that. You know, that age-old saying, trust but verify.
Ben Yellen: Right, right.
Ethan Cook: And making sure that these AI companies -- and not just the developers but the deployers -- are deploying these models the way they should. Because a lot of AI developers just put their models out there and let people kind of go wild with them. So it's making sure that those processes are being carried through by the deployer as well.
Ben Yellen: Right. I know some governments -- I believe it was the State of Washington -- looked into preclearance for the use of AI tools in state government, which is very ambitious. I think --
Dave Bittner: Is that like FDA approval for AI?
Ben Yellen: It's exactly what it would have been.
Dave Bittner: Yeah.
Ben Yellen: There's some type of state review process before the technology can be deployed, which is going to be cumbersome. And I think you might lose out on innovative uses of AI if you have this really cumbersome process. But, yeah. I just think governance is an area where, if we can at least establish that -- and it would be great if there were uniformity among states on governance policies, at least, generally, on who the decision-making bodies are -- I think that would give companies a little bit more certainty. They'd know what the playing field is.
Ethan Cook: Yeah. And with the federal government, NIST publishing its framework was an attempt to lay the foundations, I guess, is the best way to describe it. But even with that established, there's certainly been nothing at the federal level; and at the state level, aside from, I would say, Colorado's AI Act, there's not really been any comprehensive effort led by a state that I can think of, at least off the top of my head, that has been pushing for that strong governance that you mentioned.
Ben Yellen: Yeah. And I really do think it's this vacuum that we've seen in federal leadership. I mean, NIST is well-positioned to put out these guidelines, and they've done it across different sectors for as long as they've been in existence. I think we're going into a period of more uncertainty, given that the new administration has pulled back on a lot of the nonbinding guidance that we saw in the Biden years. So I think states had more of a blueprint to come up with governance policies based on what they saw from the previous administration. And now, without that, I think they're kind of more in the dark because they always have to worry about preemption. If the federal government somehow does get its act together, it can occupy the field in regulation of artificial intelligence and preempt state laws. And so I think states are wary of that. And they're also wary of companies, frankly, disliking state governments for using the heavy hand of regulation, and that hurting the business climates in those individual states. So I think that's kind of the dynamic we're seeing out there right now.
Dave Bittner: Let me ask you this, Ben, since you're involved, as you say, with helping some folks here in Maryland with some of these bits of legislation. Is there a practicality at play here? In other words, some of the things that the states are coming at, are they doing that knowing full well that, if the federal government does come in and, as you say, finally do something, these are the areas the feds are likely to be interested in -- so we're going to try to avoid intersecting with that because, as you say, preemption is always an issue?
Ben Yellen: I've definitely seen a good deal of that. I think that's kind of 50% of the attitude. The other 50% is, look. If the federal government's not going to do anything, then it's incumbent upon us to develop policies. Like, we're not going to sit by and just let there be unfettered use of artificial intelligence in our state, especially from our state government agencies. And we can't afford to wait for federal guidance. So I think it's kind of those. Those are the two predominant viewpoints. There's the reluctance to engage out of fear of preemption through federal legislation or regulation. And then there's this kind of defiant attitude that, if the federal government isn't going to act -- and it certainly seems like that's not going to happen anytime soon -- then states have a responsibility to fill in those gaps.
Dave Bittner: Ethan, I'm curious from your research. If I'm a company, let's say a social media company -- a Facebook or an X/Twitter, one of these big companies that has true global reach -- and I'm faced with complying with this patchwork that varies state to state, how am I coming at that? How am I approaching that from a practical point of view?
Ethan Cook: Yeah. I think there's two ways to tackle that: first looking domestically within the US and then looking internationally. To talk about the first half of that, domestically, it's about companies finding states that really work for them. Normally, we think of California as being, A, that massive Silicon Valley hub -- every tech company goes there, and that's still true -- but we also think of it as this major regulator. We kind of touched on it a little bit earlier, but to circle back to this point: California tried to pass a really comprehensive AI bill that was controversial. You did see some support from companies like Anthropic, who were of the mindset of, look, we need to regulate; we'd rather get something on the board than nothing on the board. Ultimately, that bill -- SB 1047, the official name of it -- was vetoed by Newsom, who highlighted a lot of issues that he had with it. But he did end his statement by saying that, even though he was vetoing this, that doesn't mean we shouldn't get AI legislation up, and they are considering a new version of SB 1047. Still, with the reach that these companies have, and the massive emphasis that they place on innovation and the potential for AI to unlock so many different opportunities, I think there is almost an effort by states to bid for them to come. A large reason why SB 1047 was killed was that a lot of AI developers basically hinted that, if you're going to pass this bill, this comprehensive thing, then we're going to pull out and go somewhere else.
Dave Bittner: Go to Texas.
Ethan Cook: Exactly. And so, given that, I think states domestically have to walk a really fine line. And AI companies, or people who are going to be deploying massive AI systems, kind of have a lot of sway over states and what can and cannot be done. What do you guys think about that?
Dave Bittner: Well, let me jump in here and ask Ben. Ben, what do you see in terms of lobbyists?
Ben Yellen: Oh, it's intense. I mean, there are a lot of jobs at stake, and there is a lot of potential tax revenue at stake. We know a few things about AI. One, we know it's a fast-moving industry. We know that changes are coming about rather rapidly. And we also know that it takes a lot of data storage, which means there's a lot of discussion around these large storage facilities -- at least prior to what we've seen with DeepSeek, where maybe we don't need the same type of data storage facilities that we thought we did. I think the combination of all of those is an opportunity for states to be at the forefront of the AI revolution. California managed to do it starting in the 1970s, as it related to computing generally and all types of information technology, and built out Silicon Valley. And it worked because you had these economies of agglomeration. You create this kind of community there where everybody's a techie, and you move there. You can rotate between these businesses as an employee. You establish it as kind of an intellectual hub for the development of new technologies. I think California is starting to lose -- or at least is at risk of losing -- that competitive advantage due to a number of different factors. The cost of housing is one of them, but also the fear of heavy-handed regulation, and also tax policy. And so other states have stepped forward. Texas is the big one in trying to be kind of the next Silicon Valley, which does bring a lot of economic benefits to the state. And I think that's exactly what lobbyists are telling legislators: you can pursue heavy-handed regulation, but you run the risk that we'll pick up our toys and leave. And that's going to hurt your tax revenue, and a lot of states are pretty strapped for cash right now. So I think that plays a major role. We've seen it in Maryland. I mean, we have a relatively liberal Democratic governor who you would expect would be friendlier to regulation of risky business enterprises. But we also have a major tax revenue problem, and growth here has been stagnant. And our governor has been talking about the need to be more business friendly.
Dave Bittner: Right. You see these economic development organizations, you know, saying Maryland is open for business.
Ben Yellen: That it's open for business. Yeah. Exactly, exactly.
Dave Bittner: And every state says that.
Ben Yellen: And the reason that this is so relevant for AI is the potential -- I know this is cliché -- the potential is limitless. Like, this really could be the next multi-billion, trillion, whatever dollar industry. So there's an early mover's advantage for the states that attract development in this area, and I think they're actively competing for it.
Dave Bittner: Yeah. Ethan, let me switch back to you here. I mean, we've already seen that President Trump has rescinded former President Biden's AI executive orders. What seems to be the trajectory of the new administration when it comes to this stuff?
Ethan Cook: I think one word summarizes it: deregulation. This administration has been very intent, from day one when they rescinded Biden's order, on emphasizing this strong message of, we are not going to hamper AI innovation at any cost. They are going to deregulate, remove the barriers, and let companies, for lack of a better word, go wild with it. Actually, just this week there was an AI summit in Paris, and JD Vance spoke there, and he continued that message. He said that not only do we need deregulation, but he kind of lambasted European regulators, saying, your regulations are what's keeping the AI industry out of Europe. If you want more AI innovation in Europe, you need to follow what the US is doing and what China is doing, which is get rid of your regulations if you want to attract these businesses and these opportunities. So I would expect that message to be pretty strong over the next four years.
Ben Yellen: Yeah. I guess I wasn't struck by JD Vance's message, which I think was something we all could have anticipated. They do have kind of a deregulatory lens. And yes, that's standard Republican Party politics, but it's also influenced by some newly influential big tech leaders in the Republican Party, like Elon Musk and, to a lesser extent, Bezos and Zuckerberg -- really everybody who's taking a second look at Trump's Republican Party and who also has an interest in deregulation. But what struck me was the reaction of the Europeans, who have this pretty heavy-handed AI regulatory policy that's scheduled to go into effect next year, basically saying, we might need to reverse course and take a far more deregulatory path because we don't want to lose out, not only to the United States but to China. And I think that's the DeepSeek factor here. That product struck a lot of fear in Western entities, who saw that they could build this extremely effective AI model that's much cheaper and doesn't require as much computing power. If any of us want to be competitive in this industry, then we have to kind of let the companies innovate as quickly as possible and not be hampered by regulations. I think that was the surprising message, not from US leaders but from European leaders like Emmanuel Macron.
Dave Bittner: Yeah. All right. I'll tell you what, guys. We're going to take a quick break here for a word from our sponsor. We'll be right back after this. Ethan, I'm curious, again, from your research here: despite the efforts from the states, are there major gaps in AI policy, or issues that are urgent that really haven't been addressed yet, in your view?
Ethan Cook: Yeah. I mean, setting aside the federal aspect of it -- that kind of overarching blanket that I feel is pretty needed across the US -- one of the biggest things that's concerning for me, that we haven't seen comprehensively addressed, and this goes to the governance aspect we talked about earlier, is transparency. How is AI operating? How is it making decisions? What datasets is it being trained on, etc.? And that feeds into this bias concern where, you know, maybe the answers the model is giving are not necessarily wrong, but they're shaped by the dataset it was trained with; if you added in other datasets, other factors, that could change the answer pretty significantly. The other thing I think there's a pretty big concern about is, we are innovating so fast, and these AI models have the ability to impact so much -- as Ben said, it's limitless -- so what is the safety and cybersecurity aspect here? What is the role of cybersecurity? What is the role of privacy concerns with data and how these things are being used? Because so many people have the ability to utilize and deploy an AI model, and there are so many companies now making AI models, what does that look like? And how does that impact not just right now but the next five to 10 years, when AI only becomes more widespread?
Ben Yellen: There's also the practical problem. Like, we've tried to set up a governance framework in Maryland, which partially requires state agencies to do an inventory of the AI systems they're currently using. By the time they do that and meet their annual reporting requirements, like, the next thing is already here, right?
Ethan Cook: Yes.
Ben Yellen: So in the process of doing inventory and complying with all of these state-mandated requirements, you might be missing out on the next beneficial use of this technology. That's the nature of the beast. And that's not a value statement, because maybe it really is better to be safe than sorry and take this sort of slower, more methodical approach. But that's the risk you're taking by having things like inventory requirements, and even going as far as preapproval of the use of AI tools in state government agencies. That's the risk that you take.
Dave Bittner: Are the states looking at labor markets in general? You know, we hear about how AI is going to perhaps eliminate so many jobs. Are states trying to plan for that?
Ethan Cook: Yeah. I would say that is a huge concern, both amongst lawmakers as well as average citizens. There's an idea that has been floated around -- and, at least from what I've seen, hasn't had really substantial impact from a lawmaking perspective -- that emphasizes impact assessments: if we deploy this AI model, if we create this AI model, what's the impact on this specific job market? There's a lot of discussion about it, but I don't think anything has been comprehensively done at this point, kind of leaving a significant gap around what jobs could be impacted, overnight in some cases, to be honest.
Dave Bittner: Yeah. There's always the kind of -- I mean, we've seen this over the decades, and I guess centuries. When a new technology comes along that improves productivity, the fantasy is always, you know, and workers will be able to move on to more fulfilling jobs. But --
Ethan Cook: It's always easier said than done.
Dave Bittner: Right. Exactly. But, yeah. It's not so easy to do from a practical point of view.
Ben Yellen: And people don't like that. I mean, I think that's something that's often overlooked: people like what they're currently doing. There's a level of stability. There's a level of, I've been doing this work for 15, 20 years, and it's scary to me that a machine is going to take over and do my work for me. And I'm 60 years old, and I don't want to go back to school and learn new skills. Those are all very, very valid things to think, in my view. And so, from a policy perspective, I think we have to take all of that into consideration, even if it potentially does hamper unfettered innovation. Displacement of jobs is a real factor. It affects real people. So, yeah, I think that's certainly a very valid concern.
Dave Bittner: To what degree is the industry promoting self-regulation here, of saying, hey, we've got this. Trust us.
Ethan Cook: That's a tricky one. Yeah. Ben, you want to -- do you want to tackle that one first?
Ben Yellen: Yeah. I actually think there was an encouraging effort, maybe about a year ago, by some of the major players in the space to get together and say, we are going to take regulation seriously, and we should be putting together first principles on the boundaries of these systems -- and let's do it together so that none of us loses a competitive advantage. I'm not sure exactly what happened, but I feel like the momentum behind that effort at self-regulation has been lost a little bit. Maybe it's --
Dave Bittner: You don't know what happened?
Ben Yellen: I mean, I do know what happened.
Dave Bittner: There's a big -- there's kind of a big thing that happened.
Ben Yellen: Yeah. That's how I would approach it: it was very convenient until it was no longer convenient. It was, yeah, 312 electoral votes.
Dave Bittner: Right, right.
Ben Yellen: Yeah. I mean, I think that's a part of it -- more than a part of it. Efforts at self-regulation are a preemptive measure to show policymakers that you don't need to be regulated by government entities, that you can solve all these problems yourself. If the government, instead of threatening regulation, is saying things like, our primary goal is innovation, and we're going to get out of the way and let these companies carry this industry into the future, then they feel less of a need to prove to policymakers that they're willing to self-regulate. And that's really what it comes down to.
Dave Bittner: Ethan.
Ethan Cook: I think that echoes Sam Altman, who, before the Paris AI Summit this week, kind of emphasized to Europe that, if you want growth, you have to "let builders build," I think was the line, and innovators innovate. To me that was a pretty strong message saying, if you want us to come here, then you need to hit the brakes on your regulations. And while these companies will always tell you that, obviously, they're about protecting people, look at California and the amount of money that was being thrown around there to influence that bill's passage, to put it in friendly terms. I think they're okay with regulation as long as the regulation is on their terms and as long as it doesn't really impact their innovation, and things along those lines, given that that's now the federal government's take on this.
Ben Yellen: One thing I will say that's confusing to me, that goes beyond the change in administration, is there was a movement, even among the major players in the industry, of doomerism -- that we have to get a handle on AI and stop it before it kills us all. Like, literally, we're going to wake up in 2026 and our machine overlords are going to destroy us.
Ethan Cook: It's Terminator.
Dave Bittner: It's Terminator. Yeah.
Ben Yellen: Yeah. It is the Terminator scenario. Is it just me, or are we seeing way less of that? And, like, what has changed in the actual conditions that would justify abandoning that level of doomerism? So I'm just curious about that dynamic.
Ethan Cook: Yeah. I think that's a great question. I've noticed that as well, and I've kind of played around with why that may be. Obviously, there's probably some insider knowledge that none of us are privy to. But I think there's a dynamic where, first, competition has dramatically increased over the past couple of years, and I'm sure that has driven an element of this. When OpenAI was, at one point, kind of the only fighter in the ring, it was really easy to say whatever you want to say and have that messaging. But now we have so many other companies popping up -- I mean, DeepSeek has kind of upended everything, it feels like -- so I think there's more emphasis on, we need to stay innovative and ahead of these other companies, rather than being able to rest on the laurels of what we ideally fantasize about -- you know, let's make sure we're not going too crazy with AI, etc.
Dave Bittner: Yeah.
Ethan Cook: And I think there's another dynamic, another kind of untouched thing, which I think is not getting nearly as much attention as it probably deserves, which is OpenAI trying to become a for-profit company.
Dave Bittner: Right.
Ethan Cook: And that, to me, was a pretty big indicator that they are switching from a passion project -- let's make an AI model, which is kind of where they started -- to now, let's try and make as much money off of this thing as possible.
Ben Yellen: Yeah. We're going to be held accountable by our shareholders.
Ethan Cook: Exactly.
Ben Yellen: Yep.
Ethan Cook: And I think that is another pretty big indication of why we're seeing this step away from self-regulation, or these more ephemeral forms of regulation, and things along those lines -- where it's, as long as we're still making a profit, it's okay; we don't need to do all the other aspects of it.
Dave Bittner: Well, before we wrap up here, I want to get each of you to answer individually. As we look into 2025 and beyond -- even looking through, let's say, the rest of President Trump's term here -- what do you suppose some of the major AI policy debates are going to be? What's going to dominate the conversation throughout this year and beyond? Can I start with you, Ben?
Ben Yellen: Sure. So a couple of things. Expanded use cases of AI for things like medicine and national security -- these use cases where a lot could potentially go wrong, where it's very high risk, high reward -- I think that's going to be a big area for debate. One thing I want to mention is this effort of DOGE, the Department of Government Efficiency. They're going into government agencies, gaining access to personnel records and other systems, and trying to use AI tools to make substantive decisions on things like hiring and firing government employees and redirecting federal resources. So that's also a new use case. I think we're going to see new debates around use cases. Are the consequences too big to rely on artificial intelligence for things like war and peace, national security, counterterrorism, and fending off cyberattacks? I think that's going to be kind of the new vector for our debate here.
Dave Bittner: Ethan.
Ethan Cook: Yeah. I would definitely echo that national security aspect. Given the current administration, I think national security is going to be a top priority for AI. The other thing that is going to become an increasingly important topic will be privacy and what role that plays over the next couple of years. During the past administration, there was a bipartisan effort to get a privacy bill passed, and I don't think that is going to die completely over the next four years. So I could see that being a pretty big aspect, with how AI uses datasets and what data it's allowed to access and things along those lines.
Dave Bittner: All right. Well, we have really only scratched the surface here. I want to encourage everyone to check out our Policy Deep Dive, "The future of AI policy," written by Ethan Cook. It is over on our website, thecyberwire.com. If you search for the Caveat show, you can find the Caveat Newsletter as well. We hope you'll check it out and also subscribe to that. Ethan, thank you so much for joining us. It's always a pleasure to have you with us here.
Ethan Cook: Thank you guys for having me. Great conversation.
Dave Bittner: That is Caveat brought to you by N2K CyberWire. We'd love to know what you think of this podcast. Your feedback ensures we deliver the insights that keep you a step ahead in the rapidly changing world of cybersecurity. If you like our show, please share a rating and review in your favorite podcast app. Please also fill out the survey in the show notes, or send an email to caveat@n2k.com. This episode is produced by Liz Stokes. Our executive producer is Jennifer Eiben. The show is mixed by Tré Hester. Peter Kilpe is our publisher. I'm Dave Bittner.
Ben Yellen: I'm Ben Yellen.
Dave Bittner: Thanks for listening.

