
Trump’s AI race against China.
[ Music ]
Dave Bittner: Hello, everyone. And welcome to "Caveat," N2K CyberWire's privacy, surveillance, law, and policy podcast. I'm Dave Bittner. Ben Yelin is on vacation this week, so filling in is our N2K colleague and author of the "Caveat" newsletter, Ethan Cook. Hey, Ethan.
Ethan Cook: Hi, Dave.
Dave Bittner: On today's show Ethan has the story of President Trump's recently released AI action plan. I've got a look at bipartisan support for AI regulation. While this show covers legal topics, neither Ethan nor I are lawyers so the views expressed do not constitute legal advice. For official legal advice on any of the topics we cover please contact your attorney. All right, Ethan. Let's jump in with our stories here. What do you got for us this week?
Ethan Cook: Yeah. So this happened actually technically last week, but I honestly think it's such a big story that it warrants talking about regardless, and it's President Trump's recently released AI action plan, which both makes recommendations of policy directions that he wants to implement and confirms executive orders that are trying to implement policies where he can. And it's covering pretty much anything you can think of when you're talking about AI, whether we're talking about export processes, whether we're talking about permits and how you can build data centers, whether we're talking about tracking AI chips. And lastly, woke AI and managing how AI filters content.
Dave Bittner: Well, there's lots to unpack in here as you say. And I think it's fair to say it's kind of a mixed bag no matter which direction you come at this from. What are some of the things that caught your eye in particular?
Ethan Cook: Yeah. So in terms of major policies, I think a couple of the standouts to me were, first, the fact that he wants to institute AI chip tracking so we can verify that chips are going where they should be going. This has been something that is, for one, incredibly bipartisan. This is not a Republican, Democrat, left or right issue. This is something that was supported under the Biden administration and we've just not gotten around to passing it yet. So I think that's a pretty big step in terms of closing a loophole that people have been complaining about for a while now, which is it's really great that we can ban companies from selling advanced chips to China when they can just sell them to, let's say, Japan, and then a seller in Japan sells them on to China. Right? So it's this, you know, kind of trading-hands process. We're tracking who we sell to, but we're not really tracking where the chips end up as the final end product, which is with China. So that one's a big one. I think the other one that really stood out to me was -- I think the quote or the statement was he wants to have one central federal AI policy rather than 50 state AI policies. And this goes back to a conversation we had a couple weeks ago on the show where we talked about preemption and how they tried to pass a 10 year federal moratorium to ban all state AI laws -- or actually ban's probably the wrong word, but basically severely limit state AI laws. And, you know, we talked on this podcast about how that was surely not the last attempt at that. It may be the last attempt in that version of it, but it was not the last attempt. And I think this is revisiting that conversation and saying, "Yeah. The Trump administration is very adamant that it wants to be the one running the nation's AI policy." It does not want 50 states doing whatever 50 states want to do.
Dave Bittner: Yeah. And we're going to talk about some of that in my story this week coming up later in the show. But there's important nuance here, I think, because while a lot of folks agree that we need federal action when it comes to things like AI and privacy and all these sorts of things, it aligns with a lot of the desires of the Trump administration and the folks on the right, because the way that they would likely unwind this would be very pro-business. It would be good for the AI companies to have both the clarity of having one rule rather than 50 with each of the states, but it would also preempt the states from being able to do various things. Is that your read of it as well?
Ethan Cook: Yeah. And, you know, to your point on that one rule, I think a big aspect of that would also be having one generally relaxed rule, because the Trump administration is very adamant that it wants the U.S. to be the leader in terms of AI, not a regulator in terms of AI. It wants us to be winning the competitive race. I think when Trump announced this AI plan his statement was something along the lines of the U.S. started the AI race and this plan is going to see that we win the AI race, or something like that. And I think that was a pretty good indication of where he wants policy to go. And in fact one of the executive orders was all about speeding up the federal permitting process for AI data centers, whether that's releasing federal lands or just making sure that the permitting process doesn't take too long, so AI companies can get their data centers spun up quickly and continue to create really fast AI products within the U.S. To your point, it's not just that the federal government wants to be in charge. It's that we want to make sure the federal government's in charge and that states aren't hampering this process.
Dave Bittner: Right. Right. Yeah. And I mean, you think about the competitive landscape with a nation like China who, you know, if they want to build a data center, they build a data center. There's no environmental impact process, or do we have enough water, or how is this going to impact the community's electrical supply? They can just do it. And so I think this plan is looking to empower companies in the U.S. to have, I guess, less burdensome regulation to be able to do those kinds of things.
Ethan Cook: Yep. I think that's pretty much hitting the nail on the head. And I think one interesting wrinkle that I'm curious to see where this goes, you know, I don't really know if it's fully materialized yet, is the aspect of exporting AI. We mentioned how they're trying to be able to track AI chips after exportation, and that's great. It's bipartisan. It's well loved. But one of the executive orders is also all about exporting the AI technology stack and increasing how much we can ship out to other countries. And the administration has gotten a lot of flack for this already from both sides of the aisle saying, "We don't want NVIDIA or Intel or anyone who makes these chips sending their most advanced chips to China." And/or Russia or wherever the AI competitor is. Right? A hostile actor, or someone who we're concerned about. And with this relaxing of rules in terms of what can be sent out and how much of it can be sent out, there's concern that these actors are going to be able to gain access to advanced chips, and DeepSeek or whatever the AI company or platform is is going to be able to undercut both the U.S. competitive advantage in AI and national security. And I think the executive order he signed on that, while it is an executive order and is limited in scope, does have some pretty big impacts, seeing how Biden was able to restrict a lot of things through his orders and through his agencies. So I'm not sure if this is the sign that we're going to start winding back some of these things, or if this is simply a posturing thing, or if it's going to be a balancing act with chip tracking. And I think that dynamic is going to be really interesting to watch play out over the coming months in terms of where we go with chips.
Dave Bittner: Yeah. I think it's interesting that when you bring up national security that that sort of takes it to the next level. I think it gets the attention of Congress in a way that just capitalist competitiveness perhaps does not. Right? It allows for more funding. It allows for more bipartisan support. So it makes sense that that would be an angle that the administration would be pursuing.
Ethan Cook: Yep. I would agree. And I think the last wrinkle to this one is the, you know -- the woke AI part of it. And I'm going to get away from the -- what's the right word for it? I guess the flashiness of the executive order name, which is, I think, preventing woke AI, or wokeness in AI, or something like that.
Dave Bittner: Yeah. I mean it's provocative. Right? It's -- it's --
Ethan Cook: It's meant to get a reaction.
Dave Bittner: Red meat for his fans and the polar opposite for folks who are not fans.
Ethan Cook: Yes. But I do think there is an interesting aspect to this outside of just the flashy name, which is he is going to limit who federal agencies can work with based on how their AI handles DEI or diversity or other quote unquote woke things. If they find that the AI is quote unquote biased or misconstruing facts, I don't know if federal agencies will be able to contract the way that they would otherwise. There are some limitations. There's a review process that is supposedly being put in, or he's mandating, you know, go 90 days, come back with definitions or whatever. And that, I think, is going to be an interesting aspect of trying to almost curry favor, where it's hey, we'll work with your AI tool or your AI provider as long as you don't, you know, do X, Y, and Z with wokeness. Right? And, you know, while it sounds kind of outlandish, and it is, I do think there is an aspect of censorship related to that. For most people, I don't think people look at AI and go, "AI is woke," or "AI is whatever." Right? But now we're putting restrictions on what AI can and cannot say, which means what that's actually doing is putting restrictions on what companies are building into their AI, how they develop AI, and what AI provides to people, because it's not just the federal government that's going to use that AI system. And what does that look like with this kind of restrictive policy all regarding diversity, etcetera?
Dave Bittner: Yeah. I mean, I have a couple thoughts. First of all, you can imagine someone like Elon Musk, who famously has the anti-woke AI that he's running on Twitter. Right? So you can see him raising the flag. I don't know whose flag, but raising a flag that would say, "Hey, you want an anti-woke AI? I'm your man."
Ethan Cook: Yeah. You can grok it.
Dave Bittner: Right. Right. Exactly. And I guess the other thing that I think about is, I can understand the desire to limit as much as possible any biases that an AI system would have, but I can also see real perils if, as you say, we start telling the AI systems preemptively that they're not allowed to talk about certain things. You know, what if you're with the Food and Drug Administration and you have this AI and you want to search about health issues for trans people, and the AI is prohibited from discussing that because someone has decided that that's a woke topic? And I'm sure each of us could come up with many different examples. I would argue that that doesn't serve the public no matter, you know, where you come down on these kinds of things. So I think that's an interesting aspect of this. Like you, when this plan was released I was going through it and reading through it and kind of skimming through it and going, "Okay. All right. This is interesting. This makes sense." And then I got to this entire executive order about fighting wokeness on AI platforms, and I kind of sat back on my heels and went, "Okay."
Ethan Cook: Yeah. It's like you read it and you go, "I didn't realize this was a problem." Right? Like, to me, I've never gone to ChatGPT or any AI product and gone, "Oh. It's giving me biased wokeness." I've never really done that.
Dave Bittner: Right. Well, I mean, that's a really good point, Ethan, because if anything the AIs make headlines for going in the opposite direction. You know, I sort of half joke that the AIs reflect who we are, who we truly are, rather than who we aspire to be. And so rolled into them are all of the biases, all of the racism, all of the prejudice that comes from flawed human beings all around the world. And you see that in the answers that they famously generate sometimes, and they leave the people who make these things trying to put guardrails on the systems, rightfully so, but at the same time not limit the usefulness of them. So for the feds it's interesting too, because it's kind of the opposite of what the president is pushing for in the other executive orders. Right? He wants to unleash these companies to be able to do whatever they want.
Ethan Cook: As long as they're abiding by what he wants. Right? And I think that's the -- you know, that's the classic Trumpism. Right? I'm good with you doing X, Y, and Z as long as it's within what I want it to be. Right? And I think that's a dynamic that plays into this, that power factor, of, "Hey, we want you to get data centers. We want you to export. We want you to make money, as long as you're just not doing this." Right? And as long as you don't do that, you can have all these other cool toys and bells and whistles. And, you know, I'm sure there's going to be some sort of lawsuit that comes back on this, or there's going to be some push back. You know, it's an executive order, so it doesn't have nearly the staying power that obviously a law does. So, you know, maybe it's ephemeral. Maybe it gets revoked. Who knows?
Dave Bittner: Yeah. Yeah. We'll see. How does it bump up against the First Amendment? I mean --
Ethan Cook: Exactly.
Dave Bittner: We don't know yet.
Ethan Cook: Yeah. I mean that's a whole other battle with AI.
Dave Bittner: Right. Right. Well, what are you seeing -- since this has been put out there have you been tracking any of the reactions from the usual suspects out there?
Ethan Cook: You know, you've got a mixed bag. Like I said, there's been some bipartisan support for the AI chip tracking program, where people across the spectrum are being like, "Yeah. This is great. We wanted this under Biden. We didn't get it under Biden. This is something that we've talked about for years because it's a recognized loophole that we have not addressed." You also have some bipartisan criticism when it comes to the relaxing of export controls, where, you know, there's a lot of concern, especially from, I would say, traditional Republicans who have said, "Hey, there are concerns here." You know? Like, we do have problems with China accessing these programs. Is this going to make that worse? Interestingly enough, I did see that there was a little bit of push back from Marjorie Taylor Greene, who was noting how the AI plan largely avoided talking about AI's water usage and how these data centers use incredible amounts of water, and the impact that's going to have on communities. And so I think there has been some reaction to the plan which is, "Great. We are advancing these companies. They're going to do so well with this plan. But what is the human cost?" And not just the human cost of AI being AI. What is the human cost of building these systems, of allowing these companies to run rampant? And that's been kind of all over the place from both Democrats and Republicans alike, and the way they've phrased it has obviously been to curry or to not curry favor. And I think lastly, regarding this woke conversation, I think we've just hit the tip of the iceberg, where obviously a lot of the initial reaction is people going, "This is dumb." Right? Like, why are we wasting time on an executive order about woke AI? Our tax dollars went to this. This is stupid.
And then I think, to your point in our conversation, there's been a group kind of forming where you take a step back and you actually look at it and you read it and you go, "There's some depth to this that warrants a bigger conversation." And I think that's also starting to emerge in the conversation.
Dave Bittner: Yeah. I mean, you know, look. President Trump is provocative, and when it comes to being provocative he is consistent. Right? And he's got us talking about it. So, you know, mission accomplished. Yeah. I mean, he's one of a kind. And --
Ethan Cook: That's a word for it.
Dave Bittner: Yeah. You know, like him or hate him, you can't ignore him. And so, yeah. It's going to be really interesting to see how this plays out. All right. Well, we will have a link to that in the show notes. I tell you what. Let's take a quick break to hear from our sponsor. We will be right back. [ Music ] All right. We are back, and, Ethan, my story this week comes from the folks over at The Conversation. And this again is touching on issues with AI. The folks at The Conversation wanted to look into what degree of bipartisan support there is for regulating AI. And, as you mentioned in the first half of the show, President Trump's big beautiful bill, the spending and tax bill, initially included a 10 year ban on state level AI regulations. And the way that was set up was if a state had any sort of AI regulation, they would lose access to, I think it was, $500 million in federal broadband and infrastructure funding. So a pretty big stick that the government could use. And this sort of thing is not new. I mean, I want to say this was way back in the day when they first put the 55 mile an hour speed limit in place. This was one of the things they did, you know. If the states wanted funding for roads and so on and so forth, they had to agree to the 55 mile an hour speed limit. And, if not, okay, but you lose this funding. Right?
Ethan Cook: Yes. And, you know, as we've talked about a couple of times, that approach has been used across multiple different issues as a way to encourage specific policies to be adopted.
Dave Bittner: Right. Right. But as the tax bill, or I should say the spending and tax bill, went through the process that it goes through, there was a lot of opposition, and it was bipartisan opposition. States' attorneys general, many of the legislators, and in fact 17 GOP governors came out against this. And in the end they removed the AI provision. It was a 99 to 1 vote in the Senate, so pretty clear that they did not want this to go through. So with that being the case, the folks at The Conversation did a poll and found some really interesting information. It's a poll from April of this year about AI, just trying to find out where people sit. And they break the poll down both in general broad bipartisan numbers, but then also Republicans and Democrats. 65% of people fear that AI will spread misinformation. 56% think AI threatens humanity's future. These are the folks watching "The Terminator." Right?
Ethan Cook: I -- you're not wrong.
Dave Bittner: Only 29% think AI will boost productivity. And 21% think it will reduce loneliness. 22% think it will improve the economy. So before we go further and talk about some of the, you know, right left views on this, what do you make of this initial batch of numbers here, Ethan?
Ethan Cook: Yeah. I mean, the false information aspect is interesting because we always talk about how AI provides you great results. Right? Like, you know, it's a fast search engine. It gets you the information you want. But there's always that background of hallucinations and AI messing up and giving you the wrong information. I mean, there was a whole story a couple weeks ago where in a court trial someone submitted court cases that were never real, never happened. And while it wasn't, you know, confirmed whether or not the lawyer used AI to get those citations, it did open the door to the conversation of what does this mean if people start doing this? And it's not just going to be that. Let's say it's financial documents: "Oh, I don't know how to do finances, so I'm going to have AI help me with it." And then AI produces some crazy stuff. You submit it thinking it's right, and now you've just committed fraud. You know, that is an interesting conversation with AI, and I think that number is going to go down as AI gets better. That's my first reaction off that one. The other stat that really stood out to me was that only 29% of people thought AI would make people more productive.
Dave Bittner: Yeah.
Ethan Cook: That one stood out to me a little. And obviously it depends on the career. Like, AI's probably not going to make a delivery driver necessarily more productive. Right? Or, you know, speed up a mailman's route. Right? So it's not going to impact every job, but [inaudible 00:23:49] white collar jobs, I could see AI having pretty substantial impacts there. And even some blue collar jobs. I could see it being able to improve, let's say, industrial systems, to make it easier to interface and react or go faster with some of these things, or make decisions on the fly, or partition out processes. I could see it making that more productive. So I was really curious to see that most people do not think it will make them more productive.
Dave Bittner: Yeah. And, you know, I'm looking at some of these numbers here. I should say this poll was done in conjunction with UMass. And as I look at the breakdown of Democrats and Republicans, you know, some of these fall along, I guess, the answers I would expect given what we know about where we are at this particular moment when it comes to folks on either side of the aisle. On some things there's very little difference. For example, on the question AI will make me more productive, 29% of both Democrats and Republicans agree with that. AI will make people less lonely: 22% of Democrats, 19% of Republicans. Here's an interesting one. AI should be strictly regulated by the government. 66% of Democrats agree with that, 54% of Republicans agree with that. So in this particular case Republicans, who tend to be shy of regulation -- you know, they don't want anything regulated. Right? Or certainly to a lower degree than the Democrats. And to be fair, it is a lower degree than the Democrats, but still more than half. Right? So that's surprising to me.
Ethan Cook: Maybe that taps into that fear element, where people are really concerned about the dangers that this can pose. And maybe this is partly due to a lot of the gut kick that has been happening to tech across the board, where there's just been a lot of insecurity and a lot of fear about how data's being used, how privacy is being handled, who can know what, how it's being used for monetary purposes. And maybe that's a reaction to that, saying, "We don't want AI. It's already bad right now. With AI, what does that look like? How fast does this go?" Out of the polls that you shared, I think the other one that's really interesting to me is that Democrats had 52% saying that AI will increase inequality in society, with only 22% of Republicans saying that.
Dave Bittner: Yeah.
Ethan Cook: That one is another, you know -- I mean that's a 30 point difference right there. That's a pretty big one. What do you make of that one?
Dave Bittner: Well, yeah. I mean that one had the biggest spread and I suppose if you wanted to you could track that along with beliefs in -- or preexisting beliefs when it comes to perceptions of inequality. I think it's probably safe to say that Democrats probably feel there is more inequality than Republicans do. So it makes sense to me that they would be more sensitive to that possibility. Right? Like that tracks to me. The other one that had a big spread was AI will increase the spread of false information. 74% of Democrats think that's true and 57% of Republicans do. So still in both cases more than half, but, what, 15 to 20 points more Democrats than Republicans. So interesting spread there. And again, you know, is that surprising? I don't think so.
Ethan Cook: No. I don't think so.
Dave Bittner: Maybe what's surprising is that more than 50% of Republicans say that it will.
Ethan Cook: Yeah. Yeah. No. I think the spreads make sense. Actually, I think looking at this, the thing that stands out to me is just how much agreement there is, where, you know, if you flipped the colors or flipped the names you would go, "Wow. I couldn't tell who was who." Or if you took that away and asked, "Okay. Which is the Republicans saying this? Which is the Democrats?" You wouldn't really be able to tell the difference, and there's a lot of commonality there. Like with AI will make me more productive, to your point, split right down the middle. Exactly the same percentage. Or if you go to AI will threaten the future of humanity, it's only a seven point difference. That's not substantial. I mean, obviously it's substantial in terms of political campaigning, but in terms of a poll that's closer than most people would think. And over 50% for both is pretty substantial and something to take note of.
Dave Bittner: Yeah. I mean, I guess it's safe to say that the regulation of AI will not become a wedge issue. Right?
Ethan Cook: No. I think it's less so. You know, I'm a big fan of the mentality that it's less about whether we agree and more about how we get to the agreement -- like, you know, how do we execute on the agreement? I think that's going to be the big question with how we regulate AI.
Dave Bittner: Yeah. Another thing this article points out that I think is an interesting way to frame it which is that it really comes down to the fact that both sides of the aisle are anxious about AI because there's so many unknowns and we don't know where it's going. It is a new thing. Everybody's nervous about potential job loss, about just the potential power of these systems and how they could affect our day to day lives and our political systems. And there's just so many unknowns. We've never had anything like this before. So people are nervous. And that makes sense.
Ethan Cook: Yeah. And I think there's that dynamic of not just where it's going, but also look how far we've come with it. I mean, in the three years since this really exploded on the scene at the end of 2021 and in 2022, you would ask AI to generate an image and you'd go, "Yeah. That looks awful." Right? Like, clearly that looks terrible. Or you would make a video and someone has, like, nine fingers on one hand and you're like, "Yeah. Okay." And now we're getting to the point with image and video where, if you aren't told that it's AI ahead of time, there are many people who cannot tell the difference. And I'm sure there are many people saying, "Oh, I could tell that was AI," and they're lying through their teeth because they don't want to admit that they got tricked by AI. There's that aspect, just in terms of imagery, of major advancement. What does that look like in three years? What does it look like in terms of what AI can do in computation and analysis on data sets, and how fast that's going to go? How accurate that can get. That's going to be a huge conversation. And in just three years, while hallucinations are still a problem, they are dramatically reduced from what you used to get. You know, at least personally, I would say there used to be a 25% chance that an answer was hallucinated. At least. Like, one out of every four answers I'm just not going to trust. Whereas now I'm, like, 90% sure most of the time that what I'm getting is pretty good, as long as I've asked the question well. And of course you validate it and check, but the error rate is significantly down compared to what it used to be.
Dave Bittner: Yeah. And it's getting better every day. So I'm curious if you're the president and you have to balance between your desire to allow AI to flourish and to limit regulation on AI in order to encourage that flourishing, and indeed to encourage the global leadership of the United States when it comes to AI, but then you're also looking at numbers like this where your citizenry is anxious about AI and they're saying in a bipartisan way it needs to be strictly regulated, how do you thread that needle?
Ethan Cook: Yeah. And, you know, I think this is a problem that also goes past just Trump. I think it also goes to state legislators, and they probably feel it a little bit more because they're so connected to their communities, so they can feel that anxiety a little more palpably. And there's that kind of pattern emerging out of state governments right now, which we talked about earlier, which is that states are becoming more restrictive on AI across the board. Red and blue states are having more restrictive AI policies. And that has been to the benefit of their constituents and the people who've elected them, and it's been overwhelmingly popular in most of these states. I haven't seen a lot of polling or a lot of evidence that would show that these restrictive state laws are getting push back. In fact, I would say they're probably gaining momentum because every year we gain more and more [inaudible 00:33:14] AI. And the Trump administration wants to roll this back and say, "No. We don't want these restrictive policies." Or, "If we're going to have restrictive policies, it's going to be at the federal level and we're going to determine what's restrictive." Part of me says that's something he's not really looking to balance. He's fine saying, "Yeah. I understand that you're nervous, but the consequences of not winning this race are far greater than the consequences of you being anxious." And that's a different tone than the Biden administration took. The Biden administration was very in touch with the anxiety that was around AI. Not only limiting who could get AI exports, but just the way they framed their executive orders and the way they said, "We have to put limits on it. We have to do risk assessments, mandatory risk assessments." All of these things had to happen before any AI processes were allowed to take place.
Dave Bittner: I wonder too, you know, how much does the average citizen connect the dots between the construction of a new data center and these more ethereal worries about AI? You know, because the data center's going in. Well, that means jobs for our community. So that's a good thing. But then there's the flip side: like we talked about, water, electricity, you know, potentially pollution. Those kinds of things. So there are reasons a community might not want to have a big data center in its backyard. But do they connect the dots between that physical thing, basically a construction and infrastructure project, and these more existential worries about the unknown aspects of AI? I wonder.
Ethan Cook: Yeah. I think it's a good question. I would lean toward yes, they make the connections, but I would say they don't care if it's not about fixing their current problems. Right? So if there are job shortages in a community, they care more about getting themselves a job, putting food on the table, and being able to take care of their medical expenses, all of these things, over "Oh, will AI go out of control?" Because that's not a problem for me to figure out; the problem for me to figure out is being able to feed myself and my family. And, you know, I don't think that's a wrong take. I think that's a very reasonable statement. Kind of saying, hey, it's not on me to figure that out. I'm just a guy trying to make his way through the world. And that is very valid, very fair. But I think there is a dynamic where that has been used before to exploit communities and get communities to buy in to projects. And you see mega corporations getting pushback from small communities now. In a completely different subject matter, you see this reemergence of local farmer's markets and the pushback against these mega convenience stores, because when these mega convenience stores came in, everyone said, "Great. Groceries are cheaper. We built the town up." Blah, blah, blah. And then the unforeseen side effect was, yeah, you lose all your mom and pop shops. All those small businesses that used to be there, that used to sell there, can't survive anymore because they've been pushed out, and now there's been this reaction going, "Yeah. Maybe that wasn't as great as we thought it was." And I think that could be a reality here, where it's accepted now, and then the question is, "Okay. Well, what does that look like once it's built, and five years from now?"
To your point, what does that look like for energy costs? These things draw massive amounts of energy, and does that impact energy rates and how much they charge? Does that impact water? If there's pollution, how do they handle that? What are these long-term questions that small local communities may not have the resources to ask, or may not know to ask, because they just hear, "Oh, we're getting jobs. Fantastic." They don't understand that the unintended side effects of these things can be more impactful than they could imagine.
Dave Bittner: No. It's an interesting point. I mean, you know, I've often thought that, you know, every community loves the idea of having a quaint main street with little shops and restaurants and those sorts of things, little businesses. But then at the end of the day, you know, Oreos are cheaper at Walmart. So --
Ethan Cook: Exactly.
Dave Bittner: And that's the reality of it. So you've got to meet people where they are. The aspirational aspects of this maybe aren't directly connected to the practical aspects when your main priority is just putting food on the table and making sure that your family's provided for. Interesting. Yeah. All right. Well, we will have a link to this story in the show notes. And of course we would love to hear from you. If there's something you'd like us to consider for the show you can email us. It's caveat@n2k.com. [ Music ] And that is "Caveat" brought to you by N2K CyberWire. We would love to hear from you. We're conducting our annual audience survey to learn more about our listeners. We're collecting your insights through the end of this summer. There's a link in the show notes. Please do check it out. This episode is produced by Liz Stokes. Our executive producer is Jennifer Eiben. The show is mixed by Tre Hester. Peter Kilpe is our publisher. I'm Dave Bittner.
Ethan Cook: And I'm Ethan Cook.
Dave Bittner: Thanks for listening. [ Music ]

