Caveat 7.27.23
Ep 181 | 7.27.23

A dark side to AI.


Ken Cox: 2006, Zuckerberg asked the world to start using our real names on the internet. Prior to that, it never happened. And the world kind of ran breakneck speed towards that. At the time, you still couldn't correlate a whole lot of data back to an individual. It still wasn't a significant issue until January of this year when ChatGPT comes out and all of these tools that now can correlate massive amounts of data extremely quickly.

Dave Bittner: Hello, everyone, and welcome to Caveat, the CyberWire's privacy, surveillance, law, and policy podcast. I'm Dave Bittner and joining me is my co-host Ben Yelin, from the University of Maryland's Center for Health and Homeland Security. Hey there, Ben.

Ben Yelin: Hello, Dave.

Dave Bittner: Today, Ben has the story of a proposed law regulating IoT devices from an unlikely sponsor. I've got the story of a novel proposal to limit social media platforms. And later in the show, my conversation with Ken Cox from custom private cloud provider Hostirian. We're discussing the dark side of AI and how to safeguard your privacy. While this show covers legal topics and Ben is a lawyer, the views expressed do not constitute legal advice. For official legal advice on any of the topics we cover, please contact your attorney. Alright, Ben, we've got some interesting stories to share this week. Why don't you kick things off for us here?

Ben Yelin: So I love my story this week. It is about a proposed bill in the United States Senate called the Smart Devices Act, and it is sponsored by none other than Ted Cruz.

Dave Bittner: Okay.

Ben Yelin: Not known as being an advocate for any type of digital privacy, although he might disagree with that contention. But certainly not somebody who frequently co-sponsors bills with members of the opposite political party, which is what he's done here.

Dave Bittner: Really.

Ben Yelin: So, the Smart Devices Act would introduce a layer of regulation for smart devices. It would require the manufacturers to put a warning on these devices to clearly disclose whether these appliances have listening devices, cameras, or other spying technologies. The bill would not apply to cell phones, laptops, or other devices that a consumer would "reasonably expect" to include a camera and microphone. But it's for things like your air fryer, which you might not expect is spying on you, but we've had all of these incidents where smart devices have listening capabilities or cameras that end up really confusing consumers. Sometimes they could be used in law enforcement investigations. And so Senator Ted Cruz has proposed a bill. He is the ranking Republican on the Senate Commerce Committee. And he has proposed this with the chair of that committee, a Democrat, Maria Cantwell of Washington. And the bill is also co-sponsored by liberal Georgia Senator Raphael Warnock. So, there's quite a coalition.

Dave Bittner: Unlikely bedfellows.

Ben Yelin: Extremely unlikely bedfellows.

Dave Bittner: Okay.

Ben Yelin: They brought this bill to the Senate Commerce Committee. And it was approved via voice vote. So there was no opposition. You would think that this is kind of a common sense piece of legislation. It's not an overly burdensome requirement on these companies; they just have to put a notice that would reasonably inform consumers that their devices had cameras or listening capabilities that might not be immediately recognizable to the people who have purchased them.

Dave Bittner: Okay.

Ben Yelin: So where this gets interesting is Senator Ted Cruz went to the Senate floor to extol the virtues of this bill. He talked about what's happened in Texas, the state that he represents, where energy companies, because they've had access to people's smart thermostat devices, are controlling the thermostat from centralized locations without the knowledge and perhaps against the wishes of Ted Cruz's constituents in Texas. Basically, and my energy company does this here in Maryland, preventing me from turning my thermostat down to a particular temperature when the system is strained.

Dave Bittner: Right.

Ben Yelin: So there'll be a cap at 76 degrees for air conditioning as opposed to how I prefer it, which is, you know, ice box level.

Dave Bittner: 65?

Ben Yelin: Yeah, exactly.

Dave Bittner: Well, and also, and I'm signed up for that program as well. But in exchange for that, you get some kind of a discount or something too.

Ben Yelin: Yeah! No, I enjoy the discount. It makes my energy bill go from extremely expensive to just very expensive. But yeah, you know, I'll take it. Part of it does bother me that like I can't control my own -- I mean just the concept of it, that I can't control my own energy consumption.

Dave Bittner: Yeah.

Ben Yelin: And I think for somebody like Ted Cruz, who has made a career out of arguing the government is too involved in people's business, I think it's appropriate for him to be concerned that some large entity is stepping in and taking over individual decision making. And in this case, it's state-regulated energy companies.

Dave Bittner: And you know, regardless of what you think about Ted Cruz, and I would say personally not a fan.

Ben Yelin: Yeah.

Dave Bittner: But I think you have to acknowledge when it comes to, you know, speaking on the floor of the Senate, he's a skilled orator.

Ben Yelin: Oh, yeah. I mean the guy went to Princeton, Harvard Law School, I believe. Very smart guy. Was the Texas Solicitor General. So, I mean, he certainly knows how to make arguments. So the really interesting thing that happened here, and I will admit that I only discovered this by going down the YouTube algorithm rabbit hole.

Dave Bittner: Okay.

Ben Yelin: Is that Ted Cruz brought this to the Senate floor, gave a speech extolling the virtues of this legislation and then asked for unanimous consent that the bill be read, considered, and passed and sent over to the House of Representatives. And the unanimous consent request was rejected by -- or objected to by none other than Senator Rand Paul of Kentucky. So Rand Paul's contention is that this bill would actually take power away from the consumer. That the consumer should decide which products to purchase based on the type of notification that the consumer gets. So if they buy an air fryer that isn't up front about the services that they're using, whether it has a camera or listening technology, then consumers and consumer groups through the free market can demand products that are more protective of individual privacy.

Dave Bittner: Ah, okay.

Ben Yelin: Rand Paul and Ted Cruz, two senators who usually align on everything, got into a real argument, a real you know what match on the Senate floor about this legislation. Paul was basically saying that this is introducing a new, unnecessary layer of government regulation. That it was going to end up driving costs up on consumers. Ted Cruz responded by saying that's absurd. This is a de minimis regulation. We're not preventing these companies from having cameras or listening devices or related technology on their devices. We are just having an option or having a requirement that consumers be adequately informed of these features. And this fight went on for quite some time. In our show notes we'll post a 10 minute YouTube video of the two of them sparring. Which for nerds like myself is quite a 10 minute display of entertainment.

Dave Bittner: Well, what did you make of their respective arguments?

Ben Yelin: I mean, I certainly found myself far more sympathetic to Senator Cruz. I do think this is a de minimis regulation and simple transparency in giving the consumers the knowledge that the product they're purchasing has the ability to, for lack of a better word, spy on them in their own home I think is very reasonable.

Dave Bittner: Yeah.

Ben Yelin: I don't think it's going to be much of an added cost for these companies to add that type of warning. Many of them already do in one way or another include that on various disclosures. I think, at least for my taste, the libertarian position on this, which is, you know, let the market regulate itself and if consumers really desire this level of transparency and privacy protections, they will purchase devices from manufacturers who include these protections I think is a little naive.

Dave Bittner: Right. How could they make that decision if they're simply ignorant of the state of things?

Ben Yelin: Exactly! I don't think they can. So I was in the very rare position of just nodding my head to Ted Cruz when he was making a speech on the Senate floor. Which felt very weird to me. Then there's the extra matter, and I know this is really getting in the weeds here, of Senate procedure. So Ted Cruz made the point that if this actually went up for a vote, it would probably be 99 to one with Rand Paul being the only senator in opposition. That might be a slight exaggeration; I could probably nitpick, you know, maybe up to five senators in opposition. But I think the general point stands. It's passed the Senate Commerce Committee unanimously. It's going to have very broad support in the Senate.

Dave Bittner: Right. Bipartisan.

Ben Yelin: It's bipartisan, exactly.

Dave Bittner: Yeah.

Ben Yelin: The way the Senate works is everything requires unanimous consent. If you don't achieve unanimous consent, you have to go through a very arduous, time-consuming process for getting anything to the Senate floor, whether it's a nomination or legislation. So you have to invoke cloture, first on a motion to proceed to that legislation. And then a certain number of hours has to go by. And it's legislative hours, so that can in real life take a period of days. Then you have to vote on cutting off debate on the motion to proceed to a bill, which is extremely silly. That's another 30 hours. Then there are opportunities to offer amendments, unless you use procedural tools to [inaudible] amendments. But then you'd have to file cloture on the final bill. That takes some extra time. And Chuck Schumer, the majority leader, controls the Senate floor, and he has his own priorities of what he wants passed, namely Joe Biden's judicial nominations and a defense authorization bill. So he's not going to prioritize a rather parochial -- and I wouldn't say parochial, but a piece of legislation that's relatively small scale like the one that Ted Cruz is proposing. So the outcome is that by Rand Paul going to the Senate floor and objecting, he effectively kills the bill. In an ideal system, is it good that one senator's rather, I guess I would say quixotic, objections end up carrying the day because we have this process of unanimous consent?

Dave Bittner: Right.

Ben Yelin: I think that says a lot about the relatively arcane procedures of the Senate. And it's just not very good for democracy. This seems like it is a bipartisan idea, on the merits, it seems to be the correct thing to do to give people this constructive notice. And it's just weird that one senator can come to a floor -- come to the floor and put a stop to this entire piece of legislation. So. It's interesting for that reason to me. I would highly recommend watching the video if you want some Cruz on Paul action. If you're into professional wrestling --

Dave Bittner: In this corner!

Ben Yelin: -- figuratively among politicians.

Dave Bittner: Right, right.

Ben Yelin: Enjoy.

Dave Bittner: Right. Aren't we in a similar situation with some other legislation, some military stuff where there's a senator holding up, I want to say some funding because he is against the military paying for soldiers to be able to travel to different states to get abortion healthcare, that sort of thing?

Ben Yelin: Yeah, so it's even beyond that. This is Senator Tommy Tuberville of Alabama. Former coach of the Auburn football team, I believe. So he's coach/Senator Tuberville. He's not just objecting to money, he is putting a permanent hold on all military executive nominations. And this includes promotions, which is normally a matter of course. Those go through the Senate [crosstalk] consent. So a lot of military personnel who are eligible for promotions, normally this would be a clean process. They go in front of the Senate floor: I ask unanimous consent to have these promotions confirmed. And there are almost never any objections. Senator Tuberville is putting a blanket objection on all these promotions and nominations until the Pentagon changes its policy on reproductive health to match his political priorities. That also seems like kind of a less than optimal outcome, that the military can be completely handcuffed by one senator's whims here. But he has that power because everything in the Senate does run by unanimous consent. There's simply not enough time for a Chuck Schumer or anybody else to file cloture on every single nominee and then go through the two hour/three day process of even getting that nomination to the floor under normal procedures. It would take forever because there are so many potential nominations that need to be confirmed. So it's an effective political tactic from Tuberville, but it's certainly not optimal for things like our military readiness. I think it's certainly a flaw in our constitutional system. The way to change it would be what those of us in the political world call going nuclear, which is to change the Senate rules by a majority vote, preventing somebody like Tuberville from engaging in this type of action. It's not clear that there is a majority of senators that would be willing to do that and take away the power of individual senators to put holds on nominations.
So we're kind of stuck in this holding pattern. But yeah, I mean, it is -- I think most people don't really realize that this is how the Senate works.

Dave Bittner: Yeah.

Ben Yelin: It's just nearly impossible to get anything done if you have one or two senators who refuse to be cooperative. Ironically, when you have one or two senators, Ted Cruz is usually one of those two senators. So it's just funny to see him on the other side of an issue here. Especially on something bipartisan.

Dave Bittner: You know, one person's bug is another person's feature. What's the feature side of this? For folks who say this is the way the Senate should run. And are there people saying this is exactly the way the Senate should run? Like what's their argument?

Ben Yelin: I think their argument is it's better for there to be consent, it's better for there to be agreement on things like nominations, and removing that power from individual senators takes away the voices of that senator's constituents. That is not persuasive to me. I mean, I think a majority vote should carry the day among 100 senators, in an institution that, frankly, is already not very democratic when the state of Wyoming has the same number of representatives as the state of California. We know we're not dealing with a democratically representative body. And the fact that that's further constrained by these rules I think is not defensible, in my view. But certainly there are those who disagree. I mean, I think if you want to preserve a system in which every senator has leverage points, then there's an incentive to maintain the status quo. And just because you're not using your leverage in the same way as Senator Tuberville doesn't mean that you don't want that leverage in the future.

Dave Bittner: Right. Right. How are you going to convince senators to give up power?

Ben Yelin: Exactly. So let's say you're somebody like Ron Wyden, and he wants to go through a gambit like this in the future on his own hobby horse.

Dave Bittner: Yeah.

Ben Yelin: So let's say he has a major objection to the actions of the security state during a, you know, Ron DeSantis administration.

Dave Bittner: Right.

Ben Yelin: He would have the power in that circumstance to do the same thing. And put a hold on all CIA, NSA, DNI nominations until they change a policy to please his interests. And I don't think senators necessarily want to give up that power.

Dave Bittner: So it sort of forces people to the table, I guess.

Ben Yelin: Yeah, I mean it does. People have to -- for the first time since he was a coach at Auburn, people actually have to listen to Tommy Tuberville.

Dave Bittner: Oh, Ben.

Ben Yelin: You have to reckon with him, and I will not comment on his coaching record, either. But yeah, I mean, he has a seat at the table now.

Dave Bittner: Right.

Ben Yelin: And you have senators on both sides of the aisle and the administration bargaining with him. Just what do you want? What can we do to get you to drop this hold? And he's certainly enjoying his own power.

Dave Bittner: He's dug in.

Ben Yelin: Yep, that's the nature of the system that we've created.

Dave Bittner: Alright. Well we'll have a link in the show notes to both the story and the YouTube clip of the rumble on the Senate floor.

Ben Yelin: As we're calling it on this podcast.

Dave Bittner: Right, right. Well my story this week comes from the folks over at Lawfare. This is actually an article written by Ayelet Gordon-Tapiero and Yotam Kaplan. And it's titled "Unjust Enrichment by Algorithm." This is -- I guess it's fair to say a novel approach to trying to put some restrictions on some of the big social media platforms here. What they're suggesting is to use the legal principle of unjust enrichment to come at these platforms. And they say in the article here the concept of unjust enrichment revolves around the idea of unjust or wrongful gains and rests on the fundamental idea that misconduct must not be profitable. Before we dig into some of the details here, Ben, can you unpack this a little bit? What are we talking about with unjust enrichment and what makes it so novel in this case?

Ben Yelin: So this is really fascinating as an issue. It was originally proposed in a George Washington Law Review article that's coming out next year. Unjust enrichment is a common law concept that people who engage in activities that are detrimental to others should not be able to reap the rewards of those transactions. We see it in torts, in contracts. It is something that courts have recognized as a way to prevent people who have wronged others from getting rich off those alleged wrongs.

Dave Bittner: Can you give me an example?

Ben Yelin: So it would be something like a company drilling for oil adjacent to somebody else's private property. And the oil spills onto that property and causes some type of noxious odor or damage. The plaintiffs in that case might have a way of collecting some of the ill-gotten gains by that company by saying that they were unjustly enriched. That they were rewarded for their misconduct or malfeasance. The idea being that we don't want our legal system to reward misconduct. Misconduct should not be profitable. We've seen that in all different types of contexts. I remember it from just my intro torts and contracts classes. So it is something that is a real concept in the legal system.

Dave Bittner: Yeah.

Ben Yelin: What's novel here is how it would apply to a framework dealing with algorithms and artificial intelligence. You would have to come up with some type of judicial standard that shows that a company like Meta or whomever was actually engaging in misconduct. And what these companies would say is A, they're not responsible for what's posted on their platforms. I think that's backed up by Section 230.

Dave Bittner: Yeah.

Ben Yelin: And B, it's just really hard to define exactly what misconduct is in this context. I think the authors of this piece want it to be things like misinformation, fake news. But I don't know if you've been paying attention to the news for the past several years, but that in and of itself is very controversial. Who gets to make the decision about whether the news is fake or whether something is misinformation? And how do you prove that the conduct of these companies has actually caused the alleged wrongs? I mean, you can certainly make circumstantial arguments saying that the algorithms on Facebook and the fact that it led people to extremist content led to something like January 6th. But that would just be very hard to prove as the proximate cause. So to me, it just seems like a little bit of a stretch that a couple of law professors maybe got a little too high on their own supply in their offices and were trying to apply this interesting legal theory in an area where it would just be very difficult to adjudicate, in my view.

Dave Bittner: Well, they lay out three main categories here. The first category is discriminatory presentation of job, housing, and credit ads. The second category is when personalization allows platforms to manipulate vulnerable groups. And the third category is stripping wrongful gains where platform behavior results in socially harmful acts. That covers a lot of stuff there.

Ben Yelin: Yeah it sure does. I love, by the way, the second example because they talk about how the personalization of platforms manipulates vulnerable groups through things like the Tide Pod challenge, the blackout challenge.

Dave Bittner: Right.

Ben Yelin: The cinnamon challenge. All things where these went viral on social media platforms, but ultimately, you know, I guess you could say they're vulnerable groups. People did still choose to ingest Tide Pods. So I don't know to what extent you can hold Facebook or whomever accountable in a legal sense for unjust enrichment by simply being a conduit for which that information was shared. And I think this gets into Section 230 stuff.

Dave Bittner: Yeah, but isn't it, I mean, isn't the case here that what they're saying is that it's the algorithm that's kind of the sticky wicket here. Where it's the -- when the algorithm notices that something is taking off, and then starts to recommend it, so that the platform gets more views, and more views equals more profit. I think what they're trying to do here is hold them responsible for the algorithmic recommendation and the profiting off of that algorithmic recommendation. Rather than, I don't know, an organic growth of popularity of something.

Ben Yelin: It's such a sticky issue. I mean, remember, we talked about this with Gonzalez v. Google?

Dave Bittner: Yeah.

Ben Yelin: At what point is the algorithm actually content created by that platform? You know, how much of it is their own creation or how much of it is just the simple service, like you see on any search engine, for example, of because you search for this, we think you will like this. And at what point does that cut down on the shield of liability from something like Section 230. All I'm saying is I think that the companies would hire the most expensive lobbyists to fight tooth and nail against this type of legal doctrine being adopted.

Dave Bittner: Right.

Ben Yelin: And I just, it's a really interesting theory. I think it's just something that is not entirely realistic. In my view.

Dave Bittner: What about the -- I think their overarching principle here, their approach, is if we believe that these platforms are doing harm, can we take away the profit motive that comes from what we're saying is those harmful actions?

Ben Yelin: Yeah I mean, but everything they do is for profit. And in some sense, the harms are pretty diffuse. They're not always fairly traceable to the actions of these companies. It's very hard to prove that the wrongful action, the thing that hurt somebody else, including vulnerable populations, is directly attributable to what happened on one of these platforms. It's just hard to make that work in practice. Do I think that in theory we should stop rewarding these companies for engaging in behavior that hurts other people? Would I like to take away the profits of a company like Meta for making it so easy to provide false information? Would I like to personally confiscate the wealth of Elon Musk, who has ruined my favorite social media platform? Yes, I would. I would like to seize all of his assets for myself and buy a nice yacht.

Dave Bittner: Right.

Ben Yelin: I just don't think from a legal standpoint it's something that's going to be easy to deal with in practice. And it's novel in the way that the legal system would interact with the algorithms on these platforms. We don't know if it would be an effective deterrent. And we don't know whether the courts would hold these companies liable for some of the negative outcomes that have allegedly resulted from their activities.

Dave Bittner: I see. Alright. Well we will have a link to this article in the show notes. And of course, we would love to hear from you. If there's something you would like us to consider for the show, you can email us. It's

Ben, I recently had the pleasure of speaking with Ken Cox. He is from the custom private cloud provider Hostirian. And our conversation centers on what he describes as the dark side of AI. And some methods to better safeguard your privacy. Here's my conversation with Ken Cox.

Ken Cox: It's a very unique space. One that I didn't foresee coming for a very, very, very long time. You know, 2006, Zuckerberg asked the world to put -- start using our real names on the internet. Prior to that, it never happened, right? And the world kind of ran breakneck speed towards that. At the time, you still couldn't correlate a whole lot of data back to an individual. And over the years, we continued to put more and more information online and it just continued to grow and grow and grow. It still wasn't a significant issue until January of this year when ChatGPT comes out and AutoGPT comes out and all of these tools that now can correlate massive amounts of data extremely quickly. And we didn't have that capability before these large language models and the generative transformers. So it's a very unique space. The AI brings a ton of good tools to the table. Lots of creative uses; it helps with so many things across the board. But when you stop and think about privacy and my security and my freedoms as a human on this planet, it gets pretty scary pretty quickly how a person that has access to a significant amount of data can pretty accurately predict a lot of things about a human.

Dave Bittner: Can you give us some specific examples of some of the use cases here that concern you?

Ken Cox: I think a very simple one would be if -- and we know that our data is in the wrong hands. So if I have a whole bunch of information on a human, their name, their birth date, all of that, their kids' names, their kids' birthdays, their husband's birthday, their ex-husband, their credit card numbers, all of these things, I could feed some human behavior modeling into an AI and be much more accurate at predicting their passwords.

Dave Bittner: So you can envision going to a tool like ChatGPT and say you know, give me the 10 most likely passwords for Dave Bittner or Ken Cox?

Ken Cox: AutoGPT is the bigger risk.

Dave Bittner: How so?

Ken Cox: It doesn't rely on ChatGPT and it's open and I can put it on any computer on the planet. And it doesn't have any third-party regulations. And it's completely open source.

Dave Bittner: So basically, spinning up your own instance of that kind of large language model --

Ken Cox: Yes.

Dave Bittner: -- at home.

Ken Cox: You can do it on your desktop right now. Any Windows or Apple desktop can run AutoGPT, and you can bypass all of the restrictions that companies or governments put in place -- and I'm not sure that companies or governments should be putting restrictions on these things. I don't know what the end solution is. But what I do know is that it's dangerous. So you know, my goal is just to educate people on their privacy. I think long-term, you know, the Supreme Court has ruled repeatedly that we have a right to privacy on our personal computers and on our mobile devices. But the American public continues to enter into potentially binding contracts with third-party companies, giving their data away at will, letting them use it however they choose. So I believe the big risk later is that the Supreme Court might rule, well, the American people don't really care about their privacy. Therefore, they have no reasonable expectation to it on their personal devices or on their computers. So we could lose that.

Dave Bittner: Yeah. My sense is that a lot of people have sort of a sense of resignation when it comes to this. You know? We're presented with these multi-page EULAs, dozens and dozens of pages. You know? And no one reads them. And even if you did read it, chances are you wouldn't understand it. And yet, in order to get to the thing we want to get to, we have to click through that we agree. To me, this doesn't stand the scrutiny of being a meaningful contract, and yet here we are.

Ken Cox: But you are in many cases potentially giving your data away to publicly traded companies who have a legal obligation to do the most profitable thing with your data possible, which would be to sell it. Or use it to advertise to you. And use these AI engines to do whatever they see fit with it. It didn't really hit me until I sat down and thought about the amount of information that I've put on the internet over the past 20-something years. And I feel pretty safe about a lot of it. You know, I grew up in the 80s. So I grew up pre-cell phones, and pre-cameras and videos of everything. And I got to grow up in an internet world where logging every single interaction wasn't feasible. But today it is. The hard drive space is there. And then for a good 15 years we got to live in a world where the processing power wasn't capable enough to do anything significantly dangerous with that amount of data. But now we're in a world where that technology is there. So how do we deal with that?

Dave Bittner: Yeah. What do you propose? What are some of the options we have available to us?

Ken Cox: So one thing that we've created to help this problem, and you know, again, I'll date myself, I'm a kid from the 80s, and knowing is half the battle. So we created something called PPGS, which is a privacy policy grading system. You could find it at We wrote a rubric system, a grading level A through F for a privacy policy. We allow a 13 year old child to go to a website and give their privacy away, and we expect them to be able to read a legal document and understand the possible ramifications of giving a corporation personal private information about themselves, letting that company store it for an indefinite amount of time and use it in its best interests. When you think about how this looks for a 13 year old over 20 years, that's a pretty scary thought. So we wanted to create a rubric system that a 13 year old understood, an A through F system. Right? We do risk assessments all the time, probability versus the catastrophic risk potential. So we're allowing these people -- anybody -- to sign these contracts, and we're not properly advising them of their risks. And I think that that's wrong. I don't think you should have to have a legal degree to send an email.

Dave Bittner: What about what some of the other parts of the world are doing. Things like GDPR. Is that -- do we need a version of that here?

Ken Cox: My thought is, I'm a huge fan of capitalism. I don't know how it all works out with the amount of AI and robotics that we're going to see over the next 10 or 15 years, but I think that the market should fix this problem before the government has to step in. I think market tools like PPGS and other providers can start taking accountability for how they handle data before a government steps in and makes those decisions. I trust the market more than I trust the government to do that.

Dave Bittner: Is it fair to say, though, that in this case we have ample evidence that perhaps the market is not self-correcting? At least not up to this point.

Ken Cox: I agree with you, unfortunately. And that's why we're doing our part to make that attempt. Grading a million privacy policies wasn't possible for us prior to this AI movement. Right? Before GPT-3.5, we didn't have the capability to write robots that go out and search for privacy policies -- publicly facing documents that people are signing, giving away their data rights -- then fetch them, process them, read them, and display a result against our rubric. So we give a letter grade, and we also give a basic synopsis of the privacy policy written at a ninth-grade level. Just one paragraph that lays out the risk: if you share your data with this company, they could potentially publish your data forever.
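[Editor's note: the rubric idea Cox describes -- weighted risk factors rolled up into an A-through-F grade -- can be sketched roughly as below. This is not Hostirion's actual PPGS code; the clause categories, weights, and thresholds are invented purely for illustration.]

```python
# Hypothetical sketch of a PPGS-style rubric. The categories, weights,
# and grade thresholds here are assumptions, not the real system.

RISK_WEIGHTS = {
    "sells_data_to_third_parties": 40,
    "indefinite_retention": 25,
    "broad_license_to_content": 20,
    "no_deletion_mechanism": 15,
}

def risk_score(flags):
    """Sum the weights of every risky clause flagged in a policy."""
    return sum(RISK_WEIGHTS[f] for f in flags)

def letter_grade(score):
    """Map a 0-100 risk score onto an A-F grade a 13-year-old can read."""
    for threshold, grade in [(20, "A"), (40, "B"), (60, "C"), (80, "D")]:
        if score < threshold:
            return grade
    return "F"

if __name__ == "__main__":
    # A policy that sells data and keeps it forever scores 40 + 25 = 65.
    flags = ["sells_data_to_third_parties", "indefinite_retention"]
    print(letter_grade(risk_score(flags)))  # prints "D"
```

In a real pipeline, the flags themselves would come from an LLM pass over the scraped policy text; the grading step stays deliberately simple and auditable.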

Dave Bittner: What are your recommendations for individuals, then? Folks who are concerned about protecting their privacy. Any words of wisdom?

Ken Cox: Do some research. If the service is free, you're not the customer, you're the product. Right? So if you're using an absolutely free service, understand what you're doing. Take some time to educate yourself on how companies might use your data. And I'm not saying don't use these services, I'm just saying be careful about the information you share on them. Especially for our youth. I'm terrified of the decisions a 15-year-old would make in their life online, and how those could be brought back against them later in life. It's just a terrifying thing. And now we have the storage capabilities and the processing power to do that.

Dave Bittner: Yeah, I mean, it really is a different world. It sounds like you and I are around the same age. Like you, I grew up in the 80s, and I think I'm thankful every day that there wasn't the kind of recording there is today of the things that we did -- the things we got away with and the things we thought we got away with --

Ken Cox: I didn't get away with much.

Dave Bittner: But there are those who say that each generation goes through these sorts of changes, and this upcoming generation will adapt. I guess one of my concerns is that, from a regulatory, legislative point of view, that's all reactive. So these kids coming up today -- a group of them are going to have to suffer the consequences before things are put in place to protect them.

Ken Cox: I think that we are in a kneejerk moment right now. If you've been in IT for a period of time, you understood how long large queries took and how challenging it was to use caching engines to display large amounts of data. We worked with some Global 500 companies, and they use Apache Spark to process large amounts of data. Until GPT came out, it was wildly expensive to do a fraction of what you can do today. So I think it caught a lot of people off guard. Unfortunately, I think a lot of VC money ran to AI projects right away. And I believe that we can utilize AI to fix a lot of the problems with internet 2.0. I don't think internet 2.0 is going to go away for a very long time. Services like email and websites are going to be around forever. They might be implemented behind a web 3.0 interface, but those services are going to exist for a long time. And I would like to see companies using AI capabilities to help harden those internet 2.0 services and make them even better.

Dave Bittner: Are there any things that parents can be doing to help protect their kids?

Ken Cox: They need to learn how to set healthy boundaries. Educate themselves as much as possible. This is not a situation that's going to go away.

Dave Bittner: Yeah.

Ken Cox: It's out of the bag; there's no stopping it. I don't believe a government can stop it. It's going to go -- it's too powerful not to. And it's open source, so it's already been forked, it's already all over the planet.

Dave Bittner: You know, my wife and I were joking recently about how, when our kids were coming up, we would say that when it came to locking down certain things on the internet -- trying to keep their eyes off of things -- between the two of us we might be able to outsmart them, but there's no way we were going to outsmart them and all of their friends combined. And that was before we had these kinds of readily accessible tools.

Ken Cox: Yes. And these tools -- I think they're going to give us so much greatness. But the negative is, I understand what nefarious people do on the internet. Right? I've been on the internet for a very long time as a hosting provider and with start-ups. So it's not a fair place to live.

Dave Bittner: Where do you suppose we're headed here? Any thoughts on how things might shake out?

Ken Cox: Short-term, we've already seen a huge reduction in staff. Right? Programming jobs are gone for the most part. I think the jack-of-all-trades position is going to be wildly valued -- somebody who can use AI to orchestrate multiple different facets of technology. Someone who can use it to [inaudible] an operating system, write pieces of software, and integrate it with text and images, and paint that whole picture. So I think producing products using AI -- being an AI orchestrator -- is going to be a very valued position moving forward. Long-term, I believe we're going to end up with a utopia or a dystopia, and I hope it's utopia. But I think it's going to be a bumpy ride.

Dave Bittner: Yeah. Folks say we're hoping for Star Trek but we might end up with Blade Runner.

Ken Cox: And you know, the fact that it is open source means that both sides get to fight for what they believe is right. If you have the desire, you can get your own instance of AutoGPT up, or even build your own open AI stack -- that would be more cumbersome to do -- and do whatever you want with it. Teach it however you want.

Dave Bittner: Ben, what do you think?

Ben Yelin: That was a really interesting interview. I'm very intrigued by the product his organization has produced, which, in lieu of you actually having to read these EULAs or terms and conditions, gives you a kind of checklist system: somebody can research it themselves, and let the fancy lawyers read the information, summarize it, and provide some type of score. In theory, that sounds like a major improvement. There's still kind of an equity question there, though, because the people who most need that kind of information are going to be the least likely to seek it out.

Dave Bittner: Right.

Ben Yelin: But it was a very interesting interview.

Dave Bittner: Yeah. Alright, well our thanks to Ken Cox from Hostirion for joining us. We do appreciate him taking the time.

That is our show. We want to thank all of you for listening. We'd love to know what you think of this podcast; you can email us at [email address]. N2K's strategic workforce intelligence optimizes the value of your biggest investment, your people. We make you smarter about your team while making your team smarter. Learn more at [URL]. Our senior producer is Jennifer Eiben. This show is edited by Elliot Peltzman. Our executive editor is Peter Kilpe. I'm Dave Bittner.

Ben Yelin: And I'm Ben Yelin.

Dave Bittner: Thanks for listening.