Caveat 2.15.24
Ep 205 | 2.15.24

AI and privacy concerns.

Transcript

Harvey Jang: I would say every privacy law since 2016 has used GDPR as a reference. Sometimes copying it, sometimes deviating from it deliberately, but at least looking at it. And it's surprising, right, that companies and organizations actually like privacy laws. I think in some sense, maybe it's setting a consistent baseline around the world as more countries adopt interoperable privacy legislation that is more similar than different, which is good.

Dave Bittner: Hello, everyone, and welcome to "Caveat," The CyberWire's privacy, surveillance, law, and policy podcast. I'm Dave Bittner, and joining me is my co-host, Ben Yelin from the University of Maryland Center for Health and Homeland Security. Hey, Ben.

Ben Yelin: Hello, Dave.

Dave Bittner: On today's show, Ben has the story of the FCC banning AI robocalls. I've got the story of efforts from the U.S. to lead the way in global AI policy. And later in the show, my conversation with Harvey Jang, Vice President, Deputy General Counsel, and Chief Privacy Officer from Cisco, sharing privacy concerns around generative AI, the trust challenges facing businesses, and the attractive returns from investment in privacy. While this show covers legal topics, and Ben is a lawyer, the views expressed do not constitute legal advice. For official legal advice on any of the topics we cover, please contact your attorney. [ Music ] All right, Ben, we've got some interesting things to share here. Do you want to kick things off for us?

Ben Yelin: Sure. So a few weeks ago, we had the New Hampshire primary election, and many New Hampshire residents received a call purported to be from President Joe Biden telling them not to vote. It turns out this call was generated by AI, it was entirely false, and it potentially misled people into believing that there wasn't really a reason to vote. In this case, it didn't impact the election; Biden, without even being on the ballot, won by a lot. But it kind of set the stage for what might happen in November, where these automated messages, these robocalls that come into people's phones, might spread false information based on the use of artificial intelligence. So with that in mind, the Federal Communications Commission took action, and they have outlawed unwanted robocalls generated by artificial intelligence. To do this, they are referencing a law dating back to 1991, the Telephone Consumer Protection Act. And under this act, any voices that are deemed to be artificial are banned from calling people's devices, cell phones or landlines. So the interpretation here is that AI-generated voices count as artificial for the purpose of the Telephone Consumer Protection Act. One thing that surprised me here is that this holding was unanimous. The FCC is a bipartisan board, it has Republican and Democratic members, but they came up with not only a unanimous decision, but one that came within a few weeks of this high-profile incident. And we got this really interesting quote from the chairwoman, Jessica Rosenworcel, who said that while this seems like something that might happen far off into the future, it's already here. And I think the situation in New Hampshire kind of hammered this home. There are bad actors using AI-generated voices in unsolicited robocalls, they're extorting vulnerable people, they're imitating politicians and celebrities with accurate replications of their voices, and it really required this agency to take action. Now this is something that could always be reversed if there's a new makeup of the Federal Communications Commission. We saw reversals on other issues between the Trump and Biden administrations, most notably on net neutrality. But I think this has some staying power, just because it seems like such an obvious step to make use of the statute and to ban these very misleading calls.

Dave Bittner: So I have a couple of thoughts here. First of all, with them using such an old law, I'm trying to imagine what an AI robocall would have sounded like when this law was passed. And I'm thinking, Ben, this is before your time, but there was an arcade game called Berzerk back in the day, and it had a synthesized voice that said, computer alert, computer alert. It sounded like something from the original Battlestar Galactica series.

Ben Yelin: Nerd alert.

Dave Bittner: Thank you very much. I will proudly wear that badge. But the other thing, I guess, if we go up a level here, is it illegal to make a campaign call that doesn't tell the truth?

Ben Yelin: It is not. As long as that call is -- so I mean, I guess false and deceptive advertising is a cause of action, and you might bring that action against a specific party, oftentimes kind of as a publicity stunt. Candidates will make complaints to the Federal Election Commission based on false or deceptive advertising. It's very rarely enforced, just because we have trouble defining what truth is. And something that might seem to be just 100% false might actually be a political attack at least somewhat grounded in reality or based on perception. So like, Biden supports replacing the American people with illegal immigrants. You and I can sit around and say that that's entirely false, but that's not necessarily a fact-based inquiry. As opposed to, you should sit out this election. That really straddles the line, and the fact that it was artificial intelligence, while it was purporting to be from the President of the United States, that's what's false and misleading here. So it's not really forbidden. I guess it's technically forbidden to give out straight false information, but it's so rarely enforced. I think this is an egregious example where we can make use of the law, knowing that just by the fact that it was done by artificial intelligence, that's what creates this misleading effect.

Dave Bittner: Yeah, I guess what I'm getting at here is trying to parse out where does free speech intersect with the desire to not screw up an election, right?

Ben Yelin: Yeah, I mean, that's a very interesting dilemma. I don't think there is a clear answer. You know, I think we can manipulate our campaign laws so that at least people can evaluate statements based on the people purported to be saying them. It's newsworthy if President Biden were to have recorded anything on a device that was sent to somebody's phone. I mean, that's newsworthy whether what he says is true or false. He made the affirmative decision to do that. People can judge for themselves the veracity of that statement. He could be held to account for it. The media can ask him questions. None of those fail-safe mechanisms are in effect when nefarious actors are creating false videos through artificial intelligence. I think that's really the dividing line here between information where you could say, well, this politician's statement, it did come from the politician, but factcheck.org said it was false, so we should ban it. I think it's a far cry from that to what we have here where the statement came from a nefarious actor and not from the person that it was purported to be from. I think that's just a very clear dividing line. I think that's what the FCC is recognizing with this decision.

Dave Bittner: Yeah, it's just fascinating to me. I mean, I think about like AI versus a mimic, you know, someone who's really good at doing impersonations of someone. And I'm also thinking about what if you got a call from, let's say, either a mimic or AI, but they don't identify themselves as being the person who they are clearly mimicking.

Ben Yelin: Where it's like, hi, I'm a --

Dave Bittner: It's the measure of proof here. You know who I am, you recognize my voice, you know, and here's what I have to share with you today.

Ben Yelin: But it's not explicitly Joe Biden. You know, he's not saying at the beginning of the call, hi, this is Joe Biden telling you not to vote. I think there might still be a cause of action under the Telephone Consumer Protection Act, because the text of that act restricts telemarketing calls with pre-recorded messages that might mislead the recipient, and political calls count as telemarketing calls. And it requires telemarketers to obtain prior express written consent from consumers before they're even allowed to robocall them. So I think what this decision does is just bring AI voices, whether they are properly identified or not, under that same rubric. So in that sense, I think even if this person, the nefarious actor making the call, doesn't explicitly identify themselves as Joe Biden or whoever the AI voice is trying to mimic, I still think you could have a cause of action here.

Dave Bittner: You know, you point out that it's notable that we had a unanimous vote from the FCC, and I think it is. That leads me to one of my personal pet peeves here, which is the degree to which politicians tend to carve themselves out of any restrictions when it comes to things like robocalling and do not call lists.

Ben Yelin: Right.

Dave Bittner: Most people in our audience probably share my annoyance. So I guess it's noteworthy that politicians are included in this at all, because it is so routine for them to carve themselves out of anything that restricts them.

Ben Yelin: So I agree in a sense, but I think it's worth noting that the statute that they're basing this decision on refers to calls that use an artificial voice. It's still entirely legal for some company to call you with a recorded message that isn't an artificially created voice; they're still robocalling, it's still a recorded message, but it's like, hey, this is Bob from the car dealership and we're having a 50% sale on whatever. Like that could be a robocall and it would still be legal, just as a politician could record a robocall saying, oh, this is President Bill Clinton, you know, reaching out to you and telling you to vote for Joe Biden. That's still legal. It just can't be artificially created. So I sympathize with your overall point, because we've talked about many times that politicians have exempted themselves from some of these statutes. But here, there's that clear dividing line between artificially created and just a simple robocall that's a recording of somebody's real voice. What this decision clarifies is that voices created by artificial intelligence count as artificial for the purpose of this law. I think that was a very clear decision to make. I think it makes intuitive sense. And I'm just glad that the FCC went through with it.

Dave Bittner: Yeah, it really seems like they're trying to nip this in the bud. I mean, you talk about that call with Biden in New Hampshire, and they've been aggressive in going after the company that allegedly did this. It's a company out of Texas, I believe. And, you know, they're having at it with them.

Ben Yelin: Yeah, and they have a lot of tools at their disposal. They have the authority under the statute to bring civil actions and require violators to pay fines. This statute also empowers state attorneys general to bring causes of action in state courts, so a case can be brought by a state attorney general based on a violation of federal law. And now we have this clarifying statement saying that AI-generated robocalls would violate federal law under this FCC precedent. So I think it does give a lot of power to state attorneys general. 36 attorneys general, which by definition, I believe, includes members of both political parties, already wrote to the FCC saying that they wanted them to make this interpretation so that they could go after some of these bad actors. And the AGs got their wish here; the FCC did follow through on it.

Dave Bittner: Well, I mean, let's pause and note a little bit of bipartisan agreement here in an otherwise divided world, right?

Ben Yelin: Yeah, it seems like something we should all be able to agree on. You know, don't mislead people with fake voices. It seems like a no brainer.

Dave Bittner: It's a low bar, but we'll take it.

Ben Yelin: It's a low bar.

Dave Bittner: We'll take the win, Ben.

Ben Yelin: I will take the W. You have to embrace even something that seems like an intuitive, common sense measure. It's not always clear that Congress critters in Washington or our federal agencies are going to follow that common sense. So, yeah, I'm taking the W.

Dave Bittner: What a world. What a world. All right. We will have a link to that story in the show notes. My story this week comes from the folks over at Lawfare. This is an article written by Alexandra Mushka and Alan Charles Raul. This is what I would categorize as a long read. And we will do our best to present it concisely here.

Ben Yelin: As the kids say, TL;DR.

Dave Bittner: That's right. And this is a write-up about how the U.S. is planning to lead the way on global AI policy. This is really interesting to me, Ben. I guess the thesis here is that the U.S. has signaled its intention to lead international efforts in regulating AI. And this is a bit of a shift from its historical attitude towards data privacy and regulation, which, certainly compared to the E.U. when you look at things like GDPR, has been kind of a more wait-and-see approach. But according to this article, the U.S. is being more assertive in its role in shaping global AI governance. I have to say, Ben, this article surprised me a little bit for that very reason, that with the inactivity we see in Congress, and, as you and I have talked about many, many times, what seems to be no real movement towards any sort of federal data privacy regulation, it's a little surprising to me that this would be an area where we'd be seeing attempts at global leadership from the administration. What do you make of this?

Ben Yelin: I'm not sure I'd buy it, to be honest. I hate to be a cynic here.

Dave Bittner: Yeah, that was my reaction as well.

Ben Yelin: I guess what this piece has going for it is it cites the Biden administration's rather aggressive executive orders to regulate AI. Some of those are non-enforceable. They are guidelines for private industry. Private industry has adopted many of these guidelines, which is great, but they're not really subject to enforcement. Some of them are really dependent on the administration enforcing them. There was a draft policy for AI in government, which regulates the use of AI within federal agencies. So that is binding; agencies are required to follow that guidance. But we could have an election result in November, and on January 20, 2025, that regulation is wiped off the books. So I'll start with that. The fact that we don't have a federal statute in this space makes it hard for me to believe that the U.S. is really going to be a leader on this, especially because the European Union is in the process right now of instituting regulations on AI. And the other reason I think it might be hard for the U.S. to lead is just the structure of our system of government. We have 50 separate states with 50 different perspectives on how to regulate AI. In some cases, like with data privacy, one state takes the lead and everybody adapts their data privacy practices to follow that state. We're kind of in the infancy of that process, and so I just don't know exactly what that's going to look like, whether one state passes regulations that are so strong that they end up dominating the industry, or whether it becomes more of a patchwork the way it is with some elements of data privacy. So I'm just kind of skeptical of the whole premise, but maybe I'll be pleasantly surprised.

Dave Bittner: Yeah, I share your skepticism, and I read this article with great interest just to kind of -- because I guess it challenges my preconceptions, which are evidence-based.

Ben Yelin: Sure, yeah. Your preconceptions are evidence-based, but everybody else -- no, I'm just kidding.

Dave Bittner: Yeah, thank you, Ben. Yes, all right. You sit up on my shoulder there and remind me when I'm letting my biases peek through.

Ben Yelin: Exactly.

Dave Bittner: One of the things that they highlight here is that, in contrast to the E.U. having what they describe as a comprehensive AI Act, the U.S. is coming at this on a sector-by-sector basis. And they make the point that this could allow the U.S. to be more nimble in adapting to AI, which the U.S., of course, has been on the bleeding edge of. And they say this could allow the U.S. to focus on risk assessment and mitigation across different sectors and do so with a bit more flexibility. Do you think there's anything to that?

Ben Yelin: I do think there's something to that, but I also worry about kind of the whack-a-mole effect, where if we're going sector-by-sector, once we solve a problem that's specific to a single sector, then a new problem emerges, and we don't have any overarching statute the way Europe has that would resolve issues no matter the sector in which the technology is being used. So I think that really goes both ways. It's nice to be nimble and have that flexibility, but there is some value in guidelines that apply regardless of which sector is using the technology.

Dave Bittner: Yeah. Another thing that caught my eye here is just kind of the simple fact that so many of the global leaders in technology are U.S.-based, and so it seems to me like it's almost a point of pride that the U.S. wants to take a leadership role in this. It's almost more aspirational than practical. Does that track?

Ben Yelin: Yeah, I think that does track. I think the fact that a lot of this optimism is coming from executive orders is telling to me because, as I said, those are administration-specific and can be reversed. It's not always an easy process to reverse them, but you can reverse them without any intervening act of a legislature, which is just not the case if you were to pass a comprehensive AI statute the way the European Union is. So just by definition, you have less stability and less certainty about future outcomes and regulation.

Dave Bittner: Yeah, I guess a lot of this also comes from what we've learned having been through GDPR and how, in many ways, GDPR became the global standard through the restrictions it imposes and the way it affects global companies. They really had no choice but to follow GDPR, and the easiest way to go about that was to adopt those policies globally. So you could see the advantage of being the leader here, of having your ideas rather than someone else's become the global standard.

Ben Yelin: I think there's a ton of value to that idea, particularly because these companies are located here, which might give them more of a direct impact on the development of the policy. So I think that aspiration is good. I think the goal expressed in this article of having the U.S. take on a leadership role in AI regulation is laudable. I hope that as a government we follow through on that. I just think it's constrained by the types of institutions that we have in a way that GDPR is not. And I know I'm one of those people who always eschews the interesting technological issues by talking about the minutiae of how government processes work, but I think that is important context here: you still don't have Congress enacting a broad statute regulating AI policy. It's a patchwork. It's sector-specific. It's dependent on executive orders that don't always have proper enforcement authority. So it leads to, I think, some proper skepticism that we really will take the lead here. But maybe once again I'll be proven wrong.

Dave Bittner: Yeah, I think it's interesting that you and I seem to be of like mind in our skepticism here despite everything that this article lays out.

Ben Yelin: Yeah, I think you and I are natural skeptics and we've become quite cynical. Our cynicism feeds off one another.

Dave Bittner: Right, the weight of the world has crushed our spirit.

Ben Yelin: Yeah, and also we've had so many discussions about this, and people ask, like, when are we going to get a federal data privacy law? Like, we've been there. We've been close to enacting a federal data privacy law, and then you and I meet the next week to record our podcast and it still hasn't happened. So yeah, I mean, this is going to be a Muppets reference. Who are the old guys in the theater who are constantly complaining?

Dave Bittner: Oh, Statler and Waldorf. Yes, that's us.

Ben Yelin: Yeah, that's basically us.

Dave Bittner: Okay, very good. I'll accept that. All right. Well, again, the article is titled "The U.S. Plans to Lead the Way on Global AI Policy." That is over on Lawfare and we'll have a link to that in the show notes. It's a thoughtful article, well worth your time. [ Music ] All right, Ben, I recently had the pleasure of speaking with Harvey Jang. He is vice president, deputy general counsel, and chief privacy officer from Cisco. And we are sharing some of his privacy concerns about generative AI and some of the challenges that businesses face when it comes to trust. Here's my conversation with Harvey Jang. [ Music ]

Harvey Jang: Yeah, so this actually started with the Privacy Benchmark Study. I think we're in our seventh year of doing it. We really wanted to come up with some thought leadership in the privacy space and really validate some of the thinking that we believed to be true as privacy professionals. We knew that privacy was much more than a check-the-box compliance exercise. We knew that it needed to be treated as a fundamental human right and a business imperative. But it's not what you know, it's what you can prove. And so we wanted to go out there and test the market, see how others are feeling about privacy. One of the key issues over the years is whether companies are deriving business value from privacy programs. That's something we wanted to probe into, just to try to understand how other companies, our peer companies, and even competitors are feeling about privacy. And this was actually an anonymous survey, in the sense that we don't know who responded and they didn't know that the survey was coming from Cisco. We surveyed, I think, a couple thousand people around the world, hitting about 12 different geographies.

Dave Bittner: Well, let's dig into some of the details here. What way are the winds blowing when it comes to organizations and how they deal with privacy?

Harvey Jang: Yeah, it was interesting to see. I think one of the things that stood out over the years is that people actually like privacy laws.

Dave Bittner: Imagine that, right?

Harvey Jang: Yeah. On the one hand, with GDPR being finalized in 2016 and enforcement coming down the pike in 2018, that really put privacy on the map. And I would say every privacy law since 2016 has used GDPR as a reference, sometimes copying it, sometimes deviating from it deliberately, but at least looking at it. And it's surprising, right, that companies and organizations actually like privacy laws. I think in some sense, maybe it's setting a consistent baseline around the world as more countries adopt interoperable privacy legislation that is more similar than different, which is good. And yeah, I was surprised by that number.

Dave Bittner: Yeah. I mean, are you finding organizations are being more proactive or reactive these days?

Harvey Jang: I think it definitely has shifted to companies needing to be more proactive, right? I think also these laws are really calling for a risk-based approach to privacy, but it's not always clear what compliance means. And so there has to be some proactive analysis and understanding of the risk climate, and evaluating what can and should be done to prevent a crisis.

Dave Bittner: You know, obviously the hot topic these days is generative AI, and the report digs into some details there. What's the current understanding of how organizations are dealing with this new reality?

Harvey Jang: Yeah. Actually, Cisco published a different report where they surveyed over 8,000 people around the world. It's called the AI Readiness Index. And I think the numbers were shocking, right? Like 97% feel that it is a business priority to embed and use AI in their organization and take advantage of this new technology. But on the flip side, only 14% of the respondents said that they're ready to embrace this. Similarly, I think in our benchmark study, we're seeing that people are excited about AI and the promise and the opportunities there. But there is a bit of reticence and caution as they embrace this new technology.

Dave Bittner: Yeah, I noticed in the report that just over 25% of organizations, in fact, it was 27%, have actually banned the use of generative AI. And it seems like they're really concerned about privacy risks.

Harvey Jang: Yeah, so this study was a survey that was conducted in the summer of 2023, just in the wake of ChatGPT and other LLMs launching into the wild. We saw the Italian regulators come right out of the box and say, you know what, you can't use this. We're going to ban it in Italy because we don't think it's compliant with GDPR and our privacy requirements. I think a ruling or notice went out this week, again, reiterating that, and so I think OpenAI has 30 days to respond and demonstrate that the data that was collected to feed and train their models complies with GDPR and privacy laws. And so, yeah, right out of the box, there was a mix of fear of this technology and people not fully understanding its limits as they were jumping into it. And there were some horror stories that hit the press. So I think the media did do a good job of raising awareness that when you put things into these public tools, you are feeding the beast. You are putting your information in, and some companies, kind of notoriously, had people inadvertently put source code and highly confidential business information into these tools, not realizing that it would become part of the training set and the models. Now, things have changed and evolved. And of course, for a fee, you can pay for a private version that will allow you to benefit from the large language models but not contribute to them.

Dave Bittner: It strikes me as being irresistible. I mean, the business case, the amount of time that folks can save making use of these tools. I can't help wondering if it's just ripe for shadow IT.

Harvey Jang: Yeah. And so that's always been a problem, right? Whenever there's new technology out there, the democratization of technology has really made it extremely challenging for IT departments to contain their environment. And so you're kind of at this crossroads, in this quandary, right? You set up a safe environment because people are going to play with it anyway. At Cisco, we decided to set up a safe environment. So we have enterprise versions of these popular tools, Copilots and various chat tools, so our employee population has a safe place to play and to innovate using these tools. We don't allow it for highly confidential or restricted content, but you can put some information in there to help you write copy a little bit better before something gets published, and there are various other use cases for these new technologies.

Dave Bittner: One of the other elements that caught my eye in the report was this gap between businesses' privacy priorities and what consumers are expecting from those businesses. So there seems to be a little bit of a gap there.

Harvey Jang: Yeah. I think it's really just a matter of perspective, or which angle you're coming from, right? For companies, compliance is going to be important, right? You have to comply with the laws where you choose to do business. And on privacy laws, as I was saying, there are over 160 countries with omnibus privacy legislation and exponentially more sectoral laws covering how to deal with personal data. And so compliance is going to be important to a company, whereas for a consumer, the most important thing might be transparency and explainability, which also happens to be one of the top compliance requirements when you're dealing with AI and privacy, really honing in on those principles of transparency, fairness, and accountability. So I don't think there's too much of a disconnect. I mean, the top issues were similar, but I think it did really highlight the need for better explainability and transparency when you're using AI tools. And I think that's what's going to help build trust. Across the board, consumers want to see a human in the loop. They want to see a responsible AI framework set up at the company, with bias audits and making sure that the AI is operating as designed. And we're seeing these things getting embodied in new draft legislation, especially the E.U. AI Act, and various other pronouncements and principles on the legislative front with respect to AI that are really calling for these same principles.

Dave Bittner: I'm curious, you know, from your perspective, the position of a chief privacy officer at an organization with the scale, the scope, and the reach of a company like Cisco, what are you empowered to do and what do you consider your charge to be?

Harvey Jang: Yeah, so for privacy, my team oversees, and I guess is responsible for, setting the strategy, looking at both opportunity and risk, and compliance related to personal data. And so we set up our program to have a three-part mission, where compliance, of course, is there. You've got to comply with the laws where you choose to operate. So that's the first pillar of our program. But I think the bigger one, the more consuming one, is the market access piece. We have to build and design our products and services to be trustworthy, right? And we have to make them with the features and functionality so our customers can comply with the law. And that's next-level challenging, right? When we're looking at our own compliance, we get to decide what's compliant and what's enough to meet the requirement. It's our interpretation of the legal framework. When it's a customer, and you have thousands, tens of thousands of customers, all from different cultures, all with their own perspective on what they need to do to comply, because the law doesn't tell you exactly what it means to be transparent, the customer decides how much information they need from you before they trust you. And there's wide variance when you have customers ranging from the general public, a student using WebEx, for example, to a CIO with an electrical engineering degree using our Cat9K. And we have to be able to explain our products and services to that wide range of customer base. So that piece is probably the bigger one, a little bit more time-consuming and challenging. And then our third pillar is around differentiation, where these research reports that we do, the surveys that we run, and engagement externally with regulators and standards bodies really drive what privacy should be in the industry and as a regulatory framework as well. And so I think the charter started with that, and then it started to bleed into AI. Actually, we look at privacy as a foundation for AI and responsible AI, just as we use security as a foundation for privacy. We built our privacy program on top of what we were already doing for security, and what we built in the framework for privacy, in terms of privacy impact assessments and looking at how we're handling personal data, we used those frameworks and models and tools and overlaid the responsible AI work on top of them. So it is a foundational piece. AI in this world is not new to privacy. It's been in there at least since 1995; the E.U. directive talked about automated decision-making that has material or legal impacts on the individual, and that's also embodied in GDPR. So naturally, privacy took the lead when this first came out. But as we're looking at the risk profile with respect to AI, it goes far beyond privacy. There could even be an existential risk at stake if things go horribly wrong with the use of AI. And so there's a lot of opportunity, and a lot of ambiguity around copyright and intellectual property: do you even have the right to have trained the model with data that was scraped or pulled from the web, and all those questions that are unanswered? And the risk cases and use cases of AI are different. And so we had to expand. Privacy is a critical stakeholder. We're in it. We're involved in our responsible AI committee and setting these things up. But another group is taking charge of looking after responsible AI and responsible innovation overall.

Dave Bittner: My perception is that it seems like folks can be of one of two minds when it comes to a lot of this. One is kind of a feeling of resignation. You know, you're faced with that EULA before you agree to use someone's product. And there's no way you're going to be able to read it. And you just kind of sigh and you click OK. And away you go.

Harvey Jang: Right.

Dave Bittner: But then on the other hand, you've got people, and I would put you in this category, who are out there kind of trying to fight the good fight to make sure that the regulation we have is useful and good and actionable.

Harvey Jang: Right. Right. And that's where I think some of the legislation has been shifting more towards accountability. Right? Even if no one reads your EULA, or they're just clicking Accept and going along with it, you should still do the right thing with the data. Right? Be careful to protect it and respect it. And that's how trust is built. Right? When people know that you handle their data appropriately, you're not doing bad things with it, you're doing what you say you're going to do and you're just delivering the product or service they asked for, then it's a good relationship. You know, things go awry when they're surprised. I think people only like surprises on birthdays and Christmas. Right? They don't like surprises on new uses of data, unless it directly inures to their benefit, and even then they want to know. And that's what we're seeing in the consumer study as well, that this transparency piece is paramount. People want to know what's happening with their data. [ Music ]

Dave Bittner: Ben, an interesting conversation, and certainly with someone from one of the companies that has a lot at stake here when it comes to privacy.

Ben Yelin: Yeah, I think there are so many issues that come with generative AI, things like bias. I know I've talked about it in an academic setting, and we've talked about intellectual property, but I think concerns around privacy, that's probably the most acute issue and the most unresolved issue. So I'm just really glad to hear an enlightening conversation on it.

Dave Bittner: Yeah. All right. Our thanks to Harvey Jang from Cisco for joining us. We do appreciate him taking the time. [ Music ] That is our show. We want to thank all of you for listening. A quick reminder that N2K Strategic Workforce Intelligence optimizes the value of your biggest investment, your people. We make you smarter about your team while making your team smarter. Learn more at n2k.com. Our executive producer is Jennifer Eiben. This show is edited by Trey Hester. Our executive editor is Peter Kilpe. I'm Dave Bittner.

Ben Yelin: And I'm Ben Yelin.

Dave Bittner: Thanks for listening. [ Music ]