AI all around.
Dave Brumley: When I think of high tech businesses, I don't think of the federal government as really their first customer. The federal government is trying to say that, if you're going to supply things to federal workers, you have to have had these checks in place. So it may work. It may also backfire.
Dave Bittner: Hello, everyone, and welcome to "Caveat," the CyberWire's privacy, surveillance, law and policy podcast. I'm Dave Bittner. And joining me as always is my cohost, Ben Yelin, from the University of Maryland Center for Health and Homeland Security. Hi, Ben.
Ben Yelin: Hello, Dave.
Dave Bittner: Today, Ben reviews the outlines of a new Executive Order on AI. I've got one organization's attempts to look at data provenance in AI. And later in the show we've got David Brumley. He's a cybersecurity professor at Carnegie Mellon and CEO of the software security firm ForAllSecure. He's offering his take on the new Executive Order on AI. While this show covers legal topics and Ben is the lawyer, the views expressed do not constitute legal advice. For official legal advice on any of the topics we cover, please contact your attorney. All right. Ben, it's been a busy week. But I think it's fair to say that the President's release of some info on this new AI Executive Order has captured some of the headlines here. You want to lay that out for us?
Ben Yelin: Yeah. It's a huge deal. So we just got the outlines of the Biden administration's new Executive Order on artificial intelligence. I haven't seen the actual text of it. But we got the outline, and then I think some media sources were able to see the text. So we have a good sense of what's in this new Executive Order. First, I'll note it is sweeping. It covers a lot of different policy areas related to AI, basically, all of the areas in which the President can act unilaterally without the support of Congress. It covers everything from data privacy to watermarking to product safety. So it really runs the gamut of issues that might arise in a world of AI. So I'm going to get into some of the nitty gritty here and talk about the details of this. And certainly some aspects of this order really surprised me and might potentially cause some issues --
Dave Bittner: Okay.
Ben Yelin: -- in terms of the development of the technology. So the big one that really shocked me is that this order would require companies building the most advanced AI systems to red team -- that is, perform safety tests -- and notify the government of the results of those tests before rolling out their products. The authority they are using to require this red teaming is the Defense Production Act. You might remember we had a lot of conversations about the Defense Production Act during the COVID era. It's a Korean War era law where the federal government can compel private industry to produce stuff that we need for national security purposes.
Dave Bittner: Okay. Go on.
Ben Yelin: Yeah. You can kind of get where I'm going here.
Dave Bittner: Okay.
Ben Yelin: So I guess the justification is that, because these AI tools have the potential to affect national security, the government is invoking this power to give itself basically reviewing authority or even a veto power over the most advanced AI systems that are produced in the private sector. So that's going to be, first of all, a big problem for the industry. They are not going to like that. The industry moves very quickly. It all depends on how the federal government, probably the Commerce Department, would define an advanced AI system, the type that would need to be subject to this type of regulation. But it could really potentially stifle innovation in the AI field. I've seen estimates that this could set us back 50 years if we're just slowing the evolution of AI because companies have to go through this intense regulatory process. I also think, you know, and maybe I'm out on a limb on this, that you kind of have to reserve the Defense Production Act for genuine emergencies. It's kind of a break-the-glass type of statute.
Dave Bittner: It's a wartime thing, right, or it's supposed to be?
Ben Yelin: It is. Yeah. Now, I supported it during COVID because we had this issue where we needed certain items to be produced. First, it was personal protective equipment. And then later, when we were rolling out the vaccine, it was whatever you need to produce vaccines -- so vials, syringes, etc.
Dave Bittner: Right.
Ben Yelin: That I think was a wise use of the Defense Production Act. There wasn't enough economic incentive for manufacturers to produce the stuff we needed to survive as a society. I mean, this was a terrible once-in-a-century pandemic.
Dave Bittner: Yeah.
Ben Yelin: Using the Defense Production Act as a mechanism to review advanced AI systems just seems like to me, and I want your opinion on this, to be an abuse of this statute. And I think it kind of cuts against the credibility of this entire enterprise. And I'm curious to see if the Biden administration is going to hold fast to it. That portion of the Executive Order, when it is enforced, I think will be the subject of litigation. And I think there's going to be a lot of pushback from industry on that requirement.
Dave Bittner: Yeah. Does the spirit of this, you know, come from -- I'm thinking of, you know, World War II and Rosie the Riveter and, you know, converting automobile assembly lines to make airplanes and bombers and things like that. I mean, it seems to me that's the spirit of this law.
Ben Yelin: It absolutely is the spirit of this law. I mean, that's why it was created so that we wouldn't have to ask, as we did in World War II, these companies to suspend their traditional manufacturing to foster our war effort. But we could compel them to do so under this 1950 statute.
Dave Bittner: Yeah.
Ben Yelin: And I just think, when you have a statute that compels private industry to perform certain activities that they wouldn't perform otherwise, then you're really inserting yourself into the marketplace in a way that I think could potentially be undue. I also think there are going to be times when the government is going to need to compel private companies to do things to address an emergency or a national security crisis. And I think we risk undermining those uses by invoking this power for something like advanced AI systems. Even though I certainly recognize the danger of advanced AI, particularly when we're talking about weapons of war or critical infrastructure, and I understand the consequences, I just think this statute is a step too far, in my opinion.
Dave Bittner: Yeah. Do you suppose that this is a practical move in that we have a paralyzed Congress who can't get anything done? So that leads the White House to look through the list of things that they can do without Congress's permission or blessing or whatever? And so this is on that list. And, while it's a stretch, it's the only way to get things done.
Ben Yelin: Yeah. I mean, it might also be a wake-up call to Congress to say, Fine. If you don't like this Executive Order, why don't you take some action on artificial intelligence? Congress has not only failed on AI -- they haven't done anything on it thus far. And sure, we're sort of in the infancy of at least generative AI as a technology.
Dave Bittner: Yeah.
Ben Yelin: But they have not done anything yet. Nor have they acted on data privacy. And those are areas where you really do need a congressional statute. So it could be that the Biden administration is saying, We are going to invoke the Defense Production Act to put a lot of pressure on these big tech companies. If you don't like it, big tech companies, why don't you lobby Congress to come up with what you think is a more sensible regulation?
Dave Bittner: Right. Please call our bluff.
Ben Yelin: Exactly.
Dave Bittner: Yeah.
Ben Yelin: So it is -- to mix metaphors -- I guess it's not mixing metaphors. I'll continue with the poker metaphor. It's really going all in to try and --
Dave Bittner: Okay.
Ben Yelin: -- spook the opposition to take a drastic action.
Dave Bittner: Yeah.
Ben Yelin: I'll also note there are a lot of good things in this Executive Order, I think a lot of good common sense measures. The order harnesses federal purchasing power, directing the government to use risk management practices when government agencies themselves use AI. And the government leveraging their own purchasing power is a way that they can really change the industry because the government is a big purchaser. They've done that in a lot of different spheres, including healthcare.
Dave Bittner: Right.
Ben Yelin: The order directs the government to develop standards for companies to label AI-generated content, which we refer to as watermarking. So it tasks various federal agencies with coming up with rules on which types of images and videos need to be watermarked and what the enforcement mechanism is. So there are a lot of different elements to it. There are some data privacy elements. I think we'll kind of get more and more into it as we read the text and understand how these provisions are going to be implemented. I was just kind of struck by the Defense Production Act provision as something that I think really made this Executive Order stand out.
Dave Bittner: Yeah. Do you suppose that that will stand, or will there be negotiations going forward with that?
Ben Yelin: I think it's going to be subject to litigation. I think that big tech companies are going to have a big problem with it. I also think there are a lot of center right libertarian activists who think that this is federal government overreaching here and stifling innovation. And so I think there's going to be backlash, perhaps from members of Congress, as well as the industry.
Dave Bittner: Yeah.
Ben Yelin: So I -- I'm not certain that we're ever going to get to a scenario where this Executive Order is being enforced in that manner.
Dave Bittner: Yeah. I think there will be a lot of pushback before that actually happens, whether it's litigation or whether it's public pressure to change these provisions. You mentioned someone or some source saying that this could set us back 50 years. That seems a little breathless to me.
Ben Yelin: Totally breathless. Yeah. I think they're using hyperbole --
Dave Bittner: Yeah.
Ben Yelin: -- to make a political point.
Dave Bittner: Yeah.
Ben Yelin: But I think the thinking is that, if you put emerging AI technology through this bureaucratic process, you are unduly stifling the type of innovation that's going to generate these incredible AI tools. The whole point here is to reap the benefits of AI technology, of which there are many -- we are already benefiting from some of them -- while mitigating the risks. So I think, when you put so much emphasis on a process to mitigate those risks, you do run the risk of mitigating the benefits as well. And having an incentive for these companies to worry about passing this sort of government inspection I think could have a chilling effect on what they're willing to produce and how adventurous they're willing to be in developing AI technology. Now, the details matter --
Dave Bittner: Yeah.
Ben Yelin: -- because only the most advanced AI tools are going to be subjected to this procedure, and we don't know exactly what those are going to be. But certainly the threat is there to stifle innovation.
Dave Bittner: Can I play devil's advocate?
Ben Yelin: Absolutely.
Dave Bittner: Not that the devil needs an advocate. But isn't this kind of like saying, what do the pharmaceutical companies need that pesky FDA for? All that oversight is stifling innovation. We could have so many more drugs available that would do wonderful things for society if only we didn't slow down the pace to a crawl with all of that pesky testing.
Ben Yelin: Two things. First, there are a lot of people who say that.
Dave Bittner: Yeah.
Ben Yelin: I don't agree with them because I think it's important to make sure that our medicines are safe and effective before we use them.
Dave Bittner: Right.
Ben Yelin: The big difference here is the use of this Defense Production Act process. It's one thing to empower a federal agency to have some type of veto authority over the government's use of AI or government agency uses of AI. It is an additional measure to extend that authority to the private sector. But to do so through an Executive Order without any express authorization from Congress, and to do so using a wartime statute that's been used sparingly throughout its 70-year history, I just think is a step too far, and it might undermine more sensible efforts for the federal government to try and step in and regulate AI-generated content.
Dave Bittner: So, to be clear here, what you're saying is you're not necessarily against some sort of oversight; it's the method they're using, with the Defense Production Act, that really leaves you wondering what's going on here.
Ben Yelin: Yes. I think the lack of clarity as to what counts as an advanced AI system, the use of this wartime statute, combined with the lack of input from Congress certainly raises my eyebrows and makes me think that there's going to be a lot of pushback to this. So I think it's the combination of those factors. I do think we need to have sensible regulation of AI. I think there's absolutely a role for federal agencies to play in making sure that we don't lose control of these AI tools before they consume us and foster all different types of bad results, including discriminatory results, messing with our elections, kind of everything you can think of.
Dave Bittner: Yeah.
Ben Yelin: But I just think we have to do it in a smart, sensible way that doesn't alienate the industry and does its best to not hinder what's been a remarkable evolution of these AI tools. So that's my two cents on it. Certainly a lot of room to disagree on that. I just think the hammer that they're using with this Executive Order, at least to me, seems like it's a little bit too sharp for my tastes.
Dave Bittner: All right. Fair enough. Well, my story this week comes from the Washington Post. This is an article written by Nitasha Tiku, and it's titled "AI researchers uncover ethical, legal risks to using popular datasets." So this really comes down to something we've talked about here, which is data provenance. And there's an organization called the Data Provenance Initiative, which is a group of folks from the machine learning world along with some legal experts. And they're looking at the various datasets that are used to train generative AI -- the fuel these systems use to generate their answers. And they're looking at datasets from places called Hugging Face, which is, I assume, a reference to the movie Alien; GitHub; and Papers With Code, which is part of Facebook AI. And, basically, what they found is that a lot of these, 70% of them, didn't accurately specify what sort of licensing applies to the information that's being loaded into these AI engines. And the problem with that is, as these things get loaded in, they get shuffled around. They get combined, repackaged, resold. The article describes intentional obscuring of information, which they refer to as data laundering.
Ben Yelin: I like that as a term.
Dave Bittner: Yeah. And they're saying that there's a real issue here with documentation -- with knowing what you're putting into these systems and knowing that you have the proper permissions to do so. And, once the data is in there and it gets mixed in -- laundered, blended, if you will -- it's hard to get it out. And it's hard to know to what degree it's actually being used in the models.
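To make that documentation problem concrete, here is a minimal sketch of the kind of license audit the Data Provenance Initiative performs at scale. Everything in it -- the records, field names, and license strings -- is invented for illustration; real dataset cards on hubs like Hugging Face carry richer metadata.

```python
# Hypothetical illustration: flag dataset records whose license metadata is
# missing or ambiguous. The records below are invented stand-ins for dataset
# cards pulled from hubs like Hugging Face or Papers With Code.

AMBIGUOUS = {"", "unknown", "other"}

datasets = [
    {"name": "corpus-a", "license": "cc-by-4.0"},
    {"name": "corpus-b", "license": "unknown"},
    {"name": "corpus-c"},  # no license field at all
]

def audit(records):
    """Return the records whose license is missing or ambiguous."""
    flagged = []
    for rec in records:
        lic = rec.get("license")
        if lic is None or lic.strip().lower() in AMBIGUOUS:
            flagged.append(rec)
    return flagged

for rec in audit(datasets):
    print(f"flag for review: {rec['name']} (license={rec.get('license')})")
```

Run against this toy batch, the audit flags corpus-b and corpus-c -- the kind of missing or misstated licensing the Initiative found on roughly 70% of the datasets it examined.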
Ben Yelin: Yeah. I mean, I think there are a couple of things going on here. One, because of how fast generative AI moves, even if somebody identifies a license when they upload a dataset, they have no idea that that dataset is going to go through the wringer, and it's going to be laundered into a generative AI system and combined with a bunch of different sources.
Dave Bittner: Right.
Ben Yelin: So it's just easy to lose track of that intellectual property. And there's no built-in legal protection at this point for information that's generated by AI. So I think that is certainly a concern. There's another concern that they identified here, which I thought was very interesting. Most widely used datasets have limited representation of spoken languages from the Global South, revealing a lack of diversity in data sources. So a lot of the inputs are coming from countries like ours where we speak English or from other countries in the northern hemisphere. It's not the most diverse set of inputs, which is going to hurt the outputs. It's going to hurt the information that's coming from these generative AIs. So that was something that really stuck out to me: there's this lack of diversity in the data sources. I think the best thing we can do is shed light on the data that's going into generative AI. And that's why the Data Provenance Initiative is so important. You are shedding light, as they say, on a, quote, opaque data ecosystem. And that's going to be critically important when we start to see increasing litigation on this. And we will see increasing litigation. People are going to be suing for copyright violations.
Dave Bittner: Yeah. They already are.
Ben Yelin: They already are. Yeah. And we're going to start to see more high profile cases. And it's going to be very hard to know how to adjudicate these. I mean, you -- can you imagine the discovery process of trying to figure out --
Dave Bittner: Yeah. I mean, I think there's a practical limitation here and an issue, which is that these black boxes are so mysterious in how they do what they do. You talk to a lot of the researchers, and they'll say, We're not exactly sure how it comes up with the answers it comes up with all the time. And that's a problem. And, also, I think, as you and I have talked about here before, I am still skeptical of the notion that training one of these engines on someone's data represents a copyright violation, any more than if I read someone's book. If you wrote a book and I read it, and then I go on to reference it in a conversation, is that a copyright violation? If I use it to inform my opinion on something, is that a copyright violation?
Ben Yelin: Yeah. I mean, I think that's an open question. We talked about this last week in a different context when we were talking about defamation lawsuits.
Dave Bittner: Yeah.
Ben Yelin: Basically, how human is an AI system? And are they actually -- are the outputs coming from generative AI published for the purposes of both intellectual property law and defamation law? And so far the limited consensus that we have seems to be that this is not information that's published. It's used for informational purposes. It's the user who would publish this information somewhere. And once the user publishes it, maybe that's when you would have your intellectual property dispute. But the fact that it's been generated by AI in and of itself doesn't mean that there has been a copyright violation or a proper defamation action, for that matter.
Dave Bittner: Yeah.
Ben Yelin: So I think that's a really interesting dividing line. Of course, that's not a very satisfying answer because you can hurt somebody's reputation if something's wrong with an input, like we talked about last week --
Dave Bittner: Right.
Ben Yelin: -- and it's accusing somebody of committing crimes they haven't committed, even if that's not published, per se, if enough people are seeing that through generative AI, it certainly could have the same effect as if it was published. And I think that's true in terms of copyright violations as well. If enough people are seeing something through generative AI, through ChatGPT, that's somebody else's creative work, unattributed, eventually that kind of oozes into our ecosystem. And the further away it gets from its source, the harder it is to identify the rightful owner of that intellectual property.
Dave Bittner: Yeah. I can't help wondering if copyright law, as it currently stands, is simply inadequate to address the issues that are raised here with generative AI -- that it's just so different from straightforward copyright, publishing, and rights protection and all that sort of thing that there needs to be either a fresh approach or additional provisions added to deal with this. But my sense is just that we can't use the existing laws in a satisfactory way, in a gratifying way, with this stuff. It just seems like we're trying to use the wrong tool. You know, you're trying to use a screwdriver as a hammer. You can probably get that nail in, but it's not going to be efficient or get the job done well.
Ben Yelin: I mean, we could pass new laws. But until last week, we didn't have a speaker of the House of Representatives.
Dave Bittner: True.
Ben Yelin: They're busy fighting about aid packages to Israel and Ukraine. We're going to have to fund the federal government somehow. I'm being somewhat facetious, but it just gets at Congress's inability to address problems as they emerge. You get a lot of proposed legislation. You might get a few committee hearings. We're actually already seeing a lot of committee action on various AI subjects in Congress. But they just don't act. We see that in all spheres of the law, and the prevailing legal authorities end up being sometimes 18th century statutes that we're trying to apply to modern circumstances. So, really, the onus is on Congress to get off its you know what --
Dave Bittner: Yeah.
Ben Yelin: -- and start legislating in more of a timely manner to adjust these laws to account for changing circumstances.
Dave Bittner: Right, right. I'm also wondering, from a practical point of view, if you have this training data loaded into a model and that data has a variety of rights assigned to it, you know, do you say to the model, I want you to generate an answer, but I want that answer to only be generated using stuff that's in the Creative Commons?
Ben Yelin: I wonder if that's going to be an option in generative AI. I mean, to the extent that that exists, we know that with Google image searches, you can filter by images that have Creative Commons licenses. So maybe that's something that we're going to see on generative AI platforms. I think that would be useful. It would be voluntary, because people could still opt out of only viewing content produced from inputs that are in the Creative Commons. But I think that's a really interesting idea, actually.
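As a sketch of how that option might look under the hood, here is a hypothetical filter that restricts a retrieval-style system to passages carrying a Creative Commons license. The corpus, license tags, and allow-list are all invented; this is not a real platform feature.

```python
# Hypothetical sketch: pass only Creative-Commons-licensed passages to the
# model, per the user's preference. Corpus and tags are invented examples.

CC_LICENSES = {"cc0-1.0", "cc-by-4.0", "cc-by-sa-4.0"}

corpus = [
    {"text": "Passage from an openly licensed encyclopedia.", "license": "cc-by-sa-4.0"},
    {"text": "Excerpt from an all-rights-reserved bestseller.", "license": "proprietary"},
    {"text": "Public-domain government report.", "license": "cc0-1.0"},
]

def retrievable(passages, allowed=CC_LICENSES):
    """Keep only passages the user has permitted the model to draw on."""
    return [p for p in passages if p["license"] in allowed]

# Whatever retrieval and generation happen next see only the filtered set.
for p in retrievable(corpus):
    print(p["text"])
```

The catch, as the rest of the conversation suggests, is that a filter like this only works if every passage actually carries an accurate license tag in the first place.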
Dave Bittner: Yeah. What happens if I write a book, okay -- I write the biography of Ben Yelin, right?
Ben Yelin: You'd bore a lot of people with that one, for sure.
Dave Bittner: It's a big hit. It's a page turner. People can't put it down. It rockets to the top of the New York Times bestseller list. And I say to these generative AI systems, you are not allowed to use my book in your system. However, the book is widely reviewed. And some of those reviews have very detailed descriptions of the content in the book. And the people who wrote those reviews are totally happy with the generative AI systems putting their reviews into their systems to use.
Ben Yelin: And when you take an aggregate of the reviews, you basically get a decent summary of the book.
Dave Bittner: So, suddenly, I'm sitting here going into generative AI, and I'm saying, Hey, what do you know about this book that I wrote? And it knows a lot about it.
Ben Yelin: I don't think there's an answer. Yeah. There isn't an answer to that question.
Dave Bittner: Yeah.
Ben Yelin: I don't think there's a solution that we've properly identified. And I think we're going to have to figure it out. I wish we could figure it out through a robust policy debate where we're balancing the need to protect intellectual property versus the need to have effective generative AI that contains as much information as humanly possible. But that would be a debate that we would have in our legislative branch if we had a functioning legislative branch.
Dave Bittner: Right. Which is an adorable idea.
Ben Yelin: It is an adorable idea. What's actually going to happen is we're going to have a bunch of district courts create holdings on a variety of cases that all come from different circumstances, and we're going to have a messy body of law on this. And I just don't think it's going to end up being very satisfying.
Dave Bittner: Yeah. One of the authors of the report here, Sara Hooker, who is a coauthor of the Initiative's report and also the head of the research lab Cohere For AI -- the article closes with a quote from her that says, Dataset creation is typically the least glorified part of the research cycle and deserves to have attribution because it takes so much work. I love this paper because it's grumpy, but it also proposes a solution. We have to start somewhere.
Ben Yelin: Yep. I love that quote. Yeah.
Dave Bittner: Yeah, yeah.
Ben Yelin: Somebody's got to do it. I mean, most people don't produce creative work, so they're just not that interested in intellectual property and copyright law. I get that. But someone's got to do it because we want people to reap the rewards of their creative work, and I'm glad that this project is taking that on.
Dave Bittner: Yeah. All right. Well, we will have a link to this story in the show notes as well. And, of course, we would love to hear from you. If there's something you'd like us to consider for the show, you can email us. It's caveat@n2k.com. Ben, I recently had the pleasure of speaking with David Brumley. He is a cybersecurity professor at Carnegie Mellon and also the CEO of a software security firm called ForAllSecure. Our conversation is about the Executive Order on AI. And I will note that we recorded this the day before the summary came out. So some of our conversation here is a bit speculative. But I still think David Brumley has some really interesting and valuable insights to share. So here's my conversation with David Brumley.
Dave Brumley: Well, in general, the Biden administration has probably been the most active in the digital domain altogether. A few years ago, they had an Executive Order around software security and about the SBOM that was released last year. And so what we're seeing coming up is a new Executive Order that's trying to put some guardrails around how AI is used and trying to create incentives for it to be used responsibly.
Dave Bittner: What sort of things do you anticipate we're going to see in this -- in this EO?
Dave Brumley: Well, the current rumor is this EO is going to focus really on using the buying power of the federal government to try to get people providing AI solutions to disclose things like what they are training on, what safeguards they have in place if there's an error, and just generally what they are doing to prevent attackers from using it. It's an interesting approach. I'm actually not convinced it's going to work, right, because when I think of high tech businesses, I don't think of the federal government as really their first customer. The federal government is trying to say that, if you're going to supply things to federal workers, you have to have had these checks in place. So it may work. It may also backfire.
Dave Bittner: Yeah. That's really an interesting insight. I mean, as you say, it seems that quite often the federal government, with their purchasing power, can have influence here. But you're right. I don't often think of them as being at the tip of the spear, as it were, when it comes to the latest cutting edge technology.
Dave Brumley: They're just not. And when you look at high tech firms, it's never the first or even the second market people go after. Sure, the federal government spends a lot of money on high performance fighter jets. But compared to consumer goods like, you know, the average person buying an iPhone, they're just not a big buyer. And they have legendary, just absolutely legendary complex regulations for even selling to them, right, like regulations that were designed to make sure that you're selling staples and nails at a fair price are being applied to digital goods. And so all these are incentives really not to work with the government, at least until you're big enough that it makes sense. And so what could happen out of this Executive Order, if it's shaping up the way everyone thinks it will be, is that federal workers will be relegated to working with AI developed by the traditional defense contractors instead of leading edge tech firms.
Dave Bittner: Do you suppose this is going to touch on -- on some of the issues we have with our adversaries? I'm thinking of sort of a talent arms race with folks like China?
Dave Brumley: Unfortunately, I don't think so. I think what they're trying to do is they're trying to almost fix the symptom instead of the problem. The symptom, of course, is that we're in an arms race with China. And we want to make sure that we have the best AI and, of course, that China doesn't have the best AI, that we can beat them. And the way that we've always done that in the US is we just leverage superior talent. We've been able, putting my Carnegie Mellon hat on, for decades to recruit the best people from all over the world. I know at CMU, like, the best computer science department in China is Tsinghua, and the top 10 students always came to the US. And, to me, that was a very effective way to make sure that we always stayed on top, right? It's almost like a brain drain, where we're draining brains from other places to here to build the best tech. And so with all of these, you know, we're looking at the symptoms: how do we regulate AI? And how do we make sure our adversaries don't have the most advanced things? You also have to look at this as a highly fluctuating field. It moves really fast. What are we doing to make sure that we retain that thought leadership and attract the best talent into the US to develop this?
Dave Bittner: One thing that is curious to me is, when we talk about things like disclosure with AI and maybe contrast that with an SBOM, you know, a software bill of materials, a lot of what goes on in AI, in my mind, is kind of a black box, where if you ask, you know, what's going on in here, sometimes the answer you'll get back is, you know, we're really not sure. And so how do you reconcile that with the government's desire to know what's going on under the hood?
Dave Brumley: It's really kind of funny that you bring this up because there really is no way to reconcile this. AI fundamentally is a statistic -- a statistic that's being calculated over training sets. Those statistics are used by LLMs like ChatGPT to generate convincing text. But it's just a statistic. And so when you look at millions of documents and calculate really complex statistics, there's really no way to understand how an answer came about. And so I think it's going to be interesting to see what they can do about that. They can talk a little bit about provenance and training. But I think they're going to talk mostly about making sure that there are guardrails so malicious actors can't upload malicious datasets to subvert it, at least in obvious ways. Kind of weirdly, though, this doesn't solve the problem because, with a lot of these machine learning algorithms, the best way to mislead them is to do it slowly. You know, we all hear about botnets being used for social influence during elections. It's that sort of approach, where you have a few bots, and they amplify messages with other accounts, and you finally get regular people to pick up those messages. That can be very dangerous, with these algorithms almost spiraling out of control.
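Brumley's "it's just a statistic" point can be shown in miniature. Here is a toy bigram model -- not anything he or the Executive Order describes, just an illustration -- that counts which word follows which in a training corpus and then samples text from those counts. Real LLMs learn vastly richer statistics, but the principle is the same: the output is a draw from statistics calculated over the training set.

```python
# Toy bigram language model: the "statistics over training sets" idea in
# miniature. The word-pair counts are the model; generation is sampling.

import random
from collections import defaultdict

corpus = "the cat sat on the mat and the cat slept on the rug".split()

# Training: tally how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

# Generation: repeatedly sample the next word in proportion to its count.
word, output = "the", ["the"]
for _ in range(8):
    followers = counts[word]
    if not followers:
        break  # reached a word with no recorded successor
    word = random.choices(list(followers), weights=list(followers.values()))[0]
    output.append(word)

print(" ".join(output))
```

Notice there is nothing to "explain" about any particular output beyond the counts themselves, which is exactly the reconciliation problem Brumley is pointing at.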
Dave Bittner: It's interesting. I mean, I've seen reports in the last week about people talking about how their Amazon Alexas were telling them with great confidence that the 2020 election was stolen, for example. And that kind of misinformation -- when it makes it to that point, to a consumer-facing device, I think there's reason for concern.
Dave Brumley: Absolutely. And what people have to understand is the best way to think about these algorithms is a bit like a slot machine. But, of course, you're the lever that they're pulling, right? These machines get a small reward every time a person interacts with them. And they use that to optimize their algorithms. And so you can really create these big silos out there where someone's interacting, thinking the election was stolen, and there's reinforcement learning going on. The machine learning algorithms keep amplifying that for one population, while another population is getting a completely separate message because, when they're interacting with it, they're getting different weights and measures. So it's a little bit like gambling, right? Like, there's this addiction that goes on. And you keep thinking, the more I pull this lever, the more it reinforces pulling this lever.
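The slot-machine dynamic Brumley describes can also be sketched. Below is a hypothetical epsilon-greedy bandit -- a deliberately crude stand-in for a real recommender -- that earns a reward each time a simulated user engages and, over many interactions, learns to amplify whichever message draws more clicks. The messages and click rates are invented.

```python
# Toy engagement loop: an epsilon-greedy bandit "recommender" that amplifies
# whichever message earns more simulated clicks. All numbers are invented.

import random

click_rate = {"sensational claim": 0.30, "sober correction": 0.05}
pulls = {m: 0 for m in click_rate}
rewards = {m: 0.0 for m in click_rate}

def pick(epsilon=0.1):
    """Mostly exploit the best-performing message, occasionally explore."""
    if random.random() < epsilon or not all(pulls.values()):
        return random.choice(list(click_rate))
    return max(click_rate, key=lambda m: rewards[m] / pulls[m])

for _ in range(10_000):
    m = pick()
    pulls[m] += 1
    rewards[m] += random.random() < click_rate[m]  # 1 if the user "clicks"

print(pulls)  # the higher-engagement message ends up shown far more often
```

Run it and the sensational message ends up served the vast majority of the time -- the reinforcement Brumley describes, with the small per-interaction reward doing all the work.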
Dave Bittner: I'm curious, overall, do you think it's a good thing that this Executive Order is on the horizon here, that the White House is paying attention to this?
Dave Brumley: Well, in general, I'm pretty pro on what they're doing. I think the SBOM has a lot of problems with it. I think their Executive Orders on training have a lot of problems. This one will certainly have problems. But the point is they're doing something. And the United States was never meant to pivot very quickly. It's a slow-moving ship. And so by getting out there and putting a stake in the ground that says we're going to say something about this, I think it's a good thing. And it's pretty bold to do. If I was going to hope for more, it would be to start involving Congress in this right now. What we are seeing is kind of paralysis in the legislative bodies, and so the Executive Branch is taking over some of these things, and that just won't work forever.
Dave Bittner: I'm curious. Can you give us some insights, from your role as a professor at Carnegie Mellon, into how you are making sure that what you're providing your students is the latest thinking, to prepare them to go out into the world and work with things like AI?
Dave Brumley: Oh, I'm really lucky. It's really easy at Carnegie Mellon. Carnegie Mellon has an entire School of Computer Science -- not a department, an entire school. And we have a machine learning department with 40 faculty. We have a computer science department with 50 faculty. We have a department for society and computing with 30 faculty. And so a lot of these breakthroughs in machine learning are powered by research coming out of tier-one universities like Carnegie Mellon and MIT. I think part of our challenge in academia is understanding how these technologies are going to be applied in practice. At some level in education, they're all just tools. We teach people, and we can't really predict how the tools are going to be used. And so that might be a place where we're a little bit blind.
Dave Bittner: You know, something that my cohost, Ben, and I talk about here a lot is how legislation in particular runs at a much slower pace than innovation. And talking about the challenges that you all have as a university, it must be interesting to try to stay nimble at an institution that runs at the scale of a modern university.
Dave Brumley: It's difficult to stay nimble, especially, you know, if you think of a tenured professor -- we get old pretty quickly. The mechanism that we built in to guard against this is, every four years, we get a new undergraduate class of students. Every six years, we graduate PhDs -- it takes about six years at CMU. And so we're always getting in the latest minds to think about these things. And it's really important to recognize that these great advancements we're seeing are not being done by the tenured professors. We're kind of like the coaches and the people who go get the money to run the organizations and the people who write the reports. It's really the students. And that's, I think, the key to staying ahead. I think it's also really important to be getting diversity. And, you know, as I said before, this is something that I'm very passionate about maintaining, where we're getting people from China, where we're getting people from Taiwan, where we're getting people from India, where we're getting people from Africa. Bringing those top minds all together under a university umbrella is really how we make sure we have the best ideas.
Dave Bittner: Ben, what do you think?
Ben Yelin: Oh, the timing couldn't have worked out better.
Dave Bittner: It actually could.
Ben Yelin: We had no idea that, as soon as this interview was recorded, we'd set --
Dave Bittner: Right. It could have been better if he and I had recorded the day after it came out. But we'll --
Ben Yelin: I suppose that's true.
Dave Bittner: But we'll take it. And I think, as I said, I think David really has some interesting insights here.
Ben Yelin: Yeah.
Dave Bittner: Yeah. All right. Our thanks to David for joining us. Again, he is a cybersecurity professor at Carnegie Mellon and CEO of the software security firm ForAllSecure. We appreciate him taking the time. That is our show. We want to thank all of you for listening. N2K strategic workforce intelligence optimizes the value of your biggest investment: your people. We make you smarter about your team while making your team smarter. Learn more at n2k.com. Our senior producer is Jennifer Eiben. The show is edited by Eliot Peltzman. Our executive editor is Peter Kilpe. I'm Dave Bittner.
Ben Yelin: And I'm Ben Yelin.
Dave Bittner: Thanks for listening.