Looking to other countries when regulating AI.
Josh Harguess: The recent news from the EU with their AI Act has definitely put them sort of ahead of us as far as, you know, thinking about actual regulations, penalties, you know, what are going to be the costs of, you know, doing wrong, you know, "with AI".
Dave Bittner: Hello, everyone, and welcome to "Caveat", the CyberWire's privacy, surveillance, law, and policy podcast. I'm Dave Bittner. And joining me is my cohost Ben Yelin from the University of Maryland's Center for Health and Homeland Security. Hey, Ben.
Ben Yelin: Hello, Dave.
Dave Bittner: On today's show, Ben has the story of the NSA purchasing domestic internet data. I've got the story of an interesting pivot from Texas and Florida on their upcoming content moderation case in front of the Supreme Court. And later in the show, my conversation with Josh Harguess. He's the AI security chief for AI security firm Cranium. We're discussing some of the challenges organizations face when trying to build out a roadmap to comply with the EU AI Act. While this show covers legal topics, and Ben is a lawyer, the views expressed do not constitute legal advice. For official legal advice on any of the topics we cover, please contact your attorney. All right. Ben, we've got some interesting stories to share this week. Do you want to kick things off for us here?
Ben Yelin: Yes, so I have one that was pretty widely shared over the internet over the last week and that is about the National Security Agency buying Americans' internet data without warrants. And I'm using an article from the New York Times although this story really appeared in all news sources. So, you'll never guess which United States senator has been inquiring about the NSA purchasing US persons' internet data.
Dave Bittner: Gee, let me guess, Ted Cruz.
Ben Yelin: Yeah. You know, it would actually not be the least shocking thing for Ted Cruz to be concerned about.
Dave Bittner: You know, you're right, actually.
Ben Yelin: If you guessed Oregon Senator Ron Wyden, you are correct. He was bugging the outgoing head of the National Security Agency, Paul Nakasone.
Dave Bittner: Bugging.
Ben Yelin: Yeah. See what I did there.
Dave Bittner: An NSA joke there, Ben. It's good.
Ben Yelin: I'm all good on the dad jokes. So, he was tormenting the agency while their outgoing director was on his way out, and there was a scheduled confirmation hearing for the new director. Wyden, using his power as a United States senator to gum up the works of the place, decided to put a hold on the nomination until he got an answer to this question about whether the NSA is purchasing logs related to US persons' domestic internet activities. And it turns out, through a letter that was revealed over the past couple of weeks, drafted to the Director of National Intelligence with Senator Ron Wyden CC-ed, that the NSA is indeed purchasing such data. Now, the NSA argues that they're only purchasing internet metadata logs, showing when two computers have communicated but not the content of any message. But we know that metadata can be very revealing when you assemble it into a kind of mosaic: which websites people are visiting, which addresses somebody is emailing, who they're interacting with on social media. That can tell you a great deal about a person even if you don't have the content of those conversations. From a legal perspective, for content, at least under circuit court precedent, you need a warrant to access the content of internet traffic, meaning the content of emails, basically, or social media posts.
Dave Bittner: Right. What about metadata?
Ben Yelin: Metadata, you do not. However, and this is critical, there is a standard to obtain it through a judicially approved process. You don't have to have a traditional warrant, but you have to obtain a subpoena under Section 2703(d) of the Stored Communications Act. And the standard for that is reasonable suspicion that the metadata is going to be useful in an ongoing investigation. So, even though there's not a warrant requirement here, the purchasing of data is still an end-around of the otherwise judicially supervised process for obtaining this metadata. I think that's what's very concerning thematically: what the government cannot obtain through a normal judicial process, whether it's a warrant or a subpoena, they have this end-around of just going out and buying the data. And I think Congress is right to be concerned about this. I mean, for one thing, Congress controls the purse strings, so it might be incumbent upon Congress to say, hey, you are not authorized to spend money on purchasing US persons' internet traffic unless there are maybe some type of exigent circumstances or unless you obtain a warrant or a subpoena to do so. I think that's something that Congress will certainly consider now that this information has come to light. I think the disturbing thing is that what once required some investigative work and a judicial process, with oversight from the executive and judicial branches, can now be bypassed just by paying for the data. And Americans are already distrustful of the NSA; we've seen that over the last decade as more and more of their activities have been revealed. We went through the Snowden disclosures and the reauthorizations of the Foreign Intelligence Surveillance Act, all of those things. And so, I think this just adds to the pile of reasons for US persons to be concerned. You know, it's certainly something that stuck out to me when I saw it.
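To make the mosaic point concrete, here is a minimal, purely hypothetical sketch in Python of the kind of netflow-style metadata record being described. The field names and values are invented for illustration; they are not drawn from any real collection program or schema.

```python
# A hypothetical netflow-style metadata record (invented schema). Note
# there is no message content at all, yet a pile of these records still
# reveals who communicated with whom, when, and how often.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class FlowRecord:
    timestamp: datetime   # when the connection occurred
    src_ip: str           # the user's machine
    dst_ip: str           # the server contacted
    dst_port: int         # 443 for HTTPS, 25 for SMTP, etc.
    bytes_sent: int       # traffic volume, but never the payload itself

# Repeated connections to the same destination at telling hours can
# sketch a detailed portrait of a person without any content.
records = [
    FlowRecord(datetime(2024, 1, 15, 2, 4), "203.0.113.7", "198.51.100.23", 443, 1842),
    FlowRecord(datetime(2024, 1, 16, 2, 11), "203.0.113.7", "198.51.100.23", 443, 2090),
]
for r in records:
    print(f"{r.timestamp:%Y-%m-%d %H:%M}  {r.src_ip} -> {r.dst_ip}:{r.dst_port}")
```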
Dave Bittner: One of the things that I believe I saw here, and it's possible I'm mistaken, is that one of the justifications the NSA was using was that our adversaries can get this data the same way. And so, they need to be able to do it to be on even footing.
Ben Yelin: Yeah, I mean, they will say that they do that not just to protect us against terrorism but to protect us against cyberattacks. Obviously, I think that's legitimate. We do have a significant security interest in protecting our homeland and our virtual space from malign foreign actors. But I don't think you can bypass any type of judicial process just because the bad guys are also able to get their hands on the data. I mean, imagine, you know, saying, well, we should just bust into this person's house and see if they have drugs because, look, the bad guy is going to bust into that person's house. You know, the career criminal is going to try and rob somebody and find their stash of weed or whatever. So, I just don't think we should hold ourselves to the standard of, if our adversaries are able to do it, we should do it without a prescribed process. I mean, look, these processes are designed to be frustrating; it shouldn't be easy to obtain data that contains so much personal information. There are ways we can balance the security interests with the need to protect individual privacy, but we actually have to go through that balancing process. And I think it's the judicial branch, or frankly administrative law judges in the executive branch if it comes to that, who should be looking over the facts and making a determination of whether it is in our national security interest to purchase this data, not having this type of purchasing go on off the books, which is what seems to be happening right now.
Dave Bittner: Remind us of the difference in the burden between a warrant and a subpoena.
Ben Yelin: So, to obtain a warrant, you need to show probable cause that either a crime has been committed or is in the process of being committed. So, it's just a much higher standard. You can think of probable cause as being, say, 75% sure that this is going to be relevant to a criminal investigation. The standard for a subpoena is much lower. It is reasonable suspicion. In some contexts, it's a reasonable, articulable suspicion, meaning you can't just have, like, a vague sense that somebody is committing some type of criminal activity.
Dave Bittner: I don't like the looks of that guy.
Ben Yelin: Exactly. It has to be like, well, I have some indicia of suspicion somewhere. We got a tip from Mr. So-and-so, so let's purchase this internet traffic and see what we can find. That is the standard. But even that is still better than, you know what, I don't have any suspicion at all, let's just go to these companies, fork over some American tax dollars, and obtain this data ourselves and comb through it. You know, once you have all of that data in a database, there is no warrant requirement or subpoena requirement to search that data. So, I think that's what makes it particularly dangerous. This is all happening against the backdrop of some FTC enforcement actions against private companies who have abused their data collection practices. The FTC has cracked down on these private companies, and, you know, the government has been caught red-handed. Not just the NSA, but other federal government agencies and state and local law enforcement agencies have been caught purchasing this data. So, this is really an extension of a broader problem here.
Dave Bittner: Now, obviously with a warrant we're talking about oversight from a judge. Is it the same with a subpoena? Because I have a vague recollection of, like, police being able to self-issue subpoenas. Is that a thing?
Ben Yelin: Yes, it is a thing in some jurisdictions. But there is still a process; it's not the type of arbitrary, we're-just-going-to-do-it-without-any-oversight situation. There's still a level of review, whether it's in the agency itself or from a judge, to obtain that subpoena. So, it doesn't require the same type of judicial oversight as probable cause, but it's something. It's some type of standard.
Dave Bittner: So, the reason that the NSA would be doing this is one of convenience, and maybe velocity? It's just quicker?
Ben Yelin: I think it's both of those things. I think it's quicker because you don't have to have any indicia of reasonable suspicion, which means you don't have to go through the whole documentation process, which is probably very onerous. And then, you're just buying a giant haystack's worth of data. So, just like other mass surveillance programs, you purchase the whole haystack so that you can find the needle in the haystack. You have all of that at NSA headquarters, searchable by intelligence community analysts, and you don't have to constantly go back and forth with, in this case, the data brokers themselves who are selling the data or the company that collected the data in the first place. It's kind of streamlining the process so that you can do everything in-house. But, you know, that's great for convenience; I just think it's not good for protecting personal privacy. I will say there's nothing illegal about what the NSA is doing right now. There is no law against purchasing data, even if a subpoena or a warrant would have otherwise been required. So, that's incumbent upon Congress. If they are so outraged about this, they could actually take action, as shocking as that sounds, to prohibit this type of activity. And I think that's the effort that Senator Wyden is going to make. And he might get some bipartisan support for it from all corners of kind of the civil libertarian-minded members of Congress.
Dave Bittner: Yeah, that was going to be my next question. I was going to ask you what the NSA would have to do to buy this data from the data brokers in a proper way. But they're already doing it in a proper way, because there's no prohibition against them doing it.
Ben Yelin: Right. You know, there might be something morally objectionable about it. The NSA has voluntarily discontinued other programs where they had the legal authority, but they got so much pushback, they just stopped doing it. The one that comes to mind for me is something called "about" collection, where for foreign intelligence purposes, they weren't only collecting internet traffic to or from an overseas terrorist target, but also any traffic, even if wholly domestic, that was about that target. So, if you and I mentioned terrorist X in an email, in a wholly domestic online communication, that was eligible for warrantless collection. The NSA got all this pushback about it. And in 2017, they said, you know what, screw it. It's not worth getting angry letters from Ron Wyden every three weeks. Let's just stop it. They have actually been authorized to resume that type of collection through FISA reauthorizations, but to my knowledge, they have not done so. So, this could be another circumstance where members of Congress shaming the agency might end up having as much of an impact as actually passing a law, especially if you have an administration that's sensitive to these types of concerns.
Dave Bittner: Yeah, that's interesting. So, what happens next? I mean, are we waiting on Senator Wyden to perhaps put together some legislation, see if he gets support from other senators, and so on?
Ben Yelin: Yeah. So, he already has put forward legislation that would prohibit not just the NSA but all government agencies from purchasing any data that would otherwise require a warrant or a subpoena. And there have been hearings on this legislation. Like I said, there is bipartisan support for it. The other side of the coin is there is bipartisan opposition. We've talked about this a lot: the parties don't neatly align on surveillance issues. There's the Ron Wyden type of liberal surveillance opponent, then there's the Rand Paul type of conservative libertarian; Mike Lee in the Senate is another one. But then you have, I guess -- I don't want to upset anybody here -- the political establishment that's generally more pro-security state. Think of Chuck Schumer and Mitch McConnell and Nancy Pelosi, those types of people who have generally been supportive of giving agencies the power to protect us against these foreign threats. So --
Dave Bittner: The people who were in office during 9/11.
Ben Yelin: Exactly. Who went through that trauma. And it's true.
Dave Bittner: Right. I mean, I don't mean to -- I don't -- you know, my laughter is ironic, not dismissive.
Ben Yelin: Yeah, I mean, it is hard to be in that situation, having gone through that with some level of responsibility, and not want to do everything possible to prevent it from happening again. So, you know, it's hard to prognosticate about the prospects of something like this going through Congress, because it's been such a mixed bag for the civil libertarian coalition in getting any sort of substantive anti-surveillance legislation passed. It's just really hard to tell. But we'll keep following it, that's for sure.
Dave Bittner: Yeah, absolutely. All right. We will have a link to that story in the show notes. Ben, my story this week comes from the folks over at Lawfare. And they're chiming in on some interesting developments with the states of Texas and Florida. They've got a case coming up in front of the Supreme Court; this is the NetChoice case, which we've discussed here. Just quickly, before we jump into the recent pivot here, can you give us a little description of what the NetChoice case is about?
Ben Yelin: Sure. So, these are dueling cases from Texas and Florida. Each of them passed statutes that restrict social media platforms -- I think it's subject to a certain size threshold, so really the large platforms -- from discriminating against certain political content. So, it's a requirement of viewpoint neutrality: whatever oversight mechanism they use for rooting out what they want to take down from a website, they can't take things down on account of political opinions.
Dave Bittner: Right. So, in other words, Texas and Florida were upset because, in their legislatures' view, these big social media platforms were being unfairly biased against conservative views.
Ben Yelin: Now, for our conservative listeners, they have a point when it comes to certain subjects. There were two high-profile incidents of this happening. One was the so-called Hunter Biden laptop, which, to get really meta about this, I don't even know what the laptop is about anymore. But it is true that Twitter, as it was called at the time, restricted access to the story for a couple of days because members of the intelligence community and other observers told the Twitter content moderation team, "Hey, this looks like a Russian operation, you should tread really carefully here." And it turns out it was not a Russian operation. There was such a thing as Hunter Biden's laptop. So, I certainly think they had a right to be angry about that. And then the other one is the effort to police disinformation or misinformation as it related to the COVID-19 pandemic. There are some people who I think are unhealthily obsessed with this; one of them is former data guru Nate Silver, who seemingly only tweets about content moderation policies related to the so-called lab leak theory. I want him to start talking about statistics again, but I can't get my wish. Anyway, the point is that these social media companies tried to crack down on what public health officials were telling them was misinformation about the COVID-19 pandemic. It turns out, a lot of that either wasn't necessarily misinformation or was the subject of a legitimate dispute. And so, I think legislators in Florida and Texas felt particularly aggrieved; they thought their viewpoints were being subjected to discrimination by these big-tech-controlled social media companies operating out of Silicon Valley.
Dave Bittner: Yeah. So, the platforms, the big social media platforms sued saying that these laws were a violation of their First Amendment rights. And as you and I have discussed here, that seems right. You are allowed to -- as a private company, you're allowed to moderate your platform as you see fit.
Ben Yelin: Yeah, I mean, I have read all of the legal briefs on it. I am still very dubious about the case that Florida and Texas are trying to make. This seems to me like compelling a private company to platform speech even if for whatever reason they don't wish to do so. You know, there's some case law around this. I don't think any of the case law necessarily helps Florida or Texas' case. But we'll see. I mean, it's a new day. They are going to be in front of friendly courts so it's possible that these cases create a new precedent and a different outcome.
Dave Bittner: Yeah. So, in some of their recent legal briefings, Texas and Florida have pivoted and are trying to make the case that this isn't a First Amendment issue but a civil rights issue, that platforms must not discriminate, and they're comparing this discrimination to racial or gender-based discrimination.
Ben Yelin: Can I -- I will not swear.
Dave Bittner: Okay.
Ben Yelin: But, to me, this is horse bleep. Can we just talk about the differences between gender and race-based discrimination versus viewpoint discrimination?
Dave Bittner: Please.
Ben Yelin: First of all, one of those is an immutable characteristic. You can't change your race or sex. You are born that way, in the words of Lady Gaga. And that applies to other immutable characteristics as well. I mean, certainly, there have been arguments about things like sexual orientation and gender identity.
Dave Bittner: Sure.
Ben Yelin: That is not true as it relates to our viewpoints. You can always change your viewpoint. A lot of people in history agreed with things that we would find very objectionable today. So, that's one element of it. The other factor that goes into this is a history of discrimination, and whether whatever group is allegedly being discriminated against lacks power in our political process. Neither of these factors applies here either. You can make any argument you want -- and I just gave all of this credence to these conservative arguments -- but I have a hard time believing that there is historically rooted discrimination against political conservatives in free speech cases. I just personally don't think there's a strong case to make on that. You're welcome to disagree with me. But certainly, they're not politically powerless. And that's kind of the standard the court looks at when it applies strict scrutiny to these types of classifications, race-based or potentially gender-based classifications. So, it just strikes me as a very specious argument. I don't think it's going to be well received by the courts. I think they're using the language of discrimination because it sounds similar. But there are, to me, very obvious differences between racial, gender, and sex discrimination versus viewpoint discrimination.
Dave Bittner: Yeah. What happens if they got their way on this? In other words, if this kind of moderation were suddenly categorized as not moderation but discriminatory in the way that racial discrimination is?
Ben Yelin: See, I keep thinking about this, and I kind of want to go over the parade of horribles of what this might look like in conservative institutions.
Dave Bittner: Okay.
Ben Yelin: So, let's say, you know, I wanted to speak at an NRA conference. And my viewpoint was that guns basically should be banned; we should have a European-style ban on all handguns. And they told me, "We don't want to platform this at our conference. We support Second Amendment rights." Would I be able to sue them for viewpoint discrimination? They're a private organization, just the way Twitter -- or X -- and Facebook, etc., are private companies. So, would that be viewpoint discrimination? I mean, they couldn't discriminate against me on the basis of race because of our civil rights laws, but certainly, I think they have associational rights to choose who will speak at a conference or represent them or be platformed by them. Or let's say -- I'm trying to think of more conservative institutions -- you work for, let's just say, a contractor who does housework, and somebody comes in and says everything this company does is environmentally unfriendly, ethically challenged, doesn't align with my views on the importance of federal regulations. Would that company not be able to fire that person, even though their views went against the mission of the company, which is to be free from burdensome government regulations? I realize these metaphors might seem like a stretch, but I just think that's the Pandora's box we'd be opening if you start to talk about viewpoint discrimination as opposed to discrimination based on immutable characteristics.
Dave Bittner: Yeah. I will quote again from this article in Lawfare, which is written by Daphne Keller. In the final paragraph, she writes, "The states' 11th-hour reinvention as defenders of civil rights is unlikely to fool the court and it shouldn't steal focus from the real issues in NetChoice. This really is a case about online expression rules, the editorial rules set by the platform, the ones users might prefer, and the ones states have chosen to impose. It should be decided on that basis."
Ben Yelin: Yeah, I think that sums it up pretty well.
Dave Bittner: Yeah. I just find it odd -- I mean, this is the classic case of throwing everything at the wall when you have a bad hand, just to see if something works. I guess it strikes me as funny that we're all the way up at the Supreme Court, and this is the shenanigan we're trying.
Ben Yelin: Yeah. I mean, look, to an extent, that is what lawyers do all the time, it's like, here, accept this passionate argument I've made. But if you don't accept it, here are six alternative arguments. Any good lawyer is going to do that. But there is kind of -- there's a limit on what I think even a very conservative Supreme Court is going to be willing to accept. And when it comes to this type of viewpoint discrimination being reflected in civil rights laws and precedents, I just -- I don't think there's an argument there. But it's not up to me, it's up to five members of the Supreme Court.
Dave Bittner: There you go. You don't think they're going to be unanimous on this one, Ben?
Ben Yelin: You know, as much as I would like them to be on this particular question, not about the whole case, but just on this particular argument, I don't necessarily see that happening.
Dave Bittner: No, fair enough. All right. We will have a link to this story in the show notes. And, of course, we would love to hear from you. If there's something you'd like us to consider for the show, you can email us, it's caveat@n2k.com. [ Music ] All right. Well, Ben, I recently had the pleasure of speaking with Josh Harguess, he is the AI Security Chief at an organization called Cranium. And our discussion centers on how the US compares with other regions of the world looking to regulate AI. Here's my conversation with Josh Harguess.
Josh Harguess: Obviously, AI has been exploding over the past year, you know, mostly due to, you know, generative AI, you know, ChatGPT, large language models, these types of things. But obviously, that is not the only kind of artificial intelligence that's been in development. This has been coming, you know, for the past decade or so. And the US, as far as regulation is concerned, has been thinking about this, you know, governance for some time, you know, from the inception of things like the Joint AI Center, which eventually became the Chief Digital AI Office, and then things like the National Security Commission on AI Report that came out. So, this is not, you know, taking anybody by huge surprise. You know, the US has been thinking about this. But I will say that the recent news from the EU with their AI Act has definitely put them sort of ahead of us as far as, you know, thinking about actual regulations, penalties, you know, what are going to be the costs of, you know, doing wrong, you know, "with AI". But, you know, that's kind of a bit of the history and landscape of where we are now.
Dave Bittner: Can you give us an overview of what exactly is in the EU's AI Act?
Josh Harguess: Sure. I can give some high-level things. So, really, what they're most focused on is a risk-based approach. Essentially, what are the high-risk uses of AI, what are some of the lower-risk uses, with the focus really on those high-risk uses. Some examples there: medical devices, vehicles, influencing elections obviously is a big one, critical infrastructure. But there are some things in there that are a little bit surprising too, things like education, recruitment and HR, worker management. So, there are some things that maybe some folks wouldn't consider high-risk, but the EU is definitely putting them at the top of the list. And obviously generative AI -- in their case, they're calling it general purpose AI, GPAI -- is really the focus. They have a lot of thoughts around transparency requirements, documentation, copyright, safeguards, these kinds of things. And really, how they're looking at this is as a blanket response. They want to take sort of an umbrella approach of, you know, thou shalt do these things, thou shalt not do these things, with financial penalties that actually come with that.
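As a rough illustration of the risk-tier structure Josh describes, here is a minimal sketch in Python. The tier names follow the Act's general risk-based framework, but the use-case mapping below is a paraphrase of the interview, not the regulation's text.

```python
# A simplified model of the EU AI Act's risk tiers. Tier names follow the
# Act's general framework; the use-case assignments below are illustrative
# paraphrases of the discussion, not legal text.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict obligations: conformity assessment, documentation, oversight"
    LIMITED = "transparency obligations, e.g., disclose that users face an AI"
    MINIMAL = "largely unregulated"

USE_CASE_TIERS = {
    "medical devices": RiskTier.HIGH,
    "vehicle safety components": RiskTier.HIGH,
    "election influence": RiskTier.HIGH,
    "critical infrastructure": RiskTier.HIGH,
    "education and exam scoring": RiskTier.HIGH,
    "recruitment / HR / worker management": RiskTier.HIGH,
    "customer-facing chatbots": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

def obligations(use_case: str) -> str:
    # Unknown use cases default to MINIMAL here purely for the demo.
    tier = USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.name} -> {tier.value}"

print(obligations("recruitment / HR / worker management"))
```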
Dave Bittner: Yeah. That was going to be my next question: to what degree is the EU taking a carrot approach versus a stick approach?
Josh Harguess: Certainly. So, I think with their guidance, they're trying to roll out a carrot approach: these are the types of things we want to see from organizations as far as AI security, AI assurance, AI governance, trying to encourage folks to do the right thing. But certainly, the stick is large. So, in I think their latest draft, up to 38 million US dollars or about 7% of global turnover, depending on the size of your organization. So, quite a large penalty, especially compared to some of the regulation and penalties that we saw around cybersecurity.
Dave Bittner: And how does all of this contrast with what we're seeing here in the US?
Josh Harguess: Yeah, a great question. So, a very different approach so far. From the US's point of view, we're seeing this as more of an agency-driven approach. Essentially, each agency within the US is taking its own approach; there's no centralized organization of this yet. We do have the executive order that did come out, on safe, secure, and trustworthy development and use of AI. That came out late last year. And that gives kind of an umbrella of guidance on where we think we're headed as far as governance is concerned. But it hasn't given any sort of direct guidance as far as penalties and things like that. Instead, we're seeing an agency-driven approach. So, for example, the Department of State actually gave its readout of how it wants to develop AI and be responsible for AI, and listed out its four goals: leverage secure AI infrastructure, foster a culture that embraces AI technology, ensure AI is applied responsibly, and innovate. So, they're very much more focused on, you know, how do we accelerate what we're doing within our organization, not as much on restricting the use of AI or thinking about governance quite yet. And there are other folks. For example, I mentioned them earlier, the Chief Digital AI Office; they have something called the responsible AI toolkit that lays out guidelines for how to develop responsible uses of AI. That's mostly within the Department of Defense. And they have this pyramid that lays out a kind of hierarchy of needs for AI. But really, nothing quite sticks yet as far as the carrot-and-stick comparison. We do expect to see that coming, probably this year. And then beyond that, I would say what we're going to expect to see from agencies like the FDA, for example, are agency-driven approaches to this type of governance. And they'll probably look toward things coming out of the White House, but then also very much toward the EU AI Act, just because of the global nature of the development of these systems and the global nature of the economy around this technology.
Dave Bittner: You know, when it comes to data privacy, it seems like most of the action here in the US has been at the state level. Are we seeing any movement with AI at that level?
Josh Harguess: Certainly. So, California, for example, is leading the way with its own AI regulations. We should see those roll out early this year. Privacy is definitely an important part of that, as well as AI governance in general. And this does tend to happen, where California leads the way in some of these areas just by the sheer nature and size of the state. So, we can imagine California rolling this out and then some states starting to follow suit.
Dave Bittner: What about for organizations that are multinational, you know? If I'm a big tech company and I'm operating all over the world, is my navigation of this going to be similar to the way that I have to deal with something like GDPR?
Josh Harguess: Yes. Great question. And I think that is the right analogy, just given the fact that this is a global market. If you're a global organization and you have any users in the EU, for example, then you're really going to have to look toward the strictest regulations that are out there. There may be some leeway in some other countries, but once you're writing your own internal policy and governance, just like with GDPR, you probably want to take a kind of holistic approach to it.
Dave Bittner: Where do you suppose companies stand these days when it comes to their attack surface? You know, I feel as though a lot of folks have a good handle on their sort of -- I hate to use the word old-school, but their old-school cybersecurity attack surface. But it feels like AI has thrown in a bunch of new wrinkles.
Josh Harguess: Yeah, that's correct. We get that question a lot. The first level of protection, and what we're actually hearing about the most from organizations, is really just: what is in my system? When we have a huge organization with a bunch of data scientists, and maybe some business units that are interested in using AI, really the number one question we get is, what is in my system? Is somebody using AI that we're not aware of? What types of AI are being used? And then there's this concern of shadow AI. So, maybe you do have a policy of, you know, thou shalt not use ChatGPT in the organization for XYZ, and you're not aware of the fact that somebody may actually be using it.
Dave Bittner: Right. It's on your phone, you know.
Josh Harguess: Yeah, exactly.
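As one illustration of the "what is in my system" visibility question, here is a minimal sketch of a shadow-AI scan that searches a codebase for imports of well-known AI libraries and references to hosted model APIs. The package and endpoint lists are illustrative assumptions, not an exhaustive inventory, and this is a sketch of the idea rather than any vendor's actual product.

```python
# A minimal shadow-AI discovery sketch: walk a source tree, flag files
# that import common AI/LLM libraries or reference hosted model APIs.
# The lists below are illustrative, not exhaustive.
import re
from pathlib import Path

AI_PACKAGES = {"openai", "anthropic", "transformers", "torch",
               "tensorflow", "langchain", "sklearn"}
API_HOSTS = {"api.openai.com", "api.anthropic.com"}

def scan(root: str) -> dict[str, set[str]]:
    findings: dict[str, set[str]] = {}
    for path in Path(root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        # Match "import pkg" or "from pkg ..." at the start of a line.
        hits = {pkg for pkg in AI_PACKAGES
                if re.search(rf"^\s*(import|from)\s+{pkg}\b", text, re.M)}
        # Also flag hard-coded calls out to hosted model endpoints.
        hits |= {host for host in API_HOSTS if host in text}
        if hits:
            findings[str(path)] = hits
    return findings

if __name__ == "__main__":
    for file, hits in scan(".").items():
        print(f"{file}: {sorted(hits)}")
```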
Dave Bittner: Yeah. What do you suggest, you know, for organizations who are looking to get a leg up on this? Is there a specific type of technology or talent that they should be pursuing?
Josh Harguess: Yeah. So, that's a really good question. The talent is a difficult one. As most of the listeners of this podcast would know, we already have a shortage in cybersecurity. We have a shortage in AI as a whole as well. And then when you get to that intersection, folks who know something about both AI and cybersecurity, those are kind of the unicorns right now. I mean, there are very few of them. So, I think education is a big piece of this: ramping up folks on the cybersecurity side on AI technology and vice versa. That's a big part of it. But as far as what we're seeing and suggesting, really it's that visibility piece first: what's on your system. Then understanding vulnerabilities once you do know what's on your system, so mapping things to, for example, MITRE ATLAS, which is a threat matrix developed at MITRE, similar to MITRE ATT&CK but focused on AI. Things like the OWASP Top 10s for machine learning and for large language models. So, trying to understand the threat mapping from your system to known threats. And then there's that next piece of compliance: how do you tell someone else that you're doing the right thing, that you're adhering to these pieces of governance coming out of the EU and the US government and different agencies?
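Building on the visibility piece, here is a minimal sketch of the threat-mapping step Josh describes: tagging each inventoried AI asset with the adversary techniques that plausibly apply to it. The technique names reference the public MITRE ATLAS matrix and the OWASP Top 10 for LLM Applications, but the specific asset-to-threat assignments are hypothetical; verify them against the live matrices at atlas.mitre.org and owasp.org.

```python
# A sketch of threat mapping: tag each inventoried AI asset with the
# adversary techniques that apply to it. Asset names and assignments are
# hypothetical; check technique names against the live ATLAS and OWASP lists.
from dataclasses import dataclass, field

@dataclass
class AIAsset:
    name: str
    kind: str                                   # e.g., "llm-app", "vision-model"
    threats: list[str] = field(default_factory=list)

inventory = [
    AIAsset("support-chatbot", "llm-app",
            threats=["ATLAS: LLM Prompt Injection",
                     "OWASP LLM01: Prompt Injection",
                     "OWASP LLM06: Sensitive Information Disclosure"]),
    AIAsset("invoice-ocr", "vision-model",
            threats=["ATLAS: Craft Adversarial Data",
                     "ATLAS: Poison Training Data"]),
    AIAsset("churn-model", "tabular-classifier"),   # no mapping yet
]

for asset in inventory:
    status = ", ".join(asset.threats) or "no known mapping -- candidate for red-teaming"
    print(f"{asset.name} ({asset.kind}): {status}")
```

An asset with no mapping is exactly the kind of blind spot that motivates the red-teaming discussion later in the interview.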
Dave Bittner: Do you see the rest of this year, you know, 2024 as we look at it, do you expect that this is going to be a year of volatility in terms of how people are dealing with AI?
Josh Harguess: Yeah, I think so. You know, there are a lot of predictions that this is the year of AI, even though we saw a massive explosion last year. And I really think this year is the year of more mature adoption of AI. So, we're going to see two things, in my opinion. I think we're going to see organizations trying to mature the AI pieces of their organization, and that includes trying to secure that AI so they're not susceptible to known attacks. But then the second thing is there's definitely going to be a little bit of volatility in some of these vulnerabilities that are out there. You know, maybe we'll see a big breach this year; we're not quite sure yet. But we haven't seen the kind of breach that puts the fear into folks around some of these technologies. And part of that, I believe, is because we're thinking about this ahead of time. This is not the same as the internet era, where we really didn't think about cybersecurity until the internet had reached sort of everybody's doorstep.
Dave Bittner: Yeah, that's a really interesting insight that, you know, what we've been through with the growth of the internet has led us to be proactive rather than reactive here.
Josh Harguess: That's right. Exactly.
Dave Bittner: When we're talking about, you know, technology and the people who are applying it, you know, what about red-teaming, how about folks who are on that side of the fence?
Josh Harguess: Yeah, a good question. So, we know red-teaming is a very valuable tool within cybersecurity, and the methodology is also very valuable within AI. The idea here is: how can you discover new vulnerabilities within your AI system? A couple of folks and myself developed an AI red-teaming methodology during our time at MITRE, and we've continued to work on this at Cranium. So, what kinds of talent do you need on that team? How do you stand up that team, whether it's folks with adversarial machine-learning backgrounds, cybersecurity backgrounds, pen testing, things like that? Then there's the execution of red-teaming: defining objectives, deciding what type of system you plan to run the red-teaming exercise on, building the attack out, launching that attack, and then finally doing that final impact analysis on the business aspects or the mission aspects. And then sharing those findings out to the broader community and the broader teams, such as a blue team or the rest of the organization doing development. And really what this gives you is those areas of the attack surface that you aren't aware of, that may not be currently mapped to ATLAS and OWASP and these other sorts of repositories. And so, it becomes a very important tool for really understanding your AI security posture when it comes to these items.
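As a concrete, heavily simplified example of the "build and launch the attack" step, here is a sketch of the Fast Gradient Sign Method (FGSM), a classic technique from the adversarial machine learning literature. It is offered purely as an illustration of the kind of probe an AI red team might run against a vision model, not as Cranium's or MITRE's actual methodology.

```python
# A minimal FGSM sketch (assumes PyTorch). The attack nudges each input
# element a small step in the direction that increases the model's loss,
# which is often enough to flip a classifier's prediction.
import torch
import torch.nn.functional as F

def fgsm_attack(model: torch.nn.Module, x: torch.Tensor,
                labels: torch.Tensor, epsilon: float = 0.03) -> torch.Tensor:
    """Return an adversarially perturbed copy of the input batch x."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), labels)
    loss.backward()
    # Step each input element by +/- epsilon along the sign of the gradient.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()  # keep pixel values in a valid range

# Usage sketch (hypothetical helpers): compare clean vs. adversarial
# accuracy, feed the delta into the business-impact analysis, and share
# the findings with the blue team.
# acc_clean = evaluate(model, test_loader)
# acc_adv   = evaluate(model, perturbed(test_loader, fgsm_attack))
```

[ Music ]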
Dave Bittner: So, interesting stuff, Ben. What do you think?
Ben Yelin: Yeah, very interesting. I've been closely following the EU AI Act, I think we're at like step 50 of 300 of getting this enacted into law. I've had to study how the European Parliament works in order to figure out how this is going to advance. But I do think this is really going to change the industry because companies are going to have to figure out how to comply with this EU law even in the absence of US legislation. And then you have to worry about the potential conflict, not just with the US federal law for compliance but also with the bunch of state statutes that might be passed in the coming years. So, this is definitely something we'll be following going forward.
Dave Bittner: All right. Well, our thanks to Josh Harguess for joining us. Again, he is a former team lead for the MITRE AI red-teaming group and he is the AI Security Chief for Cranium. We appreciate him taking the time. [ Music ] That is our show. We want to thank all of you for listening. A quick reminder that N2K strategic workforce intelligence optimizes the value of your biggest investment, your people. We make you smarter about your team while making your team smarter. Learn more at n2k.com. Our Executive Producer is Jennifer Eiben, the show is edited by Tré Hester, our Executive Editor is Peter Kilpe. I'm Dave Bittner.
Ben Yelin: And I'm Ben Yelin.
Dave Bittner: Thanks for listening. [ Music ]