Caveat 1.6.22
Ep 107 | 1.6.22

AI is transforming surveillance.

Transcript

Richard Carriere: AI keeps improving. It integrates people, like, with different smaller or less visible aspects related to - whether it's ethnic background, skin complexion, like, the type of hair and, in our case, even masks.

Dave Bittner: Hello, everyone, and welcome to "Caveat," the CyberWire's privacy, surveillance, law and policy podcast. I'm Dave Bittner, and joining me is my co-host Ben Yelin from the University of Maryland Center for Health and Homeland Security. Hello, Ben. 

Ben Yelin: Hello, Dave. 

Dave Bittner: On today's show, Ben tracks down the controversy over Apple's recently released AirTags. I review regulatory changes coming in 2022. And later in the show, my conversation with Richard Carriere of CyberLink to discuss how AI is transforming surveillance. While this show covers legal topics and Ben is a lawyer, the views expressed do not constitute legal advice. For official legal advice on any of the topics we cover, please contact your attorney. 

Dave Bittner: All right, Ben, let's jump in with some stories this week. Why don't you start things off for us? What do you got? 

Ben Yelin: So my story comes from The New York Times by Ryan Mac and Kashmir Hill, and it is about the Apple AirTag. It was released in 2021. 

Dave Bittner: Yep. 

Ben Yelin: We can say that now. We're no longer in it. 

Dave Bittner: Last year. Yeah. 

Ben Yelin: Yeah, so last year. 

Dave Bittner: (Laughter). 

Ben Yelin: But the AirTags are a way of helping you find things. So if you're like me, you lose things frequently. 

Dave Bittner: (Laughter). 

Ben Yelin: And having the AirTag, you know, attaching it to that thing, whether it's your car keys or a remote control or anything... 

Dave Bittner: A child. 

Ben Yelin: Child is nice. Yeah. 

Dave Bittner: Right, right. 

Ben Yelin: Anything you don't want to lose... 

Dave Bittner: Right. 

Ben Yelin: ...You attach your AirTag to that thing. 

Dave Bittner: Yeah. 

Ben Yelin: Of course, with any technology like this, you're going to get some privacy concerns. And we're starting to see these stories of people getting notifications on their phones, saying there is an AirTag that's been in your vicinity for the last six hours. 

Dave Bittner: Right. 

Ben Yelin: So the hook - they talk to this woman from Los Angeles who got one of those notifications. She does not own an AirTag, but something had been attached somewhere near her for a period of six hours. Apple, to their credit, has set up a system where you're alerted if, you know, you've been followed around by an AirTag... 

Dave Bittner: Right. 

Ben Yelin: ...To protect against AirTags being used as a surveillance tool. So you can see why a criminal, for example, might want to use it - or even an enterprising law enforcement agency. Instead of, you know, tacking it onto your child, tack it onto somebody's car to follow them. 

Dave Bittner: Right. 

Ben Yelin: You know, tack it to your ex-wife's vehicle, you know, to see if they're cheating on you. Tack it to... 

Dave Bittner: I saw one yesterday on Twitter where someone was tracking their luggage being stolen from the airport in real time thanks to an AirTag. 

Ben Yelin: That's funny. That's always the worst when... 

Dave Bittner: Yeah. 

Ben Yelin: ...You're seeing what the criminal is doing with your own stuff. It's like... 

Dave Bittner: Right, right. 

Ben Yelin: Oh, no. He's taking it to a gas station. 

Dave Bittner: Yeah. 

Ben Yelin: Don't pay those exorbitant gas prices, you know... 

Dave Bittner: Right, right. 

Ben Yelin: ...With my suitcase in your car. 

Dave Bittner: But the stalking issue, of course, is a serious one. And so what are the issues that are being brought up here? 

Ben Yelin: So stalking is a very serious issue. You have a situation where, if somebody wanted to follow somebody, you just have to get, you know, one of these AirTags on something that belongs to them - so a car, a device, anything. And then at least for a short period of time, the person will not know that they're being followed or surveilled. They're not going to know that they're being stalked. Apple, largely in response to these concerns, you know, and because they're an enterprising company that cares about privacy, has set up this system where you get an alert on your phone. You know, you've been near an AirTag for a long period of time. 

Dave Bittner: Right. 

Ben Yelin: The problem with that is most people who would get that alert don't have an AirTag and have no idea what that is or what it means. You know, I can just imagine somebody getting an alert like that. It just - you know, it's completely contextless. 

Dave Bittner: Yeah. 

Ben Yelin: So Apple is saying that they're taking that concern very seriously. They set up that feature that informs users. They also encourage people to call local law enforcement. That's not really a very foolproof solution. In a couple of the anecdotes here, people did call law enforcement, and law enforcement was basically like, yeah, we don't really know what to do with that information. File a police report, and we'll see what happens. That goes, you know, into the vast universe of police reports... 

Dave Bittner: Right. 

Ben Yelin: ...Somewhere up in the sky where... 

Dave Bittner: Right, right. 

Ben Yelin: ...Nothing happens. 

Dave Bittner: Right, right. Yeah. I remember when I was a kid and I got a bicycle stolen, and it was a little bit of, you know, cynicism added to my life when I realized that my local police department was not going to aim all of their resources at getting back my stolen bicycle, you know? 

Ben Yelin: I know. Everybody discovers that for the first time. It's like... 

Dave Bittner: Right, right. 

Ben Yelin: You file a police report. And if you actually ask an honest cop, like, what's going to happen with this, 9 times out of 10, they'll be like, oh, no. This is for insurance purposes. 

Dave Bittner: Right, exactly. 

Ben Yelin: I should say more like 99 times out of 100. 

Dave Bittner: Yeah, yeah. So, you know, a couple of things I find curious about this story. First of all, Apple is not the first company to come out with this sort of tracking technology. The company Tile has had little tracking tags for years. And it's always interesting to me that it seems as though, you know, nothing gets attention - nothing generates clicks like a negative story about Apple, right? 

Ben Yelin: Right. 

Dave Bittner: So we see... 

Ben Yelin: And I get it. I mean, they hold themselves up as this, you know - the standard for privacy and security. 

Dave Bittner: Right. 

Ben Yelin: So I don't want to say they're asking for it, but they do invite this increased scrutiny. But I agree with you. It's not always fair. 

Dave Bittner: So where do we come down in the middle, though? I mean, obviously this technology has legitimate uses. I think having this on your luggage, for example, is a great idea. You could put one in your own car so if your car got stolen, you could try to - you know, I can think of all kinds of great uses for this. But on the other hand, I do see the problem with it. It seems to me like Apple is in good faith trying to put things in place to minimize the stalking potential here. But is the stalking potential so great that a product like this shouldn't exist? 

Ben Yelin: You know, I wouldn't go that far. I think if you're going to have a product like this, it's best to put it in the relatively responsible hands of a company like Apple. They're not perfect. They make a ton of mistakes. We cover all of their mistakes. But they do respond to this feedback. The story also mentions there was a security feature where if the device has been disconnected from its home smartphone, so to speak - the smartphone that controls it - for more than 72 hours, then the AirTag will start beeping. And they shortened that to 24 hours to try and prevent, you know, this type of stalking from occurring. So if you put it on somebody else's device, after 24 hours, it'll start beeping. Twenty-four hours is still a long time. And you might be able to fulfill your stalking goals, whatever they are, within those 24 hours. 

Dave Bittner: (Laughter) Right, right. Yeah. Huh. 

Ben Yelin: So, yeah, I mean, my opinion is not that they should take this device off the market. I think it's a useful device, especially if somebody loses things all the time. You know, I think in the long run, it's just incumbent upon Apple to continue to make these adjustments. We are going to see viral stories on TikTok about, you know, people discovering AirTags on the back of their license plates. That's going to be bad publicity for them. It's going to make people question whether these devices should exist. You know, I wouldn't go that far. I'm generally more amenable when the device itself has some actual practical use and the benefits, at least conceivably, might outweigh the costs. 
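
To make that alert-and-beep behavior concrete, here is a rough sketch in Python of the two heuristics discussed above: warning a phone owner when an unknown AirTag has been traveling with them, and having the tag beep after a long separation from its paired phone. Apple's actual implementation is proprietary; the names and the six-hour figure below are illustrative assumptions, with only the 72-to-24-hour change taken from the discussion.

    # Rough sketch (not Apple's actual implementation) of two anti-stalking
    # heuristics: alert the phone owner about an unknown tag traveling with
    # them, and have the tag beep after long separation from its owner.
    from dataclasses import dataclass

    NEARBY_ALERT_HOURS = 6      # hypothetical alert threshold
    SEPARATION_BEEP_HOURS = 24  # per the discussion: shortened from 72 to 24

    @dataclass
    class TagSighting:
        tag_id: str
        hours_nearby: float         # how long this tag has been near the phone
        paired_to_this_phone: bool  # is it the phone owner's own tag?

    def should_alert_owner(s: TagSighting) -> bool:
        """Alert if a tag we don't own has been traveling with us too long."""
        return not s.paired_to_this_phone and s.hours_nearby >= NEARBY_ALERT_HOURS

    def should_tag_beep(hours_since_owner_contact: float) -> bool:
        """The tag itself beeps after a long separation from its owner."""
        return hours_since_owner_contact >= SEPARATION_BEEP_HOURS

    print(should_alert_owner(TagSighting("tag-1", 6.0, False)))  # True
    print(should_tag_beep(25.0))                                 # True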

Dave Bittner: Yeah. I saw another interesting story not long after these came out where there was a gentleman who had - he bought himself a little electric scooter. He worked in a city, and so he would use his electric scooter to get around town. And he was afraid that it would get stolen, so he put AirTags on the scooter. But he actually put two AirTags on the scooter. He put a decoy AirTag that was very easy to find. 

Ben Yelin: Ah, very smart. You never know which one is the - yeah. 

Dave Bittner: Well, so he hid one very carefully so that it wasn't easy to find, but he put a decoy one that was easy to find. So if the thieves stole his scooter, they would quickly find the decoy one, remove it and think they were good to go. And sure enough, that's what happened. 

Ben Yelin: This person - yeah. The criminals have not watched enough heist movies because... 

Dave Bittner: (Laughter). 

Ben Yelin: ...I've seen that - you know, you always have to put a decoy out there. 

Dave Bittner: Yeah, so this guy was able to track the scooter down to a local pawn shop, that sort of thing. And sure enough, he was able to get the police to come, and he did get his scooter back. But... 

Ben Yelin: Very smart. I mean, maybe for every AirTag we produce, we just have to produce, you know, one decoy. So that... 

Dave Bittner: Right. 

Ben Yelin: ...You can fake people out. 

Dave Bittner: Maybe there's a market there. Yeah. 

Ben Yelin: Yeah. 

Dave Bittner: Apple can sell AirTag blanks that don't have any... 

Ben Yelin: Exactly. 

Dave Bittner: ...Electronics inside. They just look like AirTags. 

Ben Yelin: The fake AirTags. Yep. 

Dave Bittner: Yeah, there you go. You know, that's a billion-dollar idea. 

Ben Yelin: Yeah. 

Dave Bittner: All right. 

Ben Yelin: But somehow we never get a million dollars for these ideas. 

Dave Bittner: We do not. We do not. No. Ideas are a dime a dozen. It's execution, Ben. Execution (laughter). 

Ben Yelin: I know. I know. That's what we have to keep telling ourselves. 

Dave Bittner: Right. Right. All right. Well, we will have a link to that story from The New York Times in the show notes. 

Dave Bittner: My story this week comes from The Wall Street Journal, written by James Rundle. And really, it's just sort of an overview of some of the things that might be coming this year in 2022 when it comes to regulation in the cybersecurity realm. The thing I wanted to focus on with you today, Ben, was reporting requirements. You know, there are lots of other industries where, when a bad thing happens, they are required to report it. The first thing that comes to mind for me is airlines. 

Ben Yelin: Yep. 

Dave Bittner: Accident happens... 

Ben Yelin: And, you know, all transportation covered by NTSB. 

Dave Bittner: Yeah. 

Ben Yelin: Yep. 

Dave Bittner: But I mean, even - my understanding is, like, even in, like, general aviation, if you have, you know, an issue with your plane, like, there are very specific rules about, you know, tracking the repairs and so on and so forth. 

Ben Yelin: Right. 

Dave Bittner: We don't have that in cybersecurity yet, and there is bipartisan support for a move to make that happen. They tried to put it in the Defense Authorization Act, but it got pulled out as part of the negotiations to get that passed. Where do you think we're going to see this go in the coming year? 

Ben Yelin: Yeah, I was not surprised to see that removed from the defense authorization bill. The defense authorization bill is always this behemoth, where it becomes kind of a Christmas tree. Anything that's vaguely related to security, you try and get it in the bill. And then negotiations whittle things down. Even though there is bipartisan support for these breach notification requirements, you know, I think it's a complicated enough issue that I understand why they weren't able to sneak it into a larger bill. I think this is going to have to be kind of an all-hands-on-deck approach, with a bunch of different stakeholders. 

Ben Yelin: One of the things this article talks about is the precarious state of cybersecurity generally in 2022. You know, obviously the pandemic continues, and we're going to be dealing with staffing shortages. That is certainly going to apply to our industry. And you combine that with the rapid increase in threats that we're seeing, the increased frequency of ransomware. When you start to institute mandatory reporting requirements, these are going to start to be very onerous on companies, particularly smaller companies that might not have the resources, might not have the compliance officers to know exactly what they have to report. 

Ben Yelin: So I think we - you know, it might take a little while, but we need to develop, like, a comprehensive framework, where we have very specific, actionable rules as to what a company - or any organization - has to do when they've been the victim of a breach. I understand why that didn't happen as part of the defense authorization bill. But I do think it is going to happen going forward. I think we are going to be able to develop those standards. We now have a national cyber director in Chris Inglis and a head of CISA, Jen Easterly, who are very committed to reform in this area. And with their leadership and interest on both sides of the aisle in Congress, I think this is something that will probably get done sooner rather than later. 

Dave Bittner: You know, I was reading through an interesting thread on Reddit yesterday for a bunch of chief security officers. And they were talking about incident response, and specifically responding to ransomware threats. And someone brought up the point that they've seen cases where, when an outside company has been brought in to help clean up the mess after a ransomware attack, sometimes they have been specifically requested not to submit a final report because of discoverability. 

Ben Yelin: Right. Right. 

Dave Bittner: So let's talk about that - the liability part of this. 

Ben Yelin: Yeah. I mean, that's what makes this so difficult and so complicated: if you think you're going to be held liable for a breach, it behooves you to bury all the bodies. And that's not exactly a, you know, favorable incentive structure. I don't know how you work around that, because certainly I think companies that are legitimately negligent, particularly when it comes to, like, PII - personally identifiable information - we want to hold them accountable. We want to have some sort of legal mechanism where they can be held liable. They're the ones who caused the loss of that information. But I also think, you know, maybe we have to be more careful and have some safe harbor provisions, where certain things - internal reports - might not be subject to discovery in federal litigation. I don't know exactly how that would work in practice. But it certainly is a problem. I think there's going to be less incentive to fully document exactly what happened - to do a type of after-action report - if that's going to be held against you once you get into discovery. 

Ben Yelin: You know, it's not a problem that's unique to this field. It's true when we talk about all different types of accidents. We see it in the transportation sector all the time. I do think the difference in this field is that it's still relatively new. So there isn't a uniformly agreed-upon best practice as to how to respond to these incidents, which means, you know, it's much harder to determine whether a company has actually been negligent, 'cause you're compared to your peers. You're compared to what a reasonable organization would have done. So in that sense - and I'm just thinking out loud here - maybe you can have a shorter safe harbor provision, where information might not be discoverable, at least for the next ten years - or five years or whatever - until we, you know, can figure out some justiciable standards. 

Dave Bittner: Is this a chicken-and-the-egg kind of thing, where, you know, we have to - we want to establish whatever the standards are, but we're still kind of reaching around in the dark to figure out what's reasonable? 

Ben Yelin: I think that's exactly what the problem is, you know, because the threats themselves are getting more sophisticated. And it takes a while once an attack happens - a ransomware attack or any breach of information - to figure out exactly what happened. You know, the best example recently that affected our state personally, here in Maryland, is that the Maryland Department of Health was the victim of some sort of cyber incident - it still hasn't been divulged whether it was a ransomware attack, although certainly that's been the speculation. The COVID dashboard, for example, in the state of Maryland, was down for a month while our cases were accelerating. So we were kind of in the dark. It had those kinetic effects. We were suffering the consequences. And we still basically have no idea what happened. You know, lots of people within the state suffered consequences. People couldn't apply for Medicaid funds. But it's still so soon after the incident, and the incident itself is so novel, that it would be impossible to develop a set of standards based on this incident, you know, to prevent it from happening again. So it is a chicken-and-the-egg problem. The threat landscape keeps expanding. And, you know, our universe of potential best practices will constantly be expanding as well. 

Dave Bittner: Right, right. I suppose part of this, too, is establishing, you know, what's the equivalent of a cyber fender bender, and what's the equivalent of a plane crash? 

Ben Yelin: A full-on - yeah. 

Dave Bittner: Yeah. 

Ben Yelin: Full-on collision - exactly. 

Dave Bittner: Like an actual train wreck, right? 

Ben Yelin: Right, right. You know, and that's very hard to define in the cyber realm. You know, you could say the monetary value of the information that's been breached, but that doesn't always capture the full extent of the consequences. You know, I don't think there was a lot of money to be earned from a lot of the breaches we've seen, but they were still very disruptive and still had, you know, pretty negative consequences on people's everyday lives. So I don't know exactly how you would measure that. 

Dave Bittner: Yeah. 

Ben Yelin: It's not as easy. With a fender bender, you go and get an estimate. It's just not the same in this cyber realm, where it's not always as simple as, here's the monetary value of the information that's been breached. And when it happens in the public sector, where you're talking about, you know, stealing people's personally identifiable information from government databases, the consequences can go beyond simple monetary value, so... 

Dave Bittner: Right. 

Ben Yelin: Yeah, it's just - it's really hard to determine. 

Dave Bittner: Yeah. All right. Well, for sure we've got an interesting year ahead of us in many realms. And so we'll see how... 

Ben Yelin: We sure do. 

Dave Bittner: ...This plays out. As you said, I think it's good to see that we've got some good folks in place at the federal level who both know what they're doing and seem to be in good faith, you know, really set on doing the right thing here. So I think that's... 

Ben Yelin: Absolutely. 

Dave Bittner: ...Good to see. Yeah, absolutely. All right. Well, we will have a link to that story in the show notes. We would love to hear from you. If you have a story or a topic you'd like for us to discuss on our show, you can email us. It's caveat@thecyberwire.com. 

Dave Bittner: Ben, I recently had the pleasure of speaking with Richard Carriere. He is from a company called CyberLink. And our conversation centers on artificial intelligence and the ways that that is transforming surveillance. Here's my conversation with Richard Carriere. 

Richard Carriere: Now, at the moment, we're at a very interesting point in time. The technology - if we go back even two years, it was the beginning of the emergence, if you want, of edge-based facial recognition - so edge-based computing. Finally, lots of offerings came up that were powerful enough to do this kind of processing without needing cloud processing, which causes all sorts of issues that we could talk about. So that's, like, where we were. But the market demand was not there yet. Facial recognition had been used for a long time, whether it's kiosks, like Global Entry, or phones - anyone who uses one brand of mobile phones - we probably give access to everything in our life using our face without thinking twice. 

Dave Bittner: Right. 

Richard Carriere: But other than that, a lot of press, a lot of opinions - what I call the doomsayers, who have existed since the discovery of fire - were there. And there were some good, valid points two, three years ago. Now the technology has caught up. So things such as the ability to easily spoof, or some biases that were there for different skin complexions or ethnicities - just the ability to measure vectors on the face to create that unique key was a bit more challenging. By and large, these things have been resolved today as AI has continued to improve the models. So now we're getting to the point of adoption. And if you read the press, they say it's terrible - like, this thing will never take off. But then just look everywhere around you. It's popping up left and right. Consumers are adopting. They're not afraid, as long as they see the convenience of a use case that requires facial recognition and they feel that it's provided by a trusted vendor. 

Richard Carriere: We already give a lot of our personal information to every hotel chain, bank and whatnot, so I don't think it's a problem. So we have that. Enterprises see the need. After COVID, a lot of offices have started reopening. Big, big companies - a lot of them will reopen their offices January 1. With all sorts of hybrid models, potentially, they'll have a lot of people who show up some days of the week and don't show up on others, creating access control needs. There's cost pressure, obviously - so finding solutions to reduce the cost of security, surveillance, controlling access flow. So there's lots of things where there's demand just in the business. I just came back from an executive summit for the retail and hospitality industry, where you had C-level people from about 20 companies that develop and provide technology for retail and hospitality. And we were mapping out, talking about what's coming - not in the next five to 10 years, but the next two, three years. In every single use case, the person brought up what they were doing plus facial recognition. And it's because their distributors, their end customers - whether it's hotels, restaurants, retail chains - demand that. I was very happy to be there because I was the facial recognition guy. So there are a lot of opportunities. But this is where we are right now in the market. So the technology is there. Customers are there. Businesses are there. So we just need to get running and make sure that what is done is done following all the rules it takes. And I can talk a bit on that after. But I think we're in a very good place for facial recognition at the moment. 

Dave Bittner: What do you and folks in the business see as some of the opportunities that are going to open up in the next couple of years when it comes to facial recognition technology making life better for consumers? 

Richard Carriere: No. 1, I would say, giving access and protecting access to everything around you, whether it's your car, whether it's your home, whether it's your work, online - we're talking physical access, then you have online access - your bank account remotely, your computer. This is where this thing is moving very fast, where people are comfortable. And they will see these things pop up left and right. 

Richard Carriere: The other one is around, I would say, identity verification - these things that today require an in-person meeting. So think of banking, think about buying a bottle of wine. You need to show an ID, and you need to prove that you are there. So ordering these things online or opening a bank account online is not feasible today. Even checking in at a hotel - if you go to the big chains, they say, oh, you can check in online, but when you show up at the hotel, you still go to the desk to show your ID and prove that it's you. So we have technologies there where we can identify the ID card on one side - whether it's a driver's license, identity card, passport - and then make sure at the other end that it's a real person, not a photo or video that spoofs. So this kind of thing - with COVID, people who were not already doing everything from home now want to, for the convenience. We'll see that. 

Richard Carriere: And I will say the third one is about just pure customer experience. Talk to anyone in retail, in hospitality, in the airline industry - they lost a lot of money not having customers. Now they're all very aggressive about gaining market share as things come back, or just staying in business. So, things like contactless everything. I think there's a pilot program right now at the Atlanta airport with Delta where, on international flights, you can literally go from the curbside all the way to your seat on the plane without touching anything or talking to anyone. You just go through. And it's all facial recognition-based. Imagine this versus the long queues that you sometimes have at airports. Talk about an improvement in customer experience, in streamlining things. 

Richard Carriere: Anywhere that you have bottlenecks, like sports stadiums - companies like Clear are starting to have their technology not just at airport security, but at sports stadiums, where people just walk in. In fact, they just go to a kiosk. Bring it to the next level, which is a technology like ours, and you can have a couple of cameras in front of each entrance, and people just walk into the stadium like they own the place. We deployed that - not at a sports stadium yet, but we deployed that in factories. One factory somewhere in Asia that implemented it had, I think, 20,000 employees. And there was a bottleneck of half an hour for employees to walk into the place and do all the checks for security, COVID and their security gear, and then another half hour to leave their workplace after the day. They replaced it with cameras and FaceMe technology, and employees don't even slow down. It's like going to the fast lane on the freeway. So they come in, they come out. We resolved the worst issue that company had with 1080p cameras and our software running on workstations that are in the four figures, not the six figures. And everybody is happy. 

Dave Bittner: How is the industry responding to all of the concerns about privacy? 

Richard Carriere: Well, it's not a new concern. And this is where I said at the beginning that the technology is ready and is mature enough to be deployed in almost every use case you can think of. The privacy-protection issue is not facial recognition itself. It's the database - the data that you have. And there's a way that some facial recognition solutions like ours can help a lot, which is that you don't need to have the actual picture of somebody's face in a database. What we do is create what we call a template, which is a small, encrypted file with a bunch of zeros and ones that maps vectors on someone's face. For every single one of them, there's roughly a one-in-a-million chance of two being mixed up together. So they're all very unique. Think of a thumbprint - the size of a thumbprint versus the size of your face in terms of precision - and it gives you an idea. So it's a tiny file, highly encrypted. It's very hard to reproduce, very easily identifiable with the right technology, so there's no mix-up of people. It's very rare when it happens. 

Richard Carriere: And technologies now don't have to send everything to the cloud and back. In the early days, you would take a picture, send the picture to the cloud, the cloud processing would extract the template, and a few seconds later it would come back, and so on. Next thing you know, it took a minute to recognize someone. Now, on very reasonable hardware - nothing space age - in half a second, you identify people in front of the camera. And that could be up to 70 people, I think, with a 1080p camera. Without getting too much into details - we identify there's somebody, this person, we read the template on their face, we match it against the other templates in the database, and we say, oh, this person is John. And we can also tell you in that half second that John is a man, he is between the ages of 45 and 50, and he seems happy. 

Richard Carriere: To answer the question - I took a big detour - there's nothing in the facial recognition here that invades privacy. If anything, it protects privacy, because there's no better means to control access - better than a password, better than a PIN, the eyes, whatever. But if the database that has the information about customers - let's say their Social Security numbers, bank accounts, the name of their mom and passport information - if this is not properly protected, then this is where the weakness is. And this is a problem that has existed for a long time. Several of us know - without naming anyone - that our private, personal information is probably everywhere on the dark web by now. So this is a false concern. And the best proof, again, is consumers - the convenience is there. They know that nobody will do worse than that hotel loyalty program or credit agency or whatever. So people go with it. 
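
A minimal sketch of the template idea described above: a face image is reduced to a numeric vector, and two faces match when their vectors are close enough. FaceMe's actual models are proprietary; the 512-dimension embedding, the 0.6 threshold and the stand-in "model" below are illustrative assumptions.

    # Minimal sketch of face templates: store vectors, not face images, and
    # compare by cosine similarity. The "model" is a deterministic stand-in
    # for a real embedding network; dimensions and threshold are made up.
    import numpy as np

    def make_template(face_pixels: np.ndarray) -> np.ndarray:
        """Stand-in for a deep model mapping a face image to a unit vector."""
        seed = abs(hash(face_pixels.tobytes())) % (2**32)
        v = np.random.default_rng(seed).normal(size=512)
        return v / np.linalg.norm(v)  # unit length, so cosine = dot product

    def is_match(t1: np.ndarray, t2: np.ndarray, threshold: float = 0.6) -> bool:
        """Two templates match when cosine similarity clears the threshold."""
        return float(np.dot(t1, t2)) >= threshold

    # Enrollment keeps only templates - never the photos themselves.
    enrolled = {"john": make_template(np.ones((112, 112)))}
    probe = make_template(np.ones((112, 112)))  # same face -> same template
    print(is_match(enrolled["john"], probe))    # True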

Dave Bittner: Yeah. You know, we've seen a lot of news stories about a lack of accuracy with some of these systems, particularly, as you mentioned early on, you know, having trouble with people of color or, you know, types of people who aren't necessarily well-represented in the training data. Where are we now? I mean, how do you as an industry put people at ease that we're beyond that? 

Richard Carriere: Yeah, well, there's one source that is highly credible and used as the benchmark, and that is NIST, the National Institute of Standards and Technology. Once every three months, each company - each vendor like us - can submit an algorithm or a number of algorithms, and they test and benchmark all of them. And they benchmark the accuracy. You know, is it working? Like, if you take your iPhone and 1 time out of 20 it doesn't work, it's because it has 95% accuracy. So they map the top players. We're at the top of the list, together mainly with companies in Russia and China that don't commercialize in the U.S. for all sorts of reasons. But we're talking about 99.8% - in fact, 99.73% - accuracy, which means it's very rare that it won't recognize your face right away. And that is measured independently by NIST. And they run all sorts of tests. They do what they call visa pictures - a face not moving. They have wild pictures on the side. They have all the different models, and they have tests for different ethnic groups. Is the accuracy different? 

Richard Carriere: As AI keeps improving, we keep making our models better. They integrate people with different, smaller or less visible aspects - whether it's ethnic background, skin complexion, in some cases even a handicap, the type of hairstyle in front of the face or not and, in our case, even masks. If somebody wore a mask - that was a big problem two years ago - you could not recognize them. The level of accuracy would go down very, very low. Now we are at a point where if you show up in front of a camera - FaceMe is on the other side - and you're wearing a mask properly over your nose and mouth, just the data points between the eyes and the nose bridge are sufficient to give an accuracy level of up to 98.9%, which is literally as good as without wearing a mask. And on top of that, in a hall or somewhere, if people are supposed to wear masks and they don't, we can identify and pinpoint the people for that. So all these things are not only improved by companies, but NIST measures them, ranks them, and it gives a pretty good idea of who is good and who is not so good. And we're pretty happy that we're good there. 
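
To make those accuracy figures concrete, here is a small sketch of how a benchmark along NIST's lines scores an algorithm: count the errors on same-person and different-person comparisons at a fixed decision threshold. The similarity scores below are invented for illustration, not NIST data.

    # Sketch of benchmark scoring: false non-match rate (genuine pairs that
    # fail to match) and false match rate (impostor pairs that wrongly match)
    # at a chosen threshold. All scores here are made up for illustration.
    def error_rates(genuine, impostor, threshold):
        fnmr = sum(s < threshold for s in genuine) / len(genuine)
        fmr = sum(s >= threshold for s in impostor) / len(impostor)
        return fnmr, fmr

    genuine_scores = [0.91, 0.88, 0.95, 0.58, 0.90]   # same-person pairs
    impostor_scores = [0.12, 0.33, 0.08, 0.61, 0.05]  # different-person pairs

    fnmr, fmr = error_rates(genuine_scores, impostor_scores, threshold=0.6)
    print(f"FNMR={fnmr:.0%}, FMR={fmr:.0%}")  # FNMR=20%, FMR=20%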

Dave Bittner: Is there collaboration among the companies who provide these sorts of things in terms of having some sort of code of ethics, you know, something along those lines? 

Richard Carriere: I have not seen anything. What I can tell you is that some of our competitors are doing things that we would not do. We try to be very respectful in every possible way. If you go to cyberlink.com, on our home page, one of the few items that show up at the bottom in company information is our statement of human rights. I don't know many companies that have that on their website. But it came at the beginning of our facial recognition work, where we said, hold on; if we're going to do it, we're going to do it at the highest level of ethics. Without sharing with you the ingredients to our secret sauce, when we build our models, we do nothing that is remotely questionable in terms of - are we using pictures of people who did not agree? Are we doing things that are twisted? So we start there. It's totally clean. 

Richard Carriere: And on the applications of the technology - we provide, first, a tool, a solution, that addresses these problems of bias and even spoofing. IEEE ran a contest at a conference on computer vision a few months ago. There were 265 companies that submitted their tools to try to detect spoofing attempts. We ended up No. 3, at the very top, just behind a Chinese and a Russian company. They were a fraction of a percentage better. But basically, it's identifying spoofing attempts - where you try to steal an identity, so you use a picture of somebody or a video on a phone. Or even, if you want to be more like thriller movies, like that movie of 20-plus years ago, "Face/Off," where you can have a mask... 

Dave Bittner: Right. 

Richard Carriere: ...Look like somebody else. And, like, we are very good at that. So that's what you have to look at. And whether it's us or other vendors, if you're an IoT company or a manufacturer, before signing with a technology, you'd better look at them very carefully - and as far as we know, they all do. So we see the ethics come from individual companies. It comes from the partners we have. We are strategically partnering with the chipset makers, the hardware makers, which are all companies many, many times - a few orders of magnitude - larger than we are, and they all have very strict codes of ethics. So whenever we partner with them around facial recognition, we have their legal office involved with us to make sure that we don't promote or do things that are questionable. And so far, we're passing this test with flying colors. So we feel good about that. 

Richard Carriere: I mean, could something happen - could one of our customers sell the technology to a company that does something like profiling people behind their backs without letting them know? At that point, you know, it gets very far removed from us, but we try to avoid all of these things. For anything that is in our control or within our reach, working with our partners, we are very careful about that. 

Dave Bittner: All right. Ben, what do you think? I mean, this is an interesting area, yes? 

Ben Yelin: Yeah. You know, it's kind of a theme of this show, that you can introduce something with such promise, but it always has, you know, significant pitfalls. I think we see that here. He talks about some of the advantages of this type of technology, especially for individual businesses. 

Dave Bittner: Yeah. 

Ben Yelin: So, you know, keeping patrons in stores longer, using artificial intelligence, identifying your most important customers, et cetera. But if it's going to be used to further enhance our surveillance state, then we really have to balance those two interests. So, yeah, the technology itself is extremely promising. It's cool. It's exciting. But we just, you know, always have to be on guard that it's not going to be abused by bad actors. 

Dave Bittner: Yeah, absolutely. You know, I think about, you know, back when I was in my teen years and I worked in retail, in a local mall, and sometimes, you know, mall security would come around with a printed-out sheet of paper that said, here's some pictures of folks who've been trouble in the mall lately, you know... 

Ben Yelin: Yeah. 

Dave Bittner: ...Have been shoplifting and things like that. And this is kind of taking that to the next automated level. But - and so I can see good and bad - I can see it being a necessary thing. If I'm a shop owner, I want to know - I want to keep an eye out for, you know... 

Ben Yelin: Right. Oh, that face is bad news - yeah. 

Dave Bittner: ...Known troublemakers. 

Ben Yelin: Right. 

Dave Bittner: But on the other hand, if it is in the realm of artificial intelligence, we've certainly discussed over and over again how there's all sorts of potential for false positives, and that's no good, either. 

Ben Yelin: Yeah. Let's just say, it's not always a foolproof system, to say the least. 

Dave Bittner: (Laughter) Right, absolutely. All right. Well, our thanks to Richard Carriere from CyberLink for joining us. We do appreciate him taking the time. 

Dave Bittner: That is our show. We want to thank all of you for listening. The "Caveat" podcast is proudly produced in Maryland at the startup studios of DataTribe, where they're co-building the next generation of cybersecurity teams and technologies. Our senior producer is Jennifer Eiben. Our executive editor is Peter Kilpe. I'm Dave Bittner. 

Ben Yelin: And I'm Ben Yelin. 

Dave Bittner: Thanks for listening.