Caveat | Ep 185 | 8.31.23

Compliance can't wait.


Igor Volovich: Compliance has always been meant to be a tool of risk management, right? We create these standards, these frameworks that represent risk models, that represent threat surfaces. And so there is no shortage of guidance for how to manage risk based on your specific threat and risk profile. And yet, because of compliance's position as a historical reporting function, we've never managed to do it in real time. So compliance has always been looked at as security's poor cousin.

Dave Bittner: Hello, everyone, and welcome to "Caveat", the CyberWire's privacy, surveillance, law, and policy podcast. I'm Dave Bittner, and joining me is my co-host, Ben Yelin, from the University of Maryland Center for Health and Homeland Security. Hi, Ben.

Ben Yelin: Hello, Dave.

Dave Bittner: Today, Ben shares the story of a federal judge dismissing a lawsuit from the Republican National Committee against Google. I've got an opinion piece from the New York Times making the case that social media platforms should provide algorithmic choice. And later in the show, my conversation with Igor Volovich from Qmulos. We're discussing how compliance can't wait for the government to find alignment on security and risk. While this show covers legal topics, and Ben is a lawyer, the views expressed do not constitute legal advice. For official legal advice on any of the topics we cover, please contact your attorney. All right, Ben, we've got a lot to cover today. Why don't you start things off for us here?

Ben Yelin: So my story comes from the Washington Post. It's about a federal judge throwing out a lawsuit from the Republican National Committee, which accused Google and their spam filters of political bias. So basically, what was happening is Google obviously filters emails. It only puts emails in our inbox that it thinks we want to see. So it has its own algorithm that determines what is and is not spam. And it's not always perfect, but it does a pretty good job.

Dave Bittner: Yeah.

Ben Yelin: I mean, if you ever check your spam folder, I think most of what you'll see in there is legitimately spam.

Dave Bittner: Yeah, I consider spam to be mostly a solved problem. Like you rarely see spam surface to your view if you're using one of the major providers.

Ben Yelin: Right. Exactly. So one of the things that's interesting here, before we get started, is that the RNC wasn't using Gmail as its provider. They were using a different service. But a lot of their constituents, a lot of people that have donated money to Republicans in the past, have Gmail accounts.

Dave Bittner: Sure.

Ben Yelin: So as you know, if you decided to donate to John McCain in 2008, for the next 15 years, you've received billions and billions of emails. You're put on the list.

Dave Bittner: That is a bipartisan peril, right?

Ben Yelin: Absolutely. No question about it.

Dave Bittner: Right, right.

Ben Yelin: So both parties have these lists, candidates purchase lists. You get 20 text messages, especially towards the end of fundraising periods, where it's like, I'm $1 short of my goal. Please donate now.

Dave Bittner: Right, right.

Ben Yelin: So Republicans started to become concerned that too many RNC solicitations for money were going into the Google spam filter. And they were trying to argue in front of a federal court that this was bad faith and that it was intentional, that Google was doing this for political purposes.

Dave Bittner: Okay.

Ben Yelin: So there are a couple of legal issues here. The first is Section 230. Google is immune from lawsuits if they can show that they were making a good faith effort to moderate content to enhance the experience of their users. That's kind of the bread and butter of Section 230.

Dave Bittner: Right.

Ben Yelin: And here, the Republican National Committee wasn't really able to assert with any type of particularity that Google was acting in bad faith. The one piece of evidence came from a study that was done by a university, I think it was North Carolina State, that showed that a disproportionate number of RNC emails were being sent to a spam folder, as opposed to Democratic emails. And they showed a bias that wasn't as pronounced when it came to other email providers like Yahoo, et cetera. But that was just one study. The author of the study said that the RNC had mischaracterized the study's findings, and the author said there's no way here to determine whether this is actually a conscious decision on the part of Google, or whether it was the magic of the algorithm happening.

Dave Bittner: So just so I'm clear here, the author of the study that the RNC used as their main piece of evidence disagreed with the way the RNC was characterizing the study.

Ben Yelin: Exactly.

Dave Bittner: Okay.

Ben Yelin: They basically said, our study doesn't say what you are claiming it says.

Dave Bittner: Okay.

Ben Yelin: We are not, as the authors of this study, asserting that Google has acted in bad faith, or that they are biased against conservatives. We are instead just noting that a significantly higher proportion of RNC emails relative to DNC emails, or similar Democratic emails, are being sent to spam filters.

Dave Bittner: Okay.

Ben Yelin: Now this is a little Easter egg for our law nerds out there, but we had a couple of big civil procedure cases in the mid-2000s, Twombly and Iqbal. Everybody remembers them if you were in civil procedure. And what they basically say is, when you are filing a complaint in a civil suit, you have to have more than just definitive statements saying that the other party broke the law in some way. You have to offer some type of compelling evidence. It doesn't have to be perfectly compelling evidence. You don't have to put all the pieces of the puzzle together. But you need something more substantial than, you should let this lawsuit go forward because we think that Google is biased against conservatives. You need some type of evidence.

Dave Bittner: Yeah.

Ben Yelin: And there just is not that evidence here. There is no evidence besides this one study that Google actually was acting in bad faith. And because they have that Section 230 protection, the RNC would have needed some type of evidence, maybe contemporaneous emails from Google employees saying, hey, let's throttle RNC emails for our own political benefit. That just doesn't exist. And Google was actually pretty helpful. So they held a training session, Google did, with the Republican National Committee, basically saying, here's how to write your email so that they don't automatically go into spam filters.

Dave Bittner: Right.

Ben Yelin: It was like a private, personalized session, I presume, at their Washington, D.C. headquarters. Apparently, according to the RNC, that didn't work. Their emails were still being throttled. And then Google stopped advising them when they realized that the RNC was going to be taking them to court. So it was probably a wise decision.

Dave Bittner: That seems reasonable.

Ben Yelin: This all kind of makes sense to me. It makes sense that this was dismissed. The judge here is a district court judge named Daniel Calabretta, a Biden appointee. The one thing I can't wrap my head around is he said that this was a close case, that it's possible or conceivable that it could have gone the other way. I have to admit, I'm not seeing it. For this to have been a close case, you would have needed one of two things. One, a site that's not protected by Section 230 -- and Google, as a provider here, is protected by Section 230. Or, given that it is protected by 230, some sort of evidence that Google was not acting in good faith, that they were legitimately politically biased. And there just isn't any evidence here. I mean, the one study that purports to show that evidence, according to the study's author, does not show that evidence. So that's why I'm a little perplexed that he said it was a close case. You could understand, under a different set of facts, if there were some piece of compelling evidence, that maybe this could have gone through the legal process, but I just don't see how that could have been the case given what we have here. And then the last thing is, you have this very important question invoked in the case about whether Google is a common carrier. You know, normally the judiciary can't just regulate private companies; private companies have a degree of control. But when we're talking about common carriers, certain laws that protect people -- civil rights laws, other laws that we've decided apply to things like railroad companies, telephone companies, et cetera -- do apply, because those institutions are what we call, in a legal sense, common carriers. What's interesting about this case is the judge here disclaims the notion that Google, through Gmail, is a common carrier.

Dave Bittner: Really?

Ben Yelin: Because when you're talking about the mail or a telephone company, they deliver every single message to your literal inbox or to your telephone without bias and without any type of filtering.

Dave Bittner: Right.

Ben Yelin: They are purely a delivery service. So I get every single piece of mail in my mailbox, whether it's a dumb political solicitation that I'm never going to donate to, or a personalized, very sweet letter from my daughter's teacher. Many thanks to the teachers who do that. And that's just not the case with Google. They have a function that filters spam. They always basically have had that function since Gmail was created. And Congress, through various laws, has expressed an interest in protecting people from spam.

Dave Bittner: Yeah.

Ben Yelin: So I just thought it was kind of an interesting finding in this case that Google doesn't count as a common carrier. I'm curious as to whether other courts are going to adopt that view. I think it makes sense to me. I just thought that was an interesting analysis here.

Dave Bittner: It is. So help me understand, is the RNC in a different category because it is political speech than, say, a provider of Viagra pills or something like that? When it comes to whether or not their message should have any sort of preference of being put in front of people, is it worse to send something that is political speech to a spam filter than a run-of-the-mill ad?

Ben Yelin: I think that's what the RNC is trying to argue. I don't think the court found that persuasive. What Google was saying is, we regulate spam from political entities the same way we regulate spam from any other commercial advertiser. We have our formula. We stick to the formula. It doesn't matter whether this is political speech. I'm not sure there should necessarily be a difference. Ultimately, this is not a political message, necessarily. All this is, is a solicitation for money. So the fact that it is a political solicitation doesn't seem to me to be that relevant in a legal sense. I think what the RNC was saying in their complaint was that the algorithm was discriminating against conservative viewpoints. And that potentially brings up some free speech issues, some issues with the Federal Election Commission on fairness in advertising, that type of thing. But the FEC has dismissed complaints similar to the one filed in this federal court. So they don't see it as much of a problem, or at least this version of the FEC does not. And Google has simply stuck to their same talking point: that they have invested in spam filtering technologies, and that their goal has been and always will be protecting people from unwanted emails while allowing senders to reach the inboxes of users who want their messages. And their algorithm isn't going to change just because some entity here got butthurt, basically.

Dave Bittner: Well, suppose you're the RNC and you want to pursue this. Because my understanding is the judge in this case hasn't completely closed the door to them, right? It says in this article, the judge granted the RNC a chance to amend the lawsuit. If you're the RNC, what path would you take here to try to make your case more compelling? Do you look for more studies? More facts?

Ben Yelin: Yeah. I mean, that's really their only path here. I think the judge here is allowing the RNC to refile if they develop some compelling evidence. Again, it doesn't have to be 100% compelling, but it has to be well pleaded as something that has a foundation in fact. So the RNC would need to find, I think, some piece of new evidence that shows this level of bias. It could be that there are more studies beyond this one, and it could be that there's an academic study that does come to the conclusion that Gmail was purposefully and in bad faith throttling messages from the Republican National Committee. I think that might change the result of this lawsuit. But that evidence just isn't out there right now. So until that evidence exists, they're just not going to have an opportunity to argue the merits in a federal court.

Dave Bittner: Right.

Ben Yelin: It's possible that they themselves could commission a study, or one of their preferred academic institutions could. You know, there's a lot of conservative institutions -- the Hoover Institution, the Heritage Foundation. They could do a study and try and come up with that same conclusion. It would be up to the judge, under the standards set in Twombly and Iqbal, whether there's enough evidence to allow the case to move to trial. What's ironic about all of this is that Twombly and Iqbal, which require cases to be well pleaded and grounded in facts, are generally an advantage to conservatives and conservative attorneys, because they keep a lot of frivolous lawsuits from trial lawyers out of court. But here, you know, they are kind of being hurt by the very judicial philosophy that they support. So there is some irony in that.

Dave Bittner: Yeah. Yeah, that's interesting. All right. Well, I suspect as we are on the snowball rolling down the hill that is the coming 2024 election season, we will only see more of this, right?

Ben Yelin: Yeah, I mean, to my mind, I just want Google, in a completely non-discriminatory way, to keep all of those solicitations for donations out of my inbox. I don't care if it's a radical socialist or some far-right entity, it's spam to me. And the reason Google knows it's spam to me is because in many cases, I've marked it as spam.

Dave Bittner: Right.

Ben Yelin: So I think what Google was saying is maybe it's not political bias. Maybe more of the recipients of these emails are reacting to these emails as if they're spam by archiving them or by marking them as spam.

Dave Bittner: Yeah.

Ben Yelin: And maybe that's just as logical of a conclusion as the RNC's hypothesis, which is political bias. So yeah, I'm all for keeping all of these emails out of my inbox. But I just don't think this lawsuit is necessarily the right way to go about it.

Dave Bittner: You'd think with as much as Google knows about each of us, right, and particularly if it's handling our email, that the spam filtering algorithms would have some of that knowledge come into play here. And maybe it does. And maybe it is. You know, if I am someone who is, you know, one of the top destinations in my web browser every day is Fox News, and I'm subscribing to all of the right-wing newsletters and all that stuff, you know, and that's who I am and it's what I believe in and it's the things I'm interested in --

Ben Yelin: Can we just cut that as a clip and put it out there without context?

Dave Bittner: Sure, sure.

Ben Yelin: Yeah.

Dave Bittner: Shouldn't that come into play with the spam filters? And like I say, maybe it does.

Ben Yelin: I think it does.

Dave Bittner: Do you?

Ben Yelin: So I get very few solicitation emails because I've marked them all as spam and I never open them and I never interact with them.

Dave Bittner: Right.

Ben Yelin: And I think that's probably what happened with RNC emails. It might be just the nature of their donor base that they're more likely to categorize these incoming emails as spam. Google uses some machine learning magic and realizes, hey, this person doesn't want to see these emails. Let's send them right to the spam folder. Maybe incidentally that means that more RNC emails go to the spam folder than DNC emails. And that seems to be a valid hypothesis to me. Now, again, I don't have proof of that. I haven't done the studies myself. But I think what the RNC is trying to say is the only possible answer is political bias on behalf of Google and its engineers.

Dave Bittner: Yeah.

Ben Yelin: I'm not a Google stan or whatever, but I just have a hard time believing that that's the case. They generally want to stay very far out of politics.

Dave Bittner: Yeah. I also wonder, like from a political fundraising point of view, does a spammy message have more success if it makes it in front of someone than a non-spammy message?

Ben Yelin: I've asked this question myself because to me they're all spammy.

Dave Bittner: Yeah.

Ben Yelin: And seemingly, yes. I mean, there's a reason they still send these emails.

Dave Bittner: Right.

Ben Yelin: Like, hi, this is Donald Trump. I personally saw that you didn't donate in July and I was very disappointed in you. Please donate in August.

Dave Bittner: Right, right.

Ben Yelin: I mean, all of these seem incredibly spammy, but these committees keep sending them, which leads me to believe that they have some proof that it works.

Dave Bittner: Right.

Ben Yelin: Which is kind of unfortunate, to be honest, because eventually it might have kind of the opposite effect. People become so angry at receiving these emails that maybe they reconsider their voting decision. It seems like that has not happened because if that were the case, then the RNC and the DNC would stop sending us all these emails, but I don't think we're there yet.

Dave Bittner: Well, and emails cost virtually nothing to send.

Ben Yelin: Exactly. So even if there's an extremely small return on your investment, you still make the investment because, yeah, the marginal cost of sending an email, even if you're using some type of expensive service, is extremely low compared to any donations you might get as a result.

Dave Bittner: I also wonder if just, you know, having this lawsuit at all to put this out in public and put the pressure on Google, is the RNC rolling the dice here and saying that, you know, maybe this leads to some executive at Google saying, all right, look, this is a pain in the butt. Can we just turn some of the dials behind the scenes and just let this stuff through? Because I'm tired of dealing with this. And then the RNC gets what they want, right?

Ben Yelin: I think Google kind of tried to do that by holding this special session with the RNC, saying, hey, we're willing to work with you guys. To me, that's evidence that they didn't have any intention of throttling these emails. It's, let's work together. We can try and figure out ways that more of your emails aren't going directly into spam folders.

Dave Bittner: Right. This is how our system works. And if you want to make it through the spam filters, here's how you do it. Here's what to do and what not to do.

Ben Yelin: Exactly. And the fact that that didn't work, I mean, there's the other possible conclusion where this is a broader ideological argument on the part of conservatives. And I think in some cases there's certainly merit to it, that big tech is biased against conservatives.

Dave Bittner: Right.

Ben Yelin: And if you look at the voting records of most big tech executives, save for somebody like Elon Musk, they generally are, at least socially, pretty liberal.

Dave Bittner: Right.

Ben Yelin: So I think this lawsuit is kind of part of a broader political argument. I'm not sure they ever expected to actually succeed in the lawsuit, which would have required Google to compensate the RNC for any lost revenue that would have been gained through those emails, which would have been impossible to estimate, by the way.

Dave Bittner: Right. Right. All right. Well, I mean, we'll see. The RNC states in this article that the fight is not over, but we'll see if it's something that they can continue to pursue or move on to something else, perhaps.

Ben Yelin: For sure.

Dave Bittner: Yeah. All right. Well, we will have a link to that article in the show notes. My article this week comes from the New York Times. This is actually an opinion piece written by Julia Angwin. She is an opinion writer and investigative journalist. And the crux of her argument here is that readers should have more choices when it comes to the algorithms that social media platforms are putting in front of them. Now, Ben, I will admit that this argument strikes my fancy.

Ben Yelin: Yes.

Dave Bittner: As we have talked about here many times, I am of the opinion that perhaps we need something for algorithms along the lines of the Food and Drug Administration: the same way we test pharmaceuticals, we should test algorithms, and first they should do no harm.

Ben Yelin: I mean, this is one of your longtime hobby horses.

Dave Bittner: Yes.

Ben Yelin: This is something you've cared about for a long time.

Dave Bittner: It is. And this is the argument that's being made here that on these platforms, basically, you have no control over what's put in front of you. And that's a problem. You could be stuck in a filter bubble where, you know, you go down rabbit holes. We've seen many cases where someone is curious about something, let's just say on YouTube, they're curious about something in passing, but then over a short period of time, they start seeing things from the most extreme representatives of whatever that thing is they were curious of.

Ben Yelin: Right. Like, you know, a lot of people are talking about Jordan Peterson. Maybe I'll watch one of his videos.

Dave Bittner: Right.

Ben Yelin: Big mistake.

Dave Bittner: And this, by the way, happens with all sorts of things regardless of your political preferences.

Ben Yelin: Oh, a hundred percent. Yeah.

Dave Bittner: And so what this opinion piece is making the case for is that we should be able to choose what kinds of algorithms are being put in front of us. But also, we should be able to choose that there's no algorithm. And some of this is happening. European regulators are demanding that these platforms have at least an option for their users that they could have an algorithm that doesn't use all the tracking information, that sort of stuff to feed back on.

Ben Yelin: Right.

Dave Bittner: I'm curious what your take on this -- obviously, we know how I feel about it. Do you feel like there's any momentum here that we could be heading toward this sort of thing?

Ben Yelin: I think there is. I mean, one promising development is Bluesky, which I have an account on. I haven't really used it yet. It's a kind of Twitter clone that was partially developed by Jack Dorsey, who founded Twitter.

Dave Bittner: Right.

Ben Yelin: And they allow people to build custom algorithms. So any developer can come in and create a custom algorithm that's fit to a user, and then users can choose among those custom algorithms. There's promise there. I mean, I don't know how big the numbers have been at Bluesky. I don't think it's still a competitor to Twitter or X or whatever we're calling it these days. But certainly, there's some promise there that they are trying to work through that. I think there is a future because there's going to be increasingly competition among these sites. And one way you could get a competitive advantage is allowing people to curate their own feeds and to really kind of micro-curate their own feeds. I thought what was really interesting here is the author of this article said they would love a librarian to be the curator of their social media feed to determine which news they would get, because libraries are great at curation and summarizing a vast variety of books, sources, et cetera, and getting it into something that would be a digestible reading list for somebody.

Dave Bittner: Right.

Ben Yelin: And I think there's opportunity there because there's going to be a market for it, which is good. I think the algorithms built by the services themselves, in some ways, even if we say we dislike them, we benefit from them. This is one of my hobby horses. I mean, am I always thrilled with the YouTube algorithm? No. But does it generally know which type of videos I want to watch and does it do it with reasonably decent approximation? Yeah. And it's nice that when I log into YouTube, I'm not presented with things that I'm not interested in, like cat videos, sewing.

Dave Bittner: Sure.

Ben Yelin: And I get football highlights, you know, and some music things that I'm interested in. I'll hide my own music tastes. So, I mean, it actually does do a decent job at curation, as frustrating as it can be sometimes. I think it's going to be a bigger problem on X. I can't believe we still have to call it that, but I'm going to do it.

Dave Bittner: The platform formerly known as Twitter.

Ben Yelin: Exactly.

Dave Bittner: Yeah.

Ben Yelin: I think it's going to be a bigger problem there because the Twitter algorithm, as this op-ed points out, is leading users into a pretty dark place. It's at least allegedly ideologically driven by Musk himself, who has kind of co-opted Twitter and designed it in a way to boost certain viewpoints that he considers beneficial.

Dave Bittner: Yeah.

Ben Yelin: And I think that's already started a backlash. I don't know if it's going to be enough of a backlash that Twitter is going to lose some of its user base to sites like Bluesky. I think it's kind of too early to say if that's going to happen at a large scale, but that's really the fear if I were one of the dwindling number of employees working for the artist formerly known as Twitter.

Dave Bittner: Right. You know, I think about when I left Twitter and I went over to Mastodon, which is a federated version of Twitter, let's say.

Ben Yelin: More decentralized, yeah.

Dave Bittner: Decentralized and it is non-algorithmic. There's no algorithm, there are no ads. What that means is you have to put in the work to choose what you want to be put in front of you. And I think in a way that's designing your own algorithm, because you're going out and you're finding interesting people who are interested in things you're interested in. So you can still use hashtags. So for example, I'm interested in cybersecurity. I can do a search for #cybersecurity and find things that have been posted with that hashtag. And then I can start to find interesting people who are posting interesting things. So I start populating my feed on Mastodon with other people who are doing their own aggregation and sharing. And over time I'm fine-tuning my timeline, basically building my own algorithm based on the people who I found who I think have interesting things to say. Does that require more work? Yes.

Ben Yelin: I mean, that was the original vision of Twitter, right? If you think about Twitter in 2009-2010, that's what it was.

Dave Bittner: Right.

Ben Yelin: It wasn't really algorithmically driven, it was hashtag driven.

Dave Bittner: Right. And then ads ruined everything.

Ben Yelin: Ads ruined everything, of course. They wanted to get our eyeballs in front of ads.

Dave Bittner: Right.

Ben Yelin: And they realized people were more engaged if you were fed certain types of content.

Dave Bittner: Yes.

Ben Yelin: So I would stay on the site if I was able to read angry tweets that confirm my ideological viewpoints. And that's the way the algorithm pre-Elon was designed. So yeah, I respect that potential. I think it does require more work on behalf of the user. I think if you're willing to put in that work, it will give you a better experience. And I'm glad that a site like Mastodon gives you an opportunity to do that.

Dave Bittner: Right. I guess the question is, is that method scalable or not? And nobody knows, right?

Ben Yelin: Right.

Dave Bittner: It functions more like email than a centralized platform like Facebook or Twitter. There are a bunch of servers, and there are good things and bad things about that, but it's different. And so I think it's an interesting experiment to see whether that sort of thing can take off, or whether it will collapse under its own weight. If it gets too big, too popular, people start trying to use it for all of the things that platforms like Twitter and Facebook end up being used for, and people start throwing ads at it. All of those things -- I guess you could call them benefits, or you could call them perils.

Ben Yelin: There are life cycles to these things.

Dave Bittner: Yeah.

Ben Yelin: I mean, Facebook and Twitter drew us in by being that.

Dave Bittner: Right.

Ben Yelin: We could curate our own feeds. Once we were roped in, they realized they could make a lot of money off of us through algorithms and getting certain advertisements in front of our faces. And it does eventually end up kind of ruining the platforms.

Dave Bittner: Yeah.

Ben Yelin: But it does make them more, at least in Facebook's circumstances, financially sustainable.

Dave Bittner: Right.

Ben Yelin: I don't think Twitter is financially sustainable, but at least they are trying to be through this type of work. So I hope that it is financially sustainable to have the type of site where you can curate your own feed. I'm not sure that it is. I really hope it is.

Dave Bittner: Yeah. It's heartbreaking to me what's happened over at Twitter as someone who really enjoyed that platform, both the media I consumed and the things I wrote and shared, to have that change in the ways that it has. And I guess this is a good lesson that nothing is unassailable. These huge platforms, things come and go. Things change. Society changes. Our interests change. Technology changes. And so things that we have come to count on -- I mean, think about it. There are people I know who have been lamenting the fact that they relied on a large part of their income from some of the connections and traffic and things that happened through Twitter that simply aren't there anymore.

Ben Yelin: Yeah, don't get me started on it. I mean, it's been a major -- I've felt a major loss from how bad it's become, even though I'm still on there.

Dave Bittner: Right.

Ben Yelin: And I get a lot of bot responses that are anti-Semitic. That's increased exponentially recently. I reported one of them that was blatantly anti-Semitic, and I was informed that the rules were not broken. And that's just kind of scratching the surface of the problems that exist there.

Dave Bittner: Right.

Ben Yelin: They've degraded some of the services. They've put some of the best aspects of Twitter behind a paywall. They at least temporarily had to impose rate limits, which ruined the efficacy of the platform. It's awful. It's awful. Yeah, and it breaks my heart.

Dave Bittner: Yeah, it's a shame. All right, well, we will have a link to that New York Times opinion piece in the show notes. It's an interesting read, well worth your time. Ben, I recently had the pleasure of speaking with Igor Volovich. He's from an organization called Qmulos. That's Qmulos with a Q. And our conversation centers on this notion of compliance and how we can't just wait for the government to find alignment on security and risk. Organizations really need to look out for their own interests here. Here's my conversation with Igor Volovich.

Igor Volovich: Well, I think it goes beyond the government. I think it goes to the industry overall. I think the problem is that compliance has always been treated, ever since we inherited it from our friends in audit and finance, as this historical reporting function. It's always been viewed as a lagging indicator, not a leading indicator, if that makes sense. And because of this weird positioning, compliance never really managed to live up to its expectations.

Dave Bittner: Right.

Igor Volovich: Compliance was always meant to be a tool of risk management.

Dave Bittner: Right.

Igor Volovich: We create these standards, these frameworks that represent risk models, that represent threat surfaces, and the kinds of controls you apply to mitigate those risks. And we create those frameworks and standards for different verticals, different technologies, different industries. And so there is no shortage of guidance for how to manage risk based on your specific threat and risk profile. And yet, because of compliance's position as a historical reporting function, we've never managed to do it real time. So compliance has always been looked at as security's poor cousin.

Dave Bittner: And what are the consequences of that kind of, as you say, historical approach?

Igor Volovich: Well, it doesn't get you onto the same timescale as the bad guys, right? The bad guys are not waiting for that compliance report. They're not waiting for your audit to come through. They're not waiting for your internal assessment or external assessment to happen. They are in your networks right now, real time. So they know exactly what your threat and risk and control posture is. But we, using traditional legacy compliance, we don't, right? So we're waiting for that cadence to come around, the next sweep, the next look at the control. And let's not forget, there's this inherent problem built into the traditional legacy compliance models: they're running on opinion, not fact. Let me put a finer point on that. A lot of how compliance has been structured is based on somebody looking at a control, making a decision whether that control passes or fails, and then passing that up the chain, right? So you have all these folks basically playing what I call human fax machine roles. They're just channeling this data around. They're forming opinions. It goes up a channel. It goes up a layer. Somebody else forms another opinion. There is some QA function built in, right? Quality assurance. Somebody looks at that control assessment and says, I agree, I disagree. And on and on it goes until it winds up on some executive report. Well, the process that I just described in about 20 seconds can take months, if not years. And I've had clients in the federal space who tell me, look, I have no choice. I have to split my environment into thirds. I look at one third the first year, the second third the second year, and the final third the third year, right? And at any given time, my time horizon on actually knowing what my control posture is, is about three years. That's if I'm lucky, if I'm doing everything perfectly well -- and by the way, this is perfectly legal, right? In the federal space, an ATO runs for three years, right?
The authority to operate. And you do reassessments and you do recertifications, sure. But you're running on these three-year horizons. So you're always lagging on some portion of your environment, sometimes a third of your environment, three years behind, right? The bad guys know everything real time now. We're looking three years behind. How could you possibly hope to manage risk in any real way?

Dave Bittner: So what has led us to this place? I mean, if this is less than ideal, why do we continue to function this way?

Igor Volovich: I think it's just status quo and organizational inertia. And when I say organizational, I mean at the level of industry, at the level of the nation state, we've just accepted that that's what compliance is. It's this onerous task that you have to get through once in a while. It's supposed to be not well-managed, it's supposed to be manually based, and, let's be very frank, it's supposed to suck. Compliance sucks. That's just what people have come to accept. And we throw a lot of bodies at the problem. And there are some perverse incentives at play too -- and this is not a big slag-off against the industry or all the folks who are doing the good, honest work of managing compliance across all these different fields -- but there are a lot of players in the game who depend on this manually operated model. And what I mean by that is, look, you have a federal environment that might have a couple of hundred ISSOs, information system security officers, and over them are ISSMs, the managers, and then over them are directors. And on and on it goes; there are a lot of people burning a lot of hours. And consider this: you've got 50 ISSOs, that's over 100,000 man-hours per year. Well, a lot of it is spent on shuffling paper around. A lot of these processes are paper-based. And even though over the last 15 or so years we've tried to do what's called compliance automation, what it's really boiled down to is workflow automation. And you have a lot of players in the game now who are really good at workflow automation, IT service management, folks like that, who looked at compliance and said, oh yeah, it's a lot of workflow. We already automate that. We'll just slap another label on it and call it compliance automation. A lot of the analysts are waking up to that.
From the industry perspective, analysts like Gartner and Forrester are talking about this, and folks like us are trying to educate the market, evangelize the idea of what actual compliance automation means. That means end-to-end, full lifecycle, not just picking up where the people left off. So there's a lot of misconception about what compliance is, and a lot of misconception about what compliance could be. And like I said, it's a lot of just accepting the status quo as the best we could ever make it. This is just the way it is and it's supposed to be this way.

Dave Bittner: Yeah, I hear a lot of folks talk about checkbox compliance, an obligatory sort of thing, as you say, after the fact. What about organizations who are trying to take it to the next level, to be both compliant but also have best practices in place? Are you seeing folks out there who are attempting that approach?

Igor Volovich: So this is the concept that I've come to call convergence, and that's what Qmulos calls the Converge Continuous Compliance Strategy. It's a vision for really converging compliance, security, and risk management. And the convergence point is dual. On the one hand, we're talking about trust and truth. We want to make sure that the data you have is actual data, not opinion -- moving the needle on the spectrum of opinion versus fact. Right now, the needle is firmly on the opinion side, again, because there's a lot of this manual analysis happening in these processes. We want to move it over to the fact side, which means we want to make it evidence-based. Because ultimately -- remember, again -- this is all for the purpose of managing risk at the enterprise macro level. We want to enable better decision-making with better data, which means, well, we actually have to have data, not opinion. So that's side one. The other is converging on the timescale. As I mentioned earlier, we want to make sure that we're operating on the same timescale as the bad guys, because we're trying to defend on the same timescale that the threats are coming in. So that means real time. And you can't throw more bodies at the problem to fix that. You have to solve that problem with automation, right? You want to automate as much as possible. There are control sets, obviously, like policies and processes and workflows, that you can't automate, right? Because they do require human intervention. But anything that's automatable should be automated. And that's one of the ways that we look at these programs, where we say, look, not only do I want to know about the control maturity posture, I want to know about your compliance program posture. What is the maturity there? How much automation are you actually employing? And I want to see end-to-end automation, not just workflow automation.
So these are some of the trends that I'm seeing out there. And certainly, it's something that we're evangelizing very openly. We're challenging the marketplace to think differently. We have a paper that we published last year called Rethinking Compliance, and also an Executive Guide to Compliance Management. We're trying to think of different ways to do compliance so that it actually delivers the security value it was meant to deliver, right, where it has always come up short.

Dave Bittner: What about on the government side? Is there any sense that they're open to innovation here? Are they trying to make things both easier and better for the folks that they oversee?

Igor Volovich: Certainly. We've had programs around. In fact, the founder of our company, Matt Coose, was a DHS executive who was one of the folks behind the creation of the CDM program -- Continuous Diagnostics and Mitigation. So the idea of continuous risk management, which to me is the convergence again with compliance: if you're using compliance the way it was intended, and you put it on the same timescale as other activities that we accept have to be in real time, like security operations, if you do that, you gain that risk mitigation value. You get the understanding of risk posture across your environment. So automation is key. And there's nothing new there. The Continuous Diagnostics and Mitigation program has been around for over a decade. Continuous ATO, cATO as it's known in federal circles, that's been around for a while. So we've certainly seen attempts to move toward continuous risk management, continuous control monitoring. That's another term of art that's emerging right now, becoming more prominent: continuous control monitoring, gaining that real-time visibility into your entire enterprise. So the intent has always been there, but the industry has been very slow to adapt because it hasn't been incentivized. Show me the incentives, I'll show you the outcomes, as Charlie Munger likes to say. And we've had some perverse incentives. We've known that too. As I mentioned, some of the federal systems integrators depend on these manual models, on the classic butts-in-seats revenue models, and that hasn't gone away. So until we start seeing different incentives, and we start seeing compliance being understood as part of competitive advantage -- CMMC is a good example. Within the DOD contractor space, we've got somewhere around maybe 80,000 contractors who will be held accountable for their security posture as a condition of even bidding on a contract, let alone executing a federal defense contract.
So getting there first, being able to demonstrate compliance sooner than your competitor -- or the flip side of that coin: if you win a bid and then get a protest because you're not actually able to demonstrate compliance, you might actually lose a bid you already won. So incentives are coming into play, but ultimately, unless it starts hitting folks in the pocketbook, you're not going to see a lot of movement, a lot of adoption.

Dave Bittner: Suppose I'm that security person who needs to make the case to my board of directors for changing up the way that we approach that. I'm in an organization that has been, as you described, reactive, kind of taking that checkbox approach. What is the upside for the organization to change the way we come at this?

Igor Volovich: Well, a number of things. One, of course, we already mentioned competitive advantage. It can position your enterprise -- and of course the executives as well -- as forward thinking, as security minded, as understanding risk, understanding that cyber risk is not just an IT problem, that classic legacy model, but actually a component of the overall enterprise business risk. That's something you want to demonstrate as much as possible to customers, to regulators, to enforcers. We're seeing folks like the SEC and the FTC, the non-traditional cyber enforcers, actually coming out and enforcing by precedent. They're creating regulation by precedent by going out there and saying, here's a letter that we want you to look at -- dropping letters on people's 10-Qs in response to insufficient reporting of material deficiencies in their general controls, extending the scope of what Sarbanes-Oxley 404 and 302 mean. Everybody focuses on the 404 controls. 302, that's personal accountability. That's what sent folks from Enron to jail, right? So these are the kinds of things we're starting to see again. So there's a lot of pressure, a lot of scrutiny. The sooner companies get there in their thinking and in their understanding of compliance as a tool of risk management, as a tool of credibly demonstrating where they stand on security, the better. And the SEC is a good example. If you're a public company, you definitely have to worry about it, because now you have to disclose, and financial institutions and institutional investors will be making decisions. They will be pricing your cybersecurity posture into the cost of your securities. And we've already seen studies showing that companies that have had public breaches have seen their stock prices depressed over a period of three to five years. So there's evidence now that, well, guess what?
Being bad with cybersecurity actually hurts your stock price. And that's pretty much what any executive has to care about on the public side.

Dave Bittner: Ben, what do you think?

Ben Yelin: I think it's really interesting. I mean, I think compliance can be annoying. It means you have to hire attorneys. It requires a lot of man hours, but there is a purpose to it beyond just not breaking the law. It's a way to manage risk. And I just thought that was a really interesting perspective.

Dave Bittner: Yeah, we always talk about the letter of the law and the spirit of the law, and I think there's a lot of wiggle room with compliance there. I guess different people come at it in different ways, and you get different outcomes depending on how you come at it. So it's a fascinating area. Again, our thanks to Igor Volovich for joining us. We do appreciate him taking the time. That is our show. We want to thank all of you for listening. A quick reminder that N2K's strategic workforce intelligence optimizes the value of your biggest investment, your people. We make you smarter about your team while making your team smarter. Learn more at Our senior producer is Jennifer Eiben. This show is edited by Trey Hester. Our executive producer is Peter Kilpe. I'm Dave Bittner.

Ben Yelin: And I'm Ben Yelin.

Dave Bittner: Thanks for listening.