Research Saturday 4.20.19
Ep 82 | 4.20.19

Undetectable vote manipulation in SwissPost e-voting system

Transcript

Dave Bittner: [00:00:03] Hello everyone, and welcome to the CyberWire's Research Saturday, presented by Juniper Networks. I'm Dave Bittner, and this is our weekly conversation with researchers and analysts tracking down threats and vulnerabilities, and solving some of the hard problems of protecting ourselves in a rapidly evolving cyberspace. Thanks for joining us.

Dave Bittner: [00:00:26] And now a word about our sponsor, Juniper Networks. Organizations are constantly evolving and increasingly turning to multicloud to transform IT. Juniper's connected security gives organizations the ability to safeguard users, applications, and infrastructure by extending security to all points of connection across the network. Helping defend you against advanced threats, Juniper's connected security is also open, so you can build on the security solutions and infrastructure you already have. Secure your entire business, from your endpoints to your edge, and every cloud in between, with Juniper's connected security. Connect with Juniper on Twitter or Facebook. And we thank Juniper for making it possible to bring you Research Saturday.

Dave Bittner: [00:01:13] And thanks also to our sponsor, Enveil, whose revolutionary ZeroReveal solution closes the last gap in data security: protecting data in use. It's the industry's first and only scalable commercial solution enabling data to remain encrypted throughout the entire processing lifecycle. Imagine being able to analyze, search, and perform calculations on sensitive data - all without ever decrypting anything. All without the risks of theft or inadvertent exposure. What was once only theoretical is now possible with Enveil. Learn more at enveil.com.

Vanessa Teague: [00:01:52] I think the first thing to say is this is joint work between myself, Sarah Jamie Lewis of Open Privacy Canada, and Olivier Pereira of the Catholic University of Louvain.

Dave Bittner: [00:02:04] That's Dr. Vanessa Teague. She's an Associate Professor and Chair of the Cyber Security and Democracy Network at Melbourne School of Engineering, the University of Melbourne in Australia. The research we're discussing today is titled, "Trapdoor commitments in the SwissPost e-voting shuffle proof."

Vanessa Teague: [00:02:21] Olivier and I have worked together a lot before, but we didn't know Sarah. Sarah had been looking at the code on the internet and had been tweeting a lot about it. Olivier and I have worked together on cryptographic protocols for verifiable elections a lot, and we started looking at the protocol. And so, we started collaborating with Sarah, because she had a really good understanding of the code, and we had a really good understanding of what those protocols should be doing. And so, I guess you could say the team of researchers came at it for different reasons, from different directions, but the collaboration worked out really well.

Vanessa Teague: [00:02:56] So, why were we drawn to the Swiss code in the first place? Because somebody put it on the Internet. (Laughs) Because it was there.

Dave Bittner: [00:03:02] (Laughs) That's what researchers do, right?

Vanessa Teague: [00:03:04] Exactly. It's interesting to me, because the same software company supplies voting software for elections in the Australian state of New South Wales. And New South Wales law says you're not allowed to share the source code, or you go to jail for two years. Whereas Swiss law says, roughly, everybody has the right to inspect the source code of the system, and that's an important thing for democracy. So, the Swiss system became, in practice, quite openly available. The code circulated quite openly on the Internet. And I guess that gave us some clues about what might be happening in the New South Wales system.

Vanessa Teague: [00:03:47] And in particular, when we reported the first thing we found, the authorities in New South Wales immediately said, oh, that applies to us too, actually...

Dave Bittner: [00:03:55] (Laughs)

Vanessa Teague: [00:03:56] ...But don't worry, don't worry, we'll have it fixed in time for the election next week.

Dave Bittner: [00:03:59] Of course.

Vanessa Teague: [00:04:00] Of course.

Dave Bittner: [00:04:00] Yeah. Interesting, interesting.

Vanessa Teague: [00:04:02] And then the next thing we found, they insisted it didn't apply to them. But of course, since the source code is completely secret, it's not really possible for anybody outside the electoral commission or the software vendor to assess that.

Dave Bittner: [00:04:15] So, take me through what we're talking about here with this Swiss e-voting software. Can you just sort of - for folks who aren't as knowledgeable about all of the encryption schemes, and all of that stuff, how would you describe it?

Vanessa Teague: [00:04:29] There has been in the scientific literature for a long, long time a notion called "end-to-end verifiability." And this can apply to an Internet voting system or to a polling place voting system. And the idea is that you use cryptography to give people evidence that the election has been properly conducted.

Vanessa Teague: [00:04:51] So there's typically two or three steps to that. The first step is, there's some kind of proof given to the voter that the machine they're using to vote really did cast the vote that they asked for. Because of course, it's going to cast an encrypted vote, so they can't see it directly. So, there's some fancy cryptography for proving to the voter that the machine they voted on really did cast the vote they wanted to.

Vanessa Teague: [00:05:13] And then the second part of it is some kind of process for making sure that the votes really got to the place they were supposed to get to, unaltered.

Vanessa Teague: [00:05:21] And then the third part of the process is proving the count - a little bit like a public paper count, where everybody stands around and watches the paper being counted. There are cryptographic protocols for providing evidence that all of the encrypted votes that came in were properly shuffled and properly decrypted, and put into the tally in exactly the form they arrived. And of course, that's quite tricky, because you've got to be careful to prove that the right collection of votes went in, without revealing how individual people voted.

Vanessa Teague: [00:05:55] So, this has been a thing in the academic literature for a long time, and there have been a number of university projects - you know, open-source, publicly-funded academic projects - implementing various protocols with these kinds of properties. There's also been efforts by industry to sell products that claim to have these properties.

Vanessa Teague: [00:06:15] Now, it's important to understand that this is sort of - this is inherently a transparency property, right? It's not necessarily claiming that the system is hard to hack, or hard to break in. It's showing you evidence of whether or not the votes have been changed. So, it's weird that a lot of the industry tends to tilt towards secret source code.

Vanessa Teague: [00:06:38] Now, it's not inherently impossible to provide genuine end-to-end verifiability with a secret system, because you could provide a very, very detailed specification that let people write their own verifiers, and so on. But, you know, really, I think anything that claims to be end-to-end verifiable but remains secret in its implementation is pretty suspect.

Dave Bittner: [00:07:01] Yeah, I mean it strikes me also that, if I were just counting paper ballots, I'd want to do that out in the open where people could see the process. If I said to you, hey, I'm going to count all these paper ballots, but we're gonna go back in a locked room where no one can see what we're up to...

Vanessa Teague: [00:07:14] Right.

Dave Bittner: [00:07:14] ...Well, that's not going to get a lot of confidence in the folks who are counting on this election.

Vanessa Teague: [00:07:19] Right. Exactly. And so, the whole purpose of this intellectual movement was to say, look, we can count votes electronically, out in the open. And yet, now we have an industry that goes, oh, no no, it's verifiable. Trust us, it's verifiable, but you don't get to see the source code.

Dave Bittner: [00:07:34] (Laughs)

Vanessa Teague: [00:07:34] So, this is a little bit disappointing for those of us who have watched on the academic side of this for a long time. So, the Swiss system is kind of interesting in being a little bit in-between. Because - so, the New South Wales system is totally secret, and you go to jail if you expose the source code. The Swiss system was, until now, secret - is my understanding - but was made available under this public intrusion test that they had, and that was covered by a non-disclosure agreement, which we didn't sign. But at least some Swiss people, I assume, felt confident enough under the law that I mentioned earlier - the one saying that everybody has the right to examine the system - to share the code much more widely. So, in practice, the code for the Swiss system was made very widely available to a very large number of people, and we were able to look at it without having signed onto any restrictions about what we could say about it.

Dave Bittner: [00:08:29] All right. So, you get to look at this code. What did you find?

Vanessa Teague: [00:08:32] We found that none of the proofs of integrity were implemented properly. Which is not surprising, really, considering that they hadn't really been made available for open review before, and that nobody selling this kind of software really has an incentive to do it right.

Vanessa Teague: [00:08:46] So, we found a number of completely independent serious problems. The first thing we found was in the shuffle proof that is meant to prove that the people shuffling the votes before they get decrypted haven't either added or subtracted or changed any of the votes. So, this is as fundamental as it gets, right?

Dave Bittner: [00:09:11] Yeah.

Vanessa Teague: [00:09:11] That - this is the point that's analogous to the point where you've got a whole lot of paper votes in a big ballot box, and you're shaking the ballot box up to protect privacy before you tip it out on the table and count them. So, this process of shaking up the ballot box and electronically shuffling the encrypted votes was supposed to come with a zero-knowledge proof that none of the votes had been changed. Because obviously, if there's no proof that the votes haven't been changed, then there's the capacity for the authorities who are running that election or the software vendor who provided the software, or the janitor who broke into the computer while nobody else was around, to manipulate the votes.

Vanessa Teague: [00:09:53] We looked at the proof, and we found that it contains a trapdoor. And the reason that it contains a trapdoor, is that it's based on a commitment scheme that uses some parameters. A lot like the - many of your listeners will remember the Dual_EC_DRBG disaster, right? In which some parameters just appeared in this random number generator and nobody really knew where they came from. But if you know the discrete log of one parameter with respect to another parameter, you can manipulate the system.

Dave Bittner: [00:10:25] Now, before we go further, can you describe to us - what is a trapdoor? What's the function of that? Why is it in there?

Vanessa Teague: [00:10:30] There's some parameters that show up at the beginning of the shuffle proof, and if you didn't generate those parameters, then you have to complete the shuffle proof honestly. And there's sort of - there's a nice proof that you cannot cheat, as long as you didn't generate the parameters yourself. However, if you did generate the parameters - and in particular, if you know certain mathematical relationships among the parameters - then you can make it look like the shuffle proof is perfect, and it passes the verification test, but in fact, you can manipulate the votes.

Vanessa Teague: [00:11:03] So, the trapdoor is to know the discrete logarithms of some of those input parameters with respect to some of the other input parameters. And you can't compute that if somebody just gives you those parameters. But if you get to generate the parameters yourself, then you can generate them in a way that you know the trapdoor, and therefore you can cheat on the proof.
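
What Teague describes maps onto a Pedersen-style commitment, c = g^m * h^r mod p. Here is a minimal sketch over a deliberately tiny group (the parameters p, q, g and the trapdoor x are invented for illustration, not taken from the SwissPost code): whoever generated h while knowing x = log_g(h) can open a single commitment to two different messages, which is the ability a dishonest mixer needs in order to fake a shuffle proof.

```python
# Toy demonstration of a trapdoored Pedersen-style commitment.
# All parameters are tiny and invented for illustration only.

p = 2039                  # safe prime: p = 2*q + 1
q = 1019                  # prime order of the subgroup we work in
g = 4                     # generator of the order-q subgroup

x = 123                   # the trapdoor: the discrete log of h with respect to g
h = pow(g, x, p)          # honest parties should receive h with nobody knowing x

def commit(m, r):
    """Pedersen-style commitment c = g^m * h^r mod p."""
    return (pow(g, m, p) * pow(h, r, p)) % p

# Commit to message m1 with randomness r1...
m1, r1 = 42, 7
c = commit(m1, r1)

# ...then, using the trapdoor x, open the very same commitment to a different m2.
# We need m1 + x*r1 = m2 + x*r2 (mod q), so r2 = r1 + (m1 - m2)/x (mod q).
m2 = 99
r2 = (r1 + (m1 - m2) * pow(x, -1, q)) % q

assert commit(m2, r2) == c
print(f"commitment {c} opens to both {m1} and {m2}")
```

Without x, finding a second opening is as hard as computing a discrete log; with x, it is a one-line calculation, which is why it matters so much who generated the commitment parameters.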

Dave Bittner: [00:11:24] So, there's a legitimate reason for putting the trapdoor in there?

Vanessa Teague: [00:11:27] No, there's no legitimate reason for putting the trapdoor in there.

Dave Bittner: [00:11:30] Why was there a trapdoor in there, in your estimation?

Vanessa Teague: [00:11:32] This is the million dollar question. As usual, there's the obvious - there's two options: conspiracy and incompetence. Right?

Dave Bittner: [00:11:38] Okay. (Laughs) Yes, yes, yes. And we know the saying about that. Yes. (Laughs)

Vanessa Teague: [00:11:41] Yeah, exactly. (Laughs) Based on the quality of the rest of the code, I think the incompetence theory is highly plausible.

Dave Bittner: [00:11:47] OK. Fair enough.

Vanessa Teague: [00:11:49] So - but, of course, you know, if you were going to deliberately put in a trapdoor, you would put in a trapdoor that looked like it was entirely consistent with incompetence. Nobody knows, really. Well, I certainly don't know. The trapdoor is clearly there. It's clearly exploitable if you are the authority or the software vendor - we don't really know why it's there.

Dave Bittner: [00:12:07] And they're not saying.

Vanessa Teague: [00:12:08] Well, weirdly, actually, after I wrote a paragraph saying, look, we're - in no way are we accusing them of putting this in deliberately, it's perfectly consistent with a naive implementation of a complicated protocol, we're not saying they did it deliberately, and so on - they actually put out a press release saying, how dare they say this? It absolutely is not a naive implementation of the protocol.

Dave Bittner: [00:12:31] Oh my.

Vanessa Teague: [00:12:31] So, I don't know.

Dave Bittner: [00:12:32] That's interesting.

Vanessa Teague: [00:12:33] That's, nah, it's just, I think...

Dave Bittner: [00:12:34] (Laughs)

Vanessa Teague: [00:12:34] ...If anything, that reinforces the idea that they didn't really fully understand what they were doing. So, the important point is, you can't tell whether it was deliberately or accidentally inserted. It almost doesn't matter, right? Because it still could be used deliberately by somebody who did figure out the possibilities, even if it wasn't inserted deliberately. So, it's bad. And it also kind of raises the question of how it got through that many layers. It had been through a great number of supposed layers of internal assessment at SwissPost before it got put up for public analysis, and yet nobody seems to have noticed this absolutely fundamental weakness in the cryptographic implementation.

Dave Bittner: [00:13:16] And this was something that jumped out to you and your colleagues quite readily?

Vanessa Teague: [00:13:20] Yes, yes. And in fact, after we went public with it, we discovered that two other people had pointed out the same problem. So, it was...

Dave Bittner: [00:13:28] Ah.

Vanessa Teague: [00:13:28] ...You know, it was pretty obvious to people who knew what they were doing, and yet it had evidently not been properly checked before. It really reinforces the idea that code like this should be totally open.

Dave Bittner: [00:13:39] There's an element of this that involves the sophistication of the random number generators? Is that correct?

Vanessa Teague: [00:13:44] No, no, sorry. The reason I mentioned the Dual_EC_DRBG thing is that the trapdoor is very similar in nature to the Dual_EC_DRBG trapdoor. So it hasn't got anything to do with random number generation - it's just got to do with these parameters that pop up out of the middle of nowhere. And if you know the discrete log of one parameter with respect to another, then you can do stuff that you're not supposed to be able to do. That's all.

Dave Bittner: [00:14:08] I see. I guess I was getting at a couple of the cheating examples that were in one of the articles you sent over, and it said that weak randomness generation would allow the attack to be performed without explicit collusion.

Vanessa Teague: [00:14:21] So, if you're going to cheat in this way, the authority who's going to cheat needs to know the randomness that was used at the client's end - like the voter's end - to generate the encrypted vote in the first place. It's just part of the math of how you kind of make it work. There's plenty of ways to do that. For one thing, the same software company writes the client code and the shuffling code, so it's absolutely not out of the question that one entity could find out both of those things. But in fact, even if it was somebody who had just managed to break into the internal server, this same company has had issues with randomness generation in the past. And in particular, there was a bug in the Norwegian Internet voting system, in which a lot of the randomness generation at the voter's end defaulted to zero.

Dave Bittner: [00:15:09] Oh, really.

Vanessa Teague: [00:15:09] So, a whole lot of votes showed up at the Norwegian electoral authority that had all been completely trivially encrypted with zero randomness. They were all exactly the same.

Dave Bittner: [00:15:15] Wow.

Vanessa Teague: [00:15:15] Imagine, they're getting all these encrypted votes in, they sorted them, and when they sorted them, they got huge big chunks of thousands - of many, many votes that were all identical. So, that's not good.
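
The Norwegian bug Teague mentions is easy to picture with textbook ElGamal. This is a toy illustration (tiny invented parameters and a simplistic vote encoding, not the production code): with the client's randomness defaulted to zero, every ciphertext collapses to the pair (1, m), so identical votes become identical ciphertexts and the choices are readable in the clear.

```python
# Toy ElGamal encryption showing what happens when the client's randomness
# defaults to zero. Parameters and vote encoding are invented for illustration.

p, g = 2039, 4            # tiny toy group
sk = 777                  # stand-in election secret key
y = pow(g, sk, p)         # election public key

def encrypt(m, r):
    """Textbook ElGamal: (g^r, m * y^r) mod p."""
    return (pow(g, r, p), (m * pow(y, r, p)) % p)

votes = [5, 7, 5, 5, 7]   # candidate choices, naively encoded as small numbers

good = [encrypt(m, r) for m, r in zip(votes, [11, 23, 8, 31, 19])]
bad  = [encrypt(m, 0) for m in votes]   # randomness defaulted to zero

print(good)   # five distinct-looking ciphertexts
print(bad)    # [(1, 5), (1, 7), (1, 5), (1, 5), (1, 7)] - the votes, in the clear
```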

Dave Bittner: [00:15:27] When it comes to random number generation, in this day and age, is it understandable that it could be a weakness in a system like this, or are we past the point where that sort of thing should sneak in?

Vanessa Teague: [00:15:40] Good question. I mean, there's a lot wrong with this system that shouldn't have snuck in. And I mean, this system has had a very intensive review. Weak randomness generation - look, I mean, just about anything could go wrong with anything, really, couldn't it? I wouldn't necessarily say that we're past the point where any particular problem should ever arise.

Dave Bittner: [00:16:01] I guess what I'm getting at is that, if someone is doing this sort of work, are there standard toolkits you can, you know, grab off the shelf to reliably generate random numbers?

Vanessa Teague: [00:16:09] In a web browser, I don't know...

Dave Bittner: [00:16:11] Yeah.

Vanessa Teague: [00:16:13] I wouldn't necessarily want to say with great confidence that any particular thing is okay.

Dave Bittner: [00:16:17] It's not an easy thing to do, right?...

Vanessa Teague: [00:16:19] Absolutely not.

Dave Bittner: [00:16:20] ...I mean, true random number generation is complex.

Vanessa Teague: [00:16:22] It's very difficult. Absolutely. Okay.

Vanessa Teague: [00:16:24] Let me talk about the second thing we found. So, the second thing we found was, as part of the zero-knowledge proofs for decryption, part of what you have to do - so, there's an authority who knows the private decryption key, and that authority is going to decrypt the votes. Right? So, there's a bunch of encrypted votes coming in, and a bunch of decrypted votes going out, that everybody can read. But obviously, there's gotta be a proof that this authority really did properly decrypt the votes. Otherwise, they could just substitute whatever votes they like, and say, oh, this vote for my favorite party is the decryption of this ciphertext that I received. So, the system includes another zero-knowledge proof that is supposed to demonstrate that the plaintext vote really did come as a proper decryption of the encrypted vote. Right?

Vanessa Teague: [00:17:17] Now, unfortunately, that proof was broken too. Now, bear in mind there are only two proofs that are part of the shuffling and decryption process, and we broke both of them. So, the decryption proof is wrong, because of a reason that my colleague Olivier Pereira had already written a paper about. And it's kind of a complicated technical thing, but the short summary is that it's possible to run some parts of the proof backwards - to start off with the answer you're looking for, run some of the proof backwards, make up some of the inputs after you've generated the rest of the proof, and therefore end up with inputs that are not truthful.

Dave Bittner: [00:17:54] Hmm.

Vanessa Teague: [00:17:55] And then you end up with the proper proof that you've properly decrypted something, when in fact, you haven't done the proper decryption at all. Now, in this case, the only way we could actually demonstrate making it work was decrypting to a nonsense vote. And we couldn't figure out how to make a decryption that produced another valid vote - although we're not sure that it can't be done, we just couldn't quite see how it could be done. So, at this point, they were sort of saying, oh, well, look, our verification mechanism doesn't seem to be sound. We're not very happy about that. But it's okay, because we've never run that part of the system in any real elections. That was proposed for upcoming elections - I think it was May.
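
The decryption-proof issue is specific to the SwissPost protocol, but the general trick Teague describes - generate the proof first, then back-solve part of the statement - can be shown with a simple Schnorr-style proof in which the Fiat-Shamir challenge fails to bind the full statement. Everything below (the toy group, the hash choice, the nonzero-challenge tweak) is an invented sketch, not the actual SwissPost decryption proof.

```python
# Toy Schnorr-style proof with a weak Fiat-Shamir challenge: the challenge
# hashes only the commitment a, not the statement h, so a cheater can fix
# the proof first and compute a statement that makes it verify.

import hashlib

p, q, g = 2039, 1019, 4   # tiny toy group of prime order q

def challenge(a):
    # Flawed: the statement h is not included in the hash.
    # (Kept nonzero here so it is always invertible mod q in this toy example.)
    return 1 + int.from_bytes(hashlib.sha256(str(a).encode()).digest(), "big") % (q - 1)

def verify(h, a, z):
    """Accepts (a, z) as a proof of knowledge of log_g(h): checks g^z == a * h^c."""
    return pow(g, z, p) == (a * pow(h, challenge(a), p)) % p

# Cheating prover: fix a and z first, then back-solve h = (g^z / a)^(1/c).
a, z = pow(g, 555, p), 321
c = challenge(a)
h = pow((pow(g, z, p) * pow(a, -1, p)) % p, pow(c, -1, q), p)

assert verify(h, a, z)    # passes, although log_g(h) was never known to anyone
print("forged statement h =", h, "passes verification")
```

A sound Fiat-Shamir transform hashes the entire statement along with the commitment, which removes exactly this degree of freedom.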

Vanessa Teague: [00:18:33] However, it turns out, there's another part of the system that uses essentially the same proof. And that part of the system had been used. So, now, remember back to the list of things that we were talking about that had to be proven...

Dave Bittner: [00:18:48] Right.

Vanessa Teague: [00:18:47] When the person casts their vote in the Swiss Internet voting system, they send an encrypted vote, and they also send some - the Web browser also sends some other cryptographic stuff, which allows the authorities to produce some special codes that go back to the voter. So the voter can check whether their Web browser sent the right vote. You get a paper code sheet in the mail which says, you know, if you vote Green, expect the following code back. If you vote Red, expect the following code back, and so forth.

Dave Bittner: [00:19:20] So, a bit of a receipt.

Vanessa Teague: [00:19:22] Yeah, exactly. It's a lot like a receipt.

Dave Bittner: [00:19:23] Yeah.

Vanessa Teague: [00:19:23] So, you cast your vote, you wait for your codes to come back. If you get back the code for the candidate you expected to vote for, then that's some evidence that your - the machine you were using to vote on sent the vote that you wanted. Right?
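
As a rough model of the return-code idea (the derivation below is invented for illustration; the real SwissPost scheme is considerably more involved and ties the codes to the encrypted ballot on the server side): the voter holds a printed code sheet, the server sends back a short code for the choice it received, and the voter compares the two.

```python
# Purely illustrative return-code check. The derivation is hypothetical and
# much simpler than the real protocol; it only shows the voter-side comparison.

import hashlib
import hmac

def return_code(voter_secret: bytes, candidate: str) -> str:
    """Hypothetical short code derived per voter and per candidate."""
    return hmac.new(voter_secret, candidate.encode(), hashlib.sha256).hexdigest()[:4]

voter_secret = b"printed-secret-for-this-voter"
code_sheet = {c: return_code(voter_secret, c) for c in ["Green", "Red", "Blue"]}

# The server derives a code from the ballot it actually received and returns it.
code_from_server = return_code(voter_secret, "Green")

# The voter checks it against the row on the paper sheet for the candidate they chose.
print("matches my choice:", code_from_server == code_sheet["Green"])
```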

Dave Bittner: [00:19:39] Mm-hmm.

Vanessa Teague: [00:19:39] So that's all good, except that the flaw that we found in the proof applies to this section as well. So, again, we were able to show that it was possible for the voting client to send a nonsense vote in, and yet still manage to derive the correct return codes. So the voter was happy, even though in fact nonsense had been sent in instead of the vote that they wanted. So, at that point, that became a much more serious issue, because that part of the system had already been in use in Switzerland. And so, that potentially affects elections that have already happened.

Vanessa Teague: [00:20:18] Now, again, we weren't able to show that it was possible to send in a valid different vote - although we're not sure that it's not, we just couldn't quite figure out how to do it - what we were able to show was that it was possible to send in a nonsense vote and get the right return codes back.

Vanessa Teague: [00:20:34] And the Swiss authorities have now said, well, we've never received any nonsense votes in our elections, so we're sure that this attack hasn't happened, so we're not too worried. Now I don't - obviously, I don't know how many independent Swiss citizens checked that themselves, but it's quite plausible that they never have received any such votes, and that they would have been suspicious if they had.

Dave Bittner: [00:20:53] Mm-hmm. It's interesting to me, even if that hasn't happened, the news about this can cause an erosion of trust in the system, which, certainly here in the United States, is something we've been dealing with since our 2016 elections.

Vanessa Teague: [00:21:07] Absolutely. Well, in this case, an erosion of trust in the system is thoroughly justified. And in your case too, if you don't mind my saying so.

Dave Bittner: [00:21:13] So, what happens next? You published your research. What sort of feedback do you get? Are the folks who make these systems - have they been making changes? Are they collaborating with you? What's the state of things now?

Vanessa Teague: [00:21:27] The people who made the system put out a reasonably civilized press release, actually. There's two different uses of the same system, which very much complicates the thing. So, first of all, the SwissPost system - which is the system that was openly up for review - the Swiss authorities have made an announcement saying that they're not going to offer up the system for use in the next round of Swiss elections. That they're going to take some time to reassess the situation.

Dave Bittner: [00:21:55] Hmm.

Vanessa Teague: [00:21:55] So that's what's happening in Switzerland. In New South Wales, we know that they were using something very similar to the same system, because they admitted that they were affected by the first problem that we found. They denied that they were affected by the second problem that we found, which is somewhat implausible, since this affects exactly the same part of the code - not exactly the same part of the code, certainly the same segment of the process. But because they've never made their source code openly available for any kind of expert examination, it's impossible to say.

Vanessa Teague: [00:22:29] The New South Wales election ran the Saturday before last, and they're still counting the votes. We don't know whether the votes that came over the Internet are going to be substantially different from the votes that came over paper. We don't know whether there are going to be any seats in the Parliament that are very close, and therefore possibly dependent on the Internet voting system. So, we don't really know what's going to happen in New South Wales. It's possible that nothing will happen because nobody cares. But it's also possible that somebody will care, and will raise questions about the integrity of the electronic voting process.

Dave Bittner: [00:22:59] So, in terms of these electronic voting systems being in our future, what do you think is a good way for us to keep these things out in the open, and to head towards a state where we have confidence that the systems don't have these sorts of problems?

Vanessa Teague: [00:23:13] Vote on paper, I would say.

Dave Bittner: [00:23:17] Hmm.

Vanessa Teague: [00:23:17] Vote on paper. If you really can't vote on paper, then I don't think there should be any tolerance at all for closed-source electoral systems. Every time we look at something, we find something seriously wrong. This isn't the first time we've looked at an Internet voting system, and it certainly isn't the first time that scientists have looked at an Internet voting system. Every time we look at something, we find something seriously bad. And I think the challenge of voting security is really pretty much unique, in that there's a tremendous concentration of power in one place, the attacker model is incredibly strong, and the opportunity for one person to really alter the path of a whole country is pretty much unparalleled in any other scenario.

Vanessa Teague: [00:24:11] So, I think these systems should be treated with the greatest possible scrutiny. I don't think there's any excuse for deploying anything that hasn't had very broad, very open, very extensive public review. I'd be surprised if any system really does pass, after that very broad, very open, extensive public review. If it does, okay. But I suspect that at the moment, it won't. I just don't think we are able to build systems - if you really think about the likelihood that people are going to verify properly, the likelihood that there are no serious bugs in the code, and the importance of the task, I think we should be sticking to paper for now.

Dave Bittner: [00:24:57] Our thanks to Dr. Vanessa Teague from the University of Melbourne for joining us. The research is titled, "Trapdoor commitments in the SwissPost e-voting shuffle proof." We'll have a link in the show notes.

Dave Bittner: [00:25:10] Thanks to Juniper Networks for sponsoring our show. You can learn more at juniper.net/security, or connect with them on Twitter or Facebook.

Dave Bittner: [00:25:19] And thanks to Enveil for their sponsorship. You can find out how they're closing the last gap in data security at enveil.com. The CyberWire Research Saturday is proudly produced in Maryland out of the startup studios of DataTribe, where they're co-building the next generation of cybersecurity teams and technology. The coordinating producer is Jennifer Eiben. Our CyberWire editor is John Petrik. Technical editor, Chris Russell. Our staff writer is Tim Nodar. Executive editor, Peter Kilpe. And I'm Dave Bittner. Thanks for listening.