The CyberWire Daily Podcast 6.8.20
Ep 1104 | 6.8.20

Regional rivals jostle in cyberspace. Election interference and vulnerable online voting. Phishing for a competitive advantage. Reducing dependence on foreign companies for infrastructure.

Transcript

Dave Bittner: South and southwest Asian regional rivalries play out in cyberspace. Election interference could move from disruptive influence operations to actual vote manipulation. Someone is spear-phishing leaders in Germany's PPE task force. Nations move to restrict dependence on foreign companies in their infrastructure. Justin Harvey from Accenture on the thinking behind breach disclosure decisions. Our own Rick Howard on DevSecOps. And Washington state recovers some, but not all, of the unemployment funds lost to fraud.

Dave Bittner: From the CyberWire studios at DataTribe, I'm Dave Bittner with your CyberWire summary for Monday, June 8, 2020.

Dave Bittner: Regional rivals continue to expand their operations in cyberspace. Pakistani operators, whom Telangana Today describes as criminals, are said to be smishing Indian defense officials. Their aim appears to be data exfiltration, and the goal and the target set suggest a connection to espionage. Both India and Pakistan are said by the Eurasian Times to be increasing their cyber operational capability and doing so with the aid of allies - Israel in India's case, China in Pakistan's. 

Dave Bittner: As more information about the exchange of cyberattacks between Iran and Israel comes to public attention, an essay in Foreign Policy assesses those operations as indicating the future of warfare - increasingly conducted in cyberspace, especially at the lower end of the spectrum of conflict, and increasingly overt. Both recent operations hit civilian infrastructure. Iranian operators are said by Israel to have attacked water treatment and distribution systems. Those attacks are believed to have been unsuccessful, their effects mitigated by defenders. Israeli operators are believed, on the basis of apparently deliberate leaks from within the Israeli government, to have retaliated by crippling operations at an Iranian port. That the operations are becoming increasingly overt suggests not only a growing disinhibition in the offensive use of cyber tactics, but also that there's an emerging deterrence regime. 

Dave Bittner: Remote voting online has been used in some US states' primaries and may see some limited use in November's general elections. The New York Times discusses the risks this may pose for direct manipulation of votes by hostile intelligence services. They focus, of course, on Russian services. Delaware, West Virginia and New Jersey plan to use Democracy Live's OmniBallot platform, but researchers at MIT and the University of Michigan report that OmniBallot represents a severe risk to election security and could allow attackers to alter election results without detection. 

Dave Bittner: OmniBallot isn't new, researchers Michael A. Specter and J. Alex Halderman write. It's, quote, "long been used to let voters print ballots that will be returned through the mail," end quote. What's new this year, they say, is its use for returning ballots online. The three states are using it differently. New Jersey has decided to make online voting available to voters with certain disabilities, and it's treating that limited availability as a pilot that could be expanded if the need arises. West Virginia lets the disabled, military voters and West Virginia citizens overseas vote online with OmniBallot. Delaware is making the most expansive use of the system. As Specter and Halderman write, online voting will be an option for anyone who's sick, self-quarantining or engaging in social distancing, which as a practical matter includes close to everyone in the state. 

Dave Bittner: The researchers see four problems with the system. First, they conclude that OmniBallot's ballot return function cannot achieve either software independence or end-to-end verifiability. The system uses third-party services and infrastructure, including Amazon's cloud, with JavaScript executed from Google and Cloudflare. Either unauthorized third parties or Democracy Live itself could alter votes without being detected, and the threat could come from malicious insiders or from external attackers who've gained access. 
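To make that third-party script risk concrete, here's a minimal sketch in Python of the kind of integrity check - in the spirit of subresource integrity - that lets a site refuse to run a script whose contents have changed since a known-good digest was recorded. The sample script and function name are illustrative only; nothing here is drawn from Democracy Live's system or from the researchers' report.

# Illustrative only: compute a subresource-integrity-style digest for a
# script, so a page can refuse to execute third-party JavaScript that has
# been altered since the digest was recorded.
import base64
import hashlib

def sri_hash(script_bytes: bytes) -> str:
    """Return an SRI-style integrity value: 'sha384-' plus a base64 digest."""
    digest = hashlib.sha384(script_bytes).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")

# Hypothetical script contents. A real deployment would hash the exact file
# served from the CDN and pin that value in the page's script tag, e.g.
# <script src="..." integrity="sha384-..." crossorigin="anonymous">.
sample_script = b"console.log('hello, voter');"
print(sri_hash(sample_script))

With a pinned digest like this, a modified copy of the script served from a third party simply wouldn't execute; it's one narrow mitigation, not a substitute for the software independence and verifiability the researchers say the system lacks.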

Dave Bittner: Second, the version of the ballot marking mechanism that's being used in Delaware in particular sends the voter's identity and ballot selections to Democracy Live even if the voter opts to print the ballot and mail it in. This, the researchers say, needlessly places ballot secrecy at risk. 

Dave Bittner: Third, even where OmniBallot is used only to deliver blank ballots, the researchers find that the ballots could be misdirected or altered in ways that would cause them to be counted incorrectly. Election officials could mitigate these risks, but only by expending considerable effort and conducting rigorous post-election audits. 

Dave Bittner: And finally, in all cases, Democracy Live, the platform's corporate parent, collects a great deal of sensitive personally identifiable information. That information includes voters' names, addresses, dates of birth, physical locations, party affiliations and partial Social Security numbers. And when the system is used to submit ballots online, more comes in, including ballot selections and a browser fingerprint. The possibilities for misuse of this information are extensive and obvious. It could be used, for example, for targeting political advertising, equally rifle-shot accuracy and hitting targets for disinformation and so on. And the researchers point out that OmniBallot seems to have no privacy policy posted, leaving it unclear what, if any, safeguards may be in place. 

Dave Bittner: Secure online voting is a difficult problem, and it would be difficult to object to the goals with which states are planning to use OmniBallot. Enabling disabled citizens to vote, for example, is one, and anyone who's struggled to get even a mail-in absentee ballot during their military service can tell you that snail mail isn't exactly a day at the beach either. But, clearly, there are problems to be worked out, especially since this election and all elections for the foreseeable future are going to be held under conditions of opposition. 

Dave Bittner: IBM's X-Force reports that the PPE task force Germany's Health Ministry organized to facilitate procurement of personal protective equipment - items like masks - has been subjected to a phishing campaign directed against PPE supply chains. It may be the work of a nation-state intelligence service interested in gaining competitive advantage in the market. What kind of advantage, one might ask? Well, if you can cripple a competitor in a market, you might clear the field a bit and give yourself a better shot at getting scarce commodities at a knockdown price. There are other possibilities as well. There's a degree of overlap between executives connected with the task force and those connected with the development of COVID-19 vaccines and treatments. Intelligence about these may also be a goal. 

Dave Bittner: And, finally, we all know that the COVID-19 pandemic and the relief programs designed to ease people's economic pain have generated a great deal of fraud. How much? Well, it's not clear, but here's one indication. The AP reports that the U.S. state of Washington says it's recovered $333 million in fraudulent claims. That's a lot. But maybe that means they've made good their losses, right? Not so fast. The state's not sure just how much has been lost to fraud, but they think the total is somewhere between $550 million and $650 million. There are a lot of venti lattes in that margin of error, friends. And if the missing $100 bills were laid end to end, well, at a low estimate, that's the combined height of about 1,800 Space Needles. 

Dave Bittner: And joining me once again is Rick Howard. He is the CyberWire's chief analyst and chief security officer. Rick, always great to have you back. 

Rick Howard: Hey, Dave. How are you doing? 

Dave Bittner: Not bad, not bad. So in this week's "CSO Perspectives" podcast, you are digging in on DevSecOps. And in order to get there, you kind of have to take a trip through DevOps first, right? 

Rick Howard: Exactly right. And for this whole series, I've been working on a kind of infosec first principles discussion. What are the most important things that cybersecurity practitioners should be doing based on first principles? 

Dave Bittner: Right. 

Rick Howard: So DevSecOps has come up. And I was one of the original enthusiastic supporters of the DevOps idea (ph). And what got me really excited about this - do you know the Google story? You know how they got started? Have I told you this? 

Dave Bittner: I do - well, let's say that I do, but our audience doesn't. Let us know what it's all about, Rick. 

(LAUGHTER) 

Rick Howard: Nice setup. 

(LAUGHTER) 

Rick Howard: So back in 2004, when Google was nothing more than just a search engine, the leadership made this extraordinary decision. They handed the management of the network over to the development team and not to the traditional network managers that everybody else on the planet uses. So when you hand a task like that to a bunch of programmers, what do they do? Well, they program it, right? 

Dave Bittner: Yeah. 

Rick Howard: And so they've automated everything. The Google stuff is not just automated. It's an autonomous system - all right? - and that was a clear six years, OK, before we even had a name for DevOps. And when I read that, I was going, oh, my goodness, that is the thing. That is how the security practitioners will shift left in the design and deployment of new capabilities for our organizations because we'll automate everything and inject a layer of consistency as we deploy all of our things. 

Rick Howard: What's happened is that it has not really happened at all, all right? There's been a lot of resistance to doing that. And one of the reasons is it turns out that the network security community, we don't have a lot of coders. The ones that we do have, they're really, really good, OK? But most of us struggle with putting, you know, lines of code together to do anything useful. OK. So that's been one problem. The other problem is the DevOps people use a set of tools that we are not familiar with - you know, things like Puppet, Ansible, other really strange-sounding tool sets - whereas our folks prefer to use things like, you know, Python and C and, you know, those kinds of things. So we're kind of like oil and water, and our pursuit of this DevSecOps mission has not really come to fruition. So we'll be talking about that. And really, though, it's still the atomic thing we need to put on our infosec first principles wall because we still have to do it. We just have to get there. 
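As a purely illustrative example of the "automate everything and inject consistency" idea Rick describes, here's a minimal policy-as-code sketch in Python: security rules are declared as data and rendered into configuration by a script, so every deployment produces the same reviewable output. The rule fields and the rendered syntax are invented for this sketch; it isn't Google's tooling or any particular product.

# Illustrative only: a declarative security policy rendered into config by
# code, so deployments are generated, consistent and reviewable rather than
# hand-edited on each device.
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    name: str
    src: str
    dst: str
    port: int
    action: str  # "allow" or "deny"

POLICY = [
    Rule("web-in", "0.0.0.0/0", "10.0.1.0/24", 443, "allow"),
    Rule("db-in", "10.0.1.0/24", "10.0.2.0/24", 5432, "allow"),
    Rule("default-deny", "0.0.0.0/0", "0.0.0.0/0", 0, "deny"),
]

def render(rules: list[Rule]) -> str:
    """Render the declarative policy into a hypothetical device syntax."""
    lines = [f"{r.action} {r.src} -> {r.dst}:{r.port}  # {r.name}" for r in rules]
    return "\n".join(lines)

if __name__ == "__main__":
    # Same input, same output, every time the pipeline runs.
    print(render(POLICY))

The point is the workflow rather than the syntax: the policy lives in version control, and a rendering step, not a person at a console, produces what actually gets deployed.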

Dave Bittner: Yeah, I was going to say, whatever happened to Google? Hardly hear about them anymore. 

Rick Howard: (Laughter) They're a small up-and-comer. I think you should pay attention to them. 

(LAUGHTER) 

Dave Bittner: All right, will do. Will do. All right. Well, it is Rick Howard's podcast, "CSO Perspectives." You get that as part of your membership with CyberWire Pro. You can find that on our website, thecyberwire.com. Rick Howard, always a pleasure. Thanks for joining us. 

Rick Howard: Thank you, sir. 

Dave Bittner: And I'm pleased to be joined once again by Justin Harvey. He is the global incident response leader at Accenture. Justin, it's always great to have you back. I wanted to touch today on breach disclosure and get your insights on the factors that go into the decisions that companies make when it comes to disclosures, whether or not to disclose and the variables that go into those decisions. What can you share with us? 

Justin Harvey: Well, there are a lot of variables, but the main theme that I see in the press is always that X or Y organization did not disclose fast enough. And after having been embedded with our team doing incident response on a regular basis for our large customer base, I can tell you this actually becomes a very sticky and complicated point. I want to mention here that it may not be the right move at the right time to discuss a breach. There is one school of thought that says as soon as the C-suite understands that they've been hit or they've lost something, they need to go right to the regulator and go public with it. And that is actually counterproductive to the well-being and to the successful conclusion of an investigation. 

Justin Harvey: When you're running an investigation, you need to keep in mind that for the most part, particularly at the beginning, the adversary, A, doesn't know that you're in the environment, and, B, you still really don't know what the true impact of the cyberattack is or what it could be. So really, at the beginning, you're in a discovery phase. You're going out into the network and the endpoints and trying to see where the adversary is. Because if you take a knee-jerk reaction and say, well, we know that this adversary came in on this machine and they've moved to these three others, and it looks like there's some more, but we don't care about that - well, if you then go through an expulsion event, which is turning everything off, changing passwords and kicking the adversary out, they know you're onto them. And without truly understanding where the adversary has been and where they are, you don't know if you've plugged all the holes that they're going to use to get back in. 

Justin Harvey: And that is magnified when this becomes public because when it becomes public, the adversary will probably read it in their own news feed. Oh, OK. 

Dave Bittner: Right. You've tipped your hand. 

Justin Harvey: You've tipped your hand. The next thing is that when you tip your hand, it makes the adversary change behavior. And they understand that their main infrastructure has been burned, so they immediately move to their secondary infrastructure, which you don't even know about, and so on. 

Dave Bittner: Now, understanding that there are some regulatory issues here, does the situation ever arise where, for example, a company can go to a regulator and say, listen, this is what happened, this is what we know so far, and here are the reasons why we don't want to make this public yet? We're sharing this with you, but we think we have rational reasons to delay a little while. Are they likely to get a positive response from something like that? 

Justin Harvey: Absolutely. Let's not forget about the role of regulators. The regulators are there to develop a relationship with the regulated and to have an open dialogue and communicate and to be the oversight. Many people out there that don't work in the industry think that regulators are more like binary human beings, saying you're either in compliance or you're not, and if you're not in compliance, we're going to lift the lid and tell everyone, and you're guilty. That's not the way it really works. 

Justin Harvey: The way it works - the most effective means to this end - is having a great relationship with your regulators, which many global organizations do, and when something happens, being able to go to them very early on in the process, particularly with things like GDPR, and say, hey, regulator, we understand it happened on this date and we just discovered it - I don't know - yesterday or within 24 hours. This is what we have seen, the initial impact. We are not ready to go public with this because we don't know where else the adversary has been, and the number could be bigger - or, better yet, this was a SWAG for what we think the impact is, and it might actually be less with further analysis. 

Justin Harvey: And the problem is, if you don't give the regulators that sort of control, then when it does become public, the numbers may be artificially large. It's really difficult when you go public with, let's say, a hundred-million-individual breach, and then two weeks later you find out it was actually a million or a hundred thousand or a hundred that were stolen. No one in the press ever remembers that, right? 

Dave Bittner: (Laughter). 

Justin Harvey: They're like, oh, that hundred million - no, no, no, you said a hundred million. 

Dave Bittner: Right, right. 

Justin Harvey: And when you do have to come back and say, well, yeah, we thought it was a hundred million, we went public too early, and it's really this smaller number - if they had just taken the time to get it right, they would've gone public with the right amount. 

Dave Bittner: Yeah. All right, interesting insights and words of wisdom there. Justin Harvey, thanks for joining us. 

Justin Harvey: Thank you. 

Dave Bittner: And that's the CyberWire. For links to all of today's stories, check out our daily briefing at thecyberwire.com. And for professionals and cybersecurity leaders who want to stay abreast of this rapidly evolving field, sign up for CyberWire Pro. It'll save you time and keep you informed. Listen for us on your Alexa smart speaker, too. 

Dave Bittner: Don't forget to check out the "Grumpy Old Geeks" podcast, where I contribute to a regular segment called Security, Ha. I join Jason and Brian on their show for a lively discussion of the latest security news every week. You can find "Grumpy Old Geeks" where all the fine podcasts are listed. And check out the "Recorded Future" podcast, which I also host. The subject there is threat intelligence. And every week, we talk to interesting people about timely cybersecurity topics. That's at recordedfuture.com/podcast. 

Dave Bittner: The CyberWire podcast is proudly produced in Maryland out of the startup studios of DataTribe, where they're co-building the next generation of cybersecurity teams and technologies. Our amazing CyberWire team is Elliott Peltzman, Puru Prakash, Stefan Vaziri, Kelsea Bond, Tim Nodar, Joe Carrigan, Carole Theriault, Ben Yelin, Nick Veliky, Gina Johnson, Bennett Moe, Chris Russell, John Petrik, Jennifer Eiben, Rick Howard, Peter Kilpe. And I'm Dave Bittner. Thanks for listening. We'll see you back here tomorrow.