Special Editions 8.1.18
Ep 24 | 8.1.18

Data-centric security. — Special Edition

Transcript

Ellison Anne Williams: [00:00:03] Well, because data's often the largest asset of an organization, all types of people have their eyes on the data, for exfiltration purposes, for stealing it, et cetera. So, you really have attack vectors coming at you from every possible angle.

Dave Bittner: [00:00:20] In this CyberWire Special Edition we take a look at data-centric security, focusing on the security of the data itself, rather than the surrounding networks, applications, or servers. To help us on our journey of understanding, we've lined up a number of industry experts. Ellison Anne Williams is CEO of Enveil, a company that's developed cutting-edge encryption techniques. Adam Nichols is principal of software security at Grimm, a cybersecurity engineering and consulting firm. Mark Forrest is CEO of Cryptshare, makers of secure electronic communications technologies for the exchange of business-sensitive information. And John Prisco is CEO at Quantum Xchange, a provider of what they claim is unbreakable quantum-safe encryption. Stay with us.

Dave Bittner: [00:01:06] And now some notes from our sponsor, Cylance. You remember the old song, "Thanks for the memories..." Well, sure, but no thanks for the memory-based attacks. This increasingly common class of cyberattack the experts at Cylance will tell you goes after memory, as opposed to more traditional targets, like file directories or registry keys. They usually start when a script or file gets into an endpoint without exhibiting traditional file features. Once they're loaded, they execute and use the system's own tools and resources against the system itself. If you go to threatvector.cylance.com, you can check out their report on memory attacks. That's threatvector.cylance.com. We're pleased to say they're not just sponsors of the CyberWire - they're the people who protect our endpoints. Visit cylance.com to learn more. And we thank Cylance for sponsoring our show.

Mark Forrest: [00:02:12] Something I've seen for many years - I've been in the industry now twenty-five years or more - is that one of our greatest vulnerabilities is to individual data users who lack an understanding as to the level of vulnerability they face with that data.

Dave Bittner: [00:02:26] That's Mark Forrest from Cryptshare.

Mark Forrest: [00:02:28] Whether that vulnerability is through, you know, direct security breaches that are intended for malicious purposes, or indeed, through agencies which they don't realize are using their data for purposes such as advertising or promoting products. And I think there's a very limited understanding amongst the general population, and among corporate users, about the way in which their data can be manipulated for criminal reasons or for commercial reasons.

Ellison Anne Williams: [00:02:56] So, typically, data security comes in three parts.

Dave Bittner: [00:02:59] That's Ellison Anne Williams from Enveil.

Ellison Anne Williams: [00:03:02] So, the first part is securing your data at rest on the file system. This is going to be your standard file-based encryption techniques. The second is encrypting your data in transit. So, securing your data when it's moving through the network - that's the second piece of the data security triad. And then finally, securing it when it's being used or processed. That's typically done by things like searches or analytics. Because data really only stays in three states within the organization: it stays resident on the file system or in some other storage technology, it's moving through the network, or it's actually being used or processed.

Dave Bittner: [00:03:39] And so, what are the vulnerabilities that are inherent with each of those states?

Ellison Anne Williams: [00:03:44] Of course, if you don't lock down your data on the file system, or encrypt it on the file system or in your storage technology, then you're leaving that wide open for an attacker just to come and take your files. If you don't lock it down in transit when it's moving through the network, then the same applies. So you leave it open for an attacker to come and take it off the network as it's moving through your environment, or as it's transiting from one location that you own to another. And then finally, of course, in use. So if you don't protect it as it's being processed, then you're leaving that whole processing layer open for all types of attacks.

Mark Forrest: [00:04:21] There are many points of vulnerability. We can't solve all of them all of the time, but we can take simple steps to solve some of them and do that quickly, and that actually minimizes the exposure that we have, both for significant data, by the way, and for stuff that we may consider trivial.

Ellison Anne Williams: [00:04:36] Traditionally, only two areas of the data security triad have been focused on by organizations. And that's encrypting or securing the data at rest when it's on the file system and securing it in transit when it's moving through the network. Why? Because those types of encryption that you use in those cases have been well-understood for a very long time. So, because of that, you have a lot of solutioning going on in the commercial market around those two areas, and a lot of choice for organizations. That third area of the data security triad has been far less solutioned over the years, because the area of encryption that really deals with that has been very computationally intensive for a very long time.

Dave Bittner: [00:05:18] Does that mean that it just hasn't been available for - it hasn't been practical to be used?

Ellison Anne Williams: [00:05:22] Correct. Correct. So, traditionally, the area of encryption that would apply to the usage of data and keeping that encrypted at all times during processing is a type of encryption called "homomorphic encryption." It's a special type of encryption. It's not new. It's been around forty or so years at this point, and it allows you to perform processing or operations on encrypted data as if it were unencrypted data. Historically, like I mentioned before, it's been very computationally intensive. A lot of work has gone into that field - in academia, in some commercial organizations and research labs - and only recently have we really seen breakthroughs in that space.

Dave Bittner: [00:06:04] Now, help me understand, because I think it's a difficult thing for many people to wrap their heads around, myself included, which is, you know, this notion of being able to do things to encrypted data without decrypting the data, and getting an answer, I suppose, from that encrypted data, while not revealing the original data. How do you go about explaining that to people?

Ellison Anne Williams: [00:06:27] It's certainly a mind-bender, you are correct, and it's really all math. So, what I tell people is, it sounds magical, it sounds impossible, but really it's just mathematics. So, there are some very strong, well-understood mathematical principles that underlie that ability, and allow all of that to move into the realm of, not only possible, but now practical.

Dave Bittner: [00:06:48] So, how do you prevent the original data being revealed by knowing the answer to the calculation that you're performing?

Ellison Anne Williams: [00:06:57] So, you make sure that, as it's being processed, what's coming out of that processing - say, if you're performing a search, the results of that search are encrypted as well. So there is never a point in time where your results for that search, for example, are in an unencrypted state, and then they've become encrypted, the way that you've traditionally seen things like data-at-rest encryption operating. With this special type of encryption, homomorphic encryption, as you process over the data, what's coming out of that processing are encrypted bits. And you can't tell what's being selected and what's not being selected.
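[Editor's note: To make this concrete, here is a toy sketch of additively homomorphic encryption using the Paillier scheme - one classic homomorphic construction; the transcript does not specify which scheme Enveil uses. This is a teaching example only, not a secure implementation: the primes are tiny and hardcoded. Multiplying two ciphertexts yields a ciphertext of the sum, so the sum is computed without ever decrypting the inputs.]

```python
import math
import secrets

p, q = 293, 433              # toy primes; real keys use primes of ~1536 bits
n = p * q
n2 = n * n
g = n + 1                    # a standard, simple generator choice
lam = math.lcm(p - 1, q - 1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # precomputed decryption constant

def encrypt(m):
    while True:
        r = secrets.randbelow(n - 1) + 1
        if math.gcd(r, n) == 1:       # r must be invertible mod n
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

c1, c2 = encrypt(41), encrypt(17)
c_sum = (c1 * c2) % n2       # multiplying ciphertexts adds the plaintexts
assert decrypt(c_sum) == 58  # 41 + 17, computed without decrypting c1 or c2
```

Note that `c1`, `c2`, and `c_sum` are all opaque ciphertexts; only the holder of the private key (`lam`, `mu`) can read the result, which matches Williams' point that nothing is ever exposed during processing.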

Ellison Anne Williams: [00:07:33] That means that now, they can make sure that their business processing of data is completely secure at all points during the processing lifecycle. So it opens up whole new worlds for things like secure cloud processing, so allowing people to migrate their most sensitive workloads and data to public cloud environments that are fundamentally untrusted locations, and have them processed there in a way that's completely trusted to that organization. Because, of course, nothing is ever decrypted as it's being processed in that cloud environment. It opens up whole new worlds around interacting with third-party data services and providers outside of their walls, in ways that stay completely secure and private to the organization. And of course, large implications for compliance with regulation, particularly under things like GDPR.

Dave Bittner: [00:08:20] Now, when you look towards the future, towards the horizon, what role do you see this sort of encryption playing?

Ellison Anne Williams: [00:08:27] Our goal is to see ubiquity in this. To make sure that that last gap of the triad truly is closed at all times when processing data, which, like I said before, is the most valuable asset, often, in the organization.

John Prisco: [00:08:40] Well, you know, we're using encryption protocols that are based on solving difficult math problems.

Dave Bittner: [00:08:51] That's John Prisco from Quantum Xchange.

John Prisco: [00:08:54] And if we look historically at our secret keys, RSA-type keys, we understand why they've gone from smaller numbers to larger numbers, and that's because the smaller numbers have been cracked. We're now up to RSA-2048. It will be very difficult to factor that large number into two prime numbers, but it won't be difficult to do that when quantum computers are available. The problem is, we can't simply say, oh, well, that problem isn't here, it's five to ten years away. I won't even argue about how long quantum computers will take - I'd ask instead: can you guarantee that your data won't be stolen? Because, if it's scraped today, it can be decrypted tomorrow.
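[Editor's note: Prisco's point about small keys being cracked can be illustrated with a toy example. The primes below are hypothetical 17-bit values, not real key material; a modulus this small falls to naive trial division in milliseconds, which is exactly why RSA moduli have grown to 2048 bits - infeasible for classical factoring, though Shor's algorithm on a sufficiently large quantum computer would change that.]

```python
def trial_factor(n):
    # Naive trial division: feasible only for tiny moduli
    f = 3
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 2
    raise ValueError("no nontrivial factor found")

p, q = 65521, 65519          # hypothetical 17-bit primes, not real key material
n = p * q                    # the public modulus
e = 65537                    # common public exponent
m = 42                       # the secret plaintext
c = pow(m, e, n)             # textbook RSA encryption

# An attacker sees only (n, e, c). Factoring n breaks the key:
fp, fq = trial_factor(n)
phi = (fp - 1) * (fq - 1)
d = pow(e, -1, phi)          # recover the private exponent
assert pow(c, d, n) == 42    # plaintext recovered almost instantly
```

The security of RSA rests entirely on the cost of that `trial_factor` step (or smarter classical algorithms); a quantum computer running Shor's algorithm would make factoring cheap at any key size.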

Mark Forrest: [00:09:46] There's no question that computers are getting faster, and the reality with any kind of encryption is that brute-force attacks on encryption services enable the bad guys or the governments to break down even the strongest encryption. There are two solutions to it, of course. One is you use more complex encryption algorithms, but obviously that's a never-ending story. You know, we've increased the strength of the algorithms considerably in recent years. And as computing power gets greater - and clearly with quantum computing we're seeing a kind of a stark leap in capability - that is something which we, you know, we can deal with and the, you know, the changing of the algorithm, to some extent.

Mark Forrest: [00:10:28] The other way is to consider how we do the encryption. The classic mode of PKI uses static key pairs, where you form a prior relationship and then you trust that the key pairing remains uncompromised. I think quantum computing challenges that assumption. And so, another way of dealing with this is to make sure you use a unique encryption key for every transaction that you make, so that if quantum-computing power is applied to your transactions, at least you know that every message and every package of data needs to be cracked independently - cracking one doesn't give an attacker all of them, all of your communications. And that approach to symmetric encryption, I think, has a fundamentally greater strength than the old asymmetric methods as used in PKI.
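[Editor's note: Forrest's per-transaction-key idea can be sketched as follows. This is a hypothetical illustration that uses an XOR keystream derived from SHA-256 as a stand-in for a real symmetric cipher - not secure, and a real system would use something like AES-GCM. The point it shows: each message gets a fresh random key, so compromising one key exposes only that one message.]

```python
import hashlib
import secrets

def keystream(key, length):
    # Expand a key into a pseudorandom byte stream (toy construction)
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key, msg):
    return bytes(m ^ k for m, k in zip(msg, keystream(key, len(msg))))

decrypt = encrypt  # an XOR stream cipher is its own inverse

messages = [b"wire $1M to account 42", b"quarterly forecast draft"]
packets = []
for m in messages:
    k = secrets.token_bytes(32)        # fresh 256-bit key per message
    packets.append((k, encrypt(k, m)))

# Suppose the key for packet 0 leaks: only that one message is exposed.
leaked_key, leaked_ct = packets[0]
assert decrypt(leaked_key, leaked_ct) == messages[0]
# The leaked key is useless against any other packet.
assert decrypt(leaked_key, packets[1][1]) != messages[1]
```

With static PKI key pairs, by contrast, one compromised private key can unlock every message ever protected under it - the asymmetry Forrest is pointing at.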

Ellison Anne Williams: [00:11:21] So, for us, from a quantum perspective, our encryption is modular, so we're able to adjust it, swap it out, be very open and transparent about it, which means that, not only can we take advantage of the latest mathematical breakthroughs in the special types of encryption that we use, but also we can make sure that the algorithms that we leverage are quantum-resistant.

John Prisco: [00:11:42] There are two approaches going on in parallel. I don't think they're competitive. I think they're complementary. One is the solving-difficult-math approach, but it's not simply making the number bigger. It's using what's called post-quantum cryptography - other mathematical approaches, that are not proven to be unbreakable, but are more difficult than simply making the number bigger.

John Prisco: [00:12:14] The other approach is to use a property of physics, which is the quantum-key approach. And that's where, instead of using large numbers and difficult math, we're using keys that are composed of photons, and photons that have a related quantum state that won't exist if someone eavesdrops on them. So, this is leveraging a law of physics that says if you try to observe or eavesdrop on a quantum key, you'll change that key in a profound way, and the key will become useless and will not be able to be used to decrypt the file. And that is proven to be unbreakable. So a combination of both of those schemes will probably get us to where we want to be.
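[Editor's note: The eavesdropping-detection property Prisco describes is the heart of the BB84 quantum key distribution protocol. The following is a classical toy simulation - real QKD involves physical photons, not random numbers - showing the effect: when an eavesdropper measures each photon in a randomly chosen basis, roughly a quarter of the sifted key bits disagree, so Alice and Bob detect the intrusion by comparing a sample of their bits.]

```python
import secrets

def bb84_error_count(n_photons, eve_listens):
    alice_bits  = [secrets.randbelow(2) for _ in range(n_photons)]
    alice_bases = [secrets.randbelow(2) for _ in range(n_photons)]

    # What arrives at Bob: (bit, basis) of each photon after the channel.
    channel = []
    for bit, basis in zip(alice_bits, alice_bases):
        if eve_listens:
            eve_basis = secrets.randbelow(2)
            if eve_basis != basis:          # wrong basis: measurement
                bit = secrets.randbelow(2)  # collapses the state randomly
            channel.append((bit, eve_basis))  # re-emitted in Eve's basis
        else:
            channel.append((bit, basis))

    bob_bases = [secrets.randbelow(2) for _ in range(n_photons)]
    bob_bits = [bit if bb == basis else secrets.randbelow(2)
                for (bit, basis), bb in zip(channel, bob_bases)]

    # Sift: keep positions where Alice's and Bob's bases matched,
    # then count disagreements. Any disagreement implies tampering.
    return sum(1 for a, b, ab, bb in
               zip(alice_bits, bob_bits, alice_bases, bob_bases)
               if ab == bb and a != b)

assert bb84_error_count(2000, eve_listens=False) == 0  # clean channel
assert bb84_error_count(2000, eve_listens=True) > 0    # Eve leaves ~25% errors
```

The simulation mirrors the physics claim: Eve cannot copy a photon she hasn't measured, and measuring in the wrong basis disturbs it, so her presence shows up as an error rate Alice and Bob can see.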

Dave Bittner: [00:13:13] And I've heard stories of - particularly, some folks have been talking about nation-states that have been gathering up data, sort of vacuuming it up and storing it, on the hope that in the future, even though they may not be able to get at that data now, maybe five, ten, twenty years down the line, they'll be able to get at it.

John Prisco: [00:13:31] I think that's the fear. And I really won't argue in favor of five, ten, or twenty years for the quantum computer, but if you just look historically at what's happened in the past year with Google and its 72-qubit machine, and IBM with its 50-qubit quantum computer, and Microsoft, and the Chinese, and so many people that are investing billions of dollars into the development of quantum computers - it's not an if, it's a when. People like Google are saying it's three years away. But whenever it is, we can definitely lose our data today, and it definitely can be stored cheaply and decrypted later.

Mark Forrest: [00:14:23] The practical issue we face with this is that explaining that methodology to people who are selecting technology is often a very difficult thing, because they need to invest time in understanding how you approach things differently, and it is something in which buyers have to invest time. I think the days in which you would simply snatch a solution from the marketplace based on the claims of a vendor - whether it's Microsoft or anybody else - are long gone. I think there needs to be a rather more forensic analysis of exactly what these tools do and how they work, and whether they match the use case of an enterprise. And I think that's a fundamental requirement for anybody buying encryption technology today.

Ellison Anne Williams: [00:15:02] The usage space is a new one, and we're certainly talking to a lot of people, raising awareness around that issue. I think the intelligence disclosures have shown that the type of attack surface that's very common from a nation-state perspective is now becoming very real in the commercial world. So, our idea is to make sure that people are aware that this is actually occurring, and that there is a solution to stop it and to close off those attack surfaces.

Ellison Anne Williams: [00:15:29] Traditionally, because attacking that usage gap was so difficult from an encryption perspective, people have really worked around the problem. And that comes in a lot of different forms. At one extreme, we see people simply calculating that they're willing to take that risk, of processing sensitive data in the open, as it were. Moving along the spectrum a little bit, we see people that have built various types of fences around the processing of data, whether that fence is extremely large - for an organization, something like a firewall - down to extremely small kinds of container security, or even enclave types of solutions.

Ellison Anne Williams: [00:16:09] The issue with those types of technologies, from a fencing perspective, is that, when you break in through the fence, which all attackers can do at some point, the data and the operations inside as they're being processed are completely exposed.

Ellison Anne Williams: [00:16:24] And then, finally, we see people now trying to apply the encryption as it's become possible to keep everything encrypted as it's being processed.

John Prisco: [00:16:34] I think a lot of these techniques are terrific. They're better than anything we've seen before. But standardization will come slowly. I know NIST has been requesting submission of algorithms. Back in November, I believe, they accepted a number of them. And the one thing that doesn't sit well with me is that, even though these algorithms are difficult math, they still haven't stood the test of time where, you know, many people try to break them.

John Prisco: [00:17:11] So, you know, combining an algorithm that NIST certifies with a quantum key, I think would be an ultra-safe approach. I think quantum things by themselves today are better than anything we have. And in fact, I believe they are unbreakable, because they're relying on a property of physics that's as immutable as gravity.

Adam Nichols: [00:17:38] We see a lot of different recommendations from people, from security experts, and they're generally based on anecdotal evidence.

Dave Bittner: [00:17:46] That's Adam Nichols from Grimm.

Adam Nichols: [00:17:47] So, there are things that they've seen in the past in a certain environment, and they worked well, so they're advocating them, you know, usually just in general. And we've seen some cases where people have been making these recommendations for years - like, for example, mandatory password changes. Years later, when someone actually did a study, where they split their users into two groups, and enforced this policy in one and didn't on the other, they found that people were more likely to choose poor passwords, and they also found that there was, like, a 17 percent chance that an attacker who knew one password would be able to guess the next one in five tries or fewer.

Dave Bittner: [00:18:27] So, in terms of coming at your security decisions in a data-centric approach, how do you establish a culture where that is the standard?

Adam Nichols: [00:18:38] It's an uphill battle right now, but I think that's probably going to change going forward. There's been a lot of people who are pushing metrics, and that way you can at least see, within an organization, how you're doing. People will bring this to the board, and the board is saying, you know, we're spending a lot of money on cybersecurity, can you show me that it's working? So, these are the kinds of demands that we're seeing from board members to the CISOs or CSOs. And now, basically, it's up to the boots on the ground to make this happen and figure out, how can we measure this? What makes sense in our environment? Is it the number of days that a computer is down? Is it the number of incidents that we've responded to? How do we measure this, and what can we do better?

Dave Bittner: [00:19:22] And how do you ensure that you're looking at the right types of data that will lead to the best results?

Adam Nichols: [00:19:29] The things that you need to pay attention to are the things that have the most impact to the business. If you look at the cost of an incident, like if a computer is down for one day, it costs this much money and, you know, that employee couldn't do their job that day. Things like that. You want to pay attention to what's important and measure that, because if you're just measuring the same generic, oh, how many password attempts were there on an account? Like, that doesn't actually tell you anything useful. You want to make sure that whatever metrics you're gathering are going to be useful, and the goal really is to kind of publish these things.

Adam Nichols: [00:20:03] Unfortunately, it's difficult to get corporations to publish these things, because they don't have any real incentive. Like, they've collected the data, they know the answer, and they know how good their science was in determining all of this, but they don't have a whole lot of incentive to share with anyone else. And the industry suffers as a result.

Dave Bittner: [00:20:22] Sometimes people will trust outsiders - a salesperson will come in, or a consultant, and they'll say, well, this is what you should be doing, and it sort of shifts the responsibility to that person without necessarily checking to make sure that they're giving you the right information.

Adam Nichols: [00:20:38] It doesn't actually shift any of the liability to that person. So, while you might have a scapegoat, at the end of the day, if there's an incident and money is lost or the customers don't trust your company anymore, that's still your problem. And you can point to, oh, well, you know, this expert told me that I should be doing, you know, whatever it is that they recommended. But you're the one holding the bag at the end of the day if you're the CISO.

Dave Bittner: [00:21:00] Quite often, when I talk to analysts, they'll say that sometimes they'll be prompted to chase something down because something just doesn't feel right. You know, they might not have the numbers in front of them, but they just have a feeling. Is there anything to that, when it comes to setting a security posture, or is that a blind alley that we go down?

Adam Nichols: [00:21:22] I think most of the time it is not a blind alley, and I think it's - well, I mean, you don't know what's going to be at the end of it. It might turn out to be a dead end. But at the same time, it might turn out to be a huge issue. And that's kind of where the industry is at right now, is that we don't really have the data that we need to figure these things out from first principles. So you kind of go with your gut, and that's what people have been doing for a long time.

Adam Nichols: [00:21:46] It works. It's, you know, better than just randomly guessing, like, oh, I guess I'll just make up some policy that they need to change your password every seven days, or that we won't install patches, you know, until they've been tested in a testing environment, or maybe we're just going to install them on a production live and hope that the patches were good. Having an expert to guide you is better than, you know, just kind of blindly choosing something. At the same time, it's not the same as, like, an actual proper scientific study.

Dave Bittner: [00:22:15] And so, what are your recommendations for people who want to adopt this approach? How can they get started, and how can they convince the higher-ups that this is a good way to invest their resources?

Adam Nichols: [00:22:26] From what I've seen, the higher-ups are pretty much already demanding you give them some kind of evidence that what you're doing is working. So, from what I've seen, it seems like they've already pretty much bought in, and saying, like, we're spending this much money, like, why don't we cut the budget 20 percent and see what the impact is? Or raise the budget 20 percent, like, what's going to happen? So I don't think it's going to be so much of a problem with them getting on board.

Adam Nichols: [00:22:52] It's more of a problem with people not knowing what to measure. And part of the reason is, like I said before, that people aren't really sharing this data. Like, when they collect data and figure out what's the best policy in their particular circumstance, or what's the best methodology, they're not sharing this. So some other company is like, well, where do we even start? This is one of the problems that we've been seeing, and academic papers address this to some degree, but things are moving pretty fast, and academic papers tend to make very small, incremental improvements. And we need to move a lot faster than that.

Dave Bittner: [00:23:32] And so, where can people go to get good information?

Adam Nichols: [00:23:35] So, for metrics, there's not a whole lot out there. You have, like, your standard sources of information, like NIST, and things like that. Based on anecdotal evidence - which is something, it's not just random - there's not really a whole lot of good sources.

Adam Nichols: [00:23:52] One of the things that my team and I are going to be doing is basically putting some of these recommendations to the test, and trying to disprove them - setting up a test environment and, say the recommendation is that you're safer if you have whitelisted applications, we'll basically run tests and try and figure out, well, is that true? Is there some way that we can break out of that? How much of the actual, just normal, off-the-shelf malware does that prevent?

Adam Nichols: [00:24:21] And then do the same thing with other things, like running as an unprivileged user. All the different kinds of recommendations that we hear all the time, we're going to try and put them to the test, and then publish our results so that people know, like, okay, this is effective, and here's the data to back it up. And it needs to be repeatable, so that other people can verify that this is in fact accurate, and if there's a problem with the testing methodology, people can come out and say, well, no, that's wrong because of this, and then we can say, yeah, you're right, we need to rerun the test and, you know, we'll have different data, and we can actually iterate. Whereas if people keep all these results secret, then it's going to be much harder to iterate and kind of improve as we go, as an industry as a whole.

Adam Nichols: [00:25:07] Cybersecurity is still a young industry by all accounts. And other industries like medicine or engineering - structural engineering - at one point, they were kind of doing it the same way, where everyone just kind of does their own thing and, you know, people might share information once in a while and sometimes they don't. But now that's pretty much the standard: you know, we've got these things figured out, and when something we've been doing for a long time is wrong, people do, you know, rigorous studies to try and prove it, and even then it takes a while for it to be accepted as fact. Like, someone else needs to verify it in another study, and so on. So, yeah, the inertia factor is definitely going to be a difficulty.

Adam Nichols: [00:25:48] People should know what they're getting. So, when they actually implement some policy or install some product, they should know something about it - like, how effective is it in practice? Those are the metrics that we generally don't have right now. There are ways of attempting to verify these - you have your network security assessments and things like that, to try and say, oh, well, this device was actually ineffective at stopping any of the attacks. We have that fairly well for products, to some degree. For policies, it's still kind of a Wild West, where you pick a policy and you're like, well, it seems plausible. It makes sense. Yeah, let's do it. And then that's pretty much the end of it.

Dave Bittner: [00:26:29] Until that bridge collapses.

Adam Nichols: [00:26:32] Exactly.

Dave Bittner: [00:26:33] Yeah.

Dave Bittner: [00:26:39] And that's our CyberWire Special Edition. Our thanks to Ellison Anne Williams, Adam Nichols, Mark Forrest, and John Prisco for joining us.

Dave Bittner: [00:26:47] And thanks to our sponsor, Cylance, for making this CyberWire Special Edition possible. And Cylance is not just a sponsor - we actually use their products to help protect our systems here at the CyberWire. You can learn more about how Cylance uses artificial intelligence at cylance.com.

Dave Bittner: [00:27:04] The CyberWire is proudly produced in Maryland at the startup studios of DataTribe where they're building the next generation of cybersecurity startups and technologies.

Dave Bittner: [00:27:13] Our coordinating producer is Jennifer Eiben, editor is John Petrik, technical editor is Chris Russell, executive editor is Peter Kilpe, and I'm Dave Bittner. Thanks for listening.