Research Saturday 6.12.21
Ep 187 | 6.12.21

Taking a look behind the Science of Security.


Dave Bittner: Hello everyone, and welcome to the CyberWire's Research Saturday. I'm Dave Bittner, and this is our weekly conversation with researchers and analysts tracking down threats and vulnerabilities, solving some of the hard problems of protecting ourselves in a rapidly evolving cyberspace. Thanks for joining us.

Adam Tagert: Every year we do a report on our activities from the previous year. You know, we find it is a good way to talk about and increase transparency of what is going on in the program.

Dave Bittner: That's Adam Tagert. He's a Science of Security researcher at the National Security Agency Research Directorate. The research we're discussing today is their 2021 Science of Security report.

Dave Bittner: Well, can we dig into sort of the philosophy behind it here? Why approach security from a scientific point of view? What's your goal?

Adam Tagert: So, our goal is to build up an academic discipline investigating the fundamentals of cybersecurity. So we're talking about developing theories, models, and having scientific evidence to help inform cyber. So, the reason we do this is a lot of times our gut intuition – our reactions to things – you know, it sounds right, but you really should dig in and do the study to figure out what is the best solution. One of the common ways to look at it is to think about – we're always asking, what is the best password you can make? And we've done plenty of studies on passwords, and often it's like, you know, make it longer, add special characters, do this. But the science says that that doesn't actually fit with how humans remember things. So you get all these other parts involved in it. So, you know, in that sense we want to really know the nature of how all these things fit together, so that when we provide advice and provide technology solutions, we are confident they're going to provide a good solution.

Dave Bittner: So, is this largely a matter of having a good amount of rigor behind the work that you're doing? Good scientific principles?

Adam Tagert: Absolutely. So, we're very much doing rigorous work, stating your assumptions, testing those assumptions, trying to validate what you're doing, so that way you're not just creating something that sounds right. Let's test it with the real world and see if that is the actual solution.

Dave Bittner: Well, let's go over some of the details here of the SOS initiative – the Science of Security Initiative – you sponsor several different groups that you call "Lablets." What is that about? Can you describe that for us?

Adam Tagert: Sure. Lablets are small virtual labs at leading American research institutions or universities. And the idea of a Lablet is we don't want to just create a good research lab. We want it to be multidisciplinary, so it's not just a computer science activity or an electrical engineering activity, but philosophy and psychology get involved and actually pull the hard questions apart and really dig into them. And the other aspect of the Lablet is that it brings in other institutions. So, we wanted to have not just one – or in our case, six – really great places. We want to have them bring in other institutions, other researchers, professors, graduate students together to really, you know, have those vibrant discussions and research and collaboration.

Dave Bittner: Hmm. So, being able to use, I guess, the scale that these universities and institutes bring to the table, their own resources, their own network of folks who can help with these hard problems.

Adam Tagert: Exactly.

Dave Bittner: Well, let's go through some of those organizations. Who is in the lineup, and are there any particular specialties from each of them?

Adam Tagert: So, yeah, so let's just go through it. We have Carnegie Mellon University out in Pittsburgh, and they're very much into scalability and composability, looking at, like, programming languages. And the other aspect is doing long-term human behavior studies, so we can get really new perspectives from that. We have the University of Kansas, and they have specialty work – they're very much into cyber-physical systems. And one of their big projects is trying to have computers be able to prove that they are what you think they are – not just who they are, but that they're running the right type of software and the right configuration and those attributes that you care about to secure a system.

Adam Tagert: We have a lab at the International Computer Science Institute, which is a research organization in Berkeley, California, and they're, of course, connected to the University of California, Berkeley. And they bring much more of the privacy aspect of it, because, you know, with privacy, you look at all these things and it's like, how is information flowing? What are people doing with your information? And they have quite rigorous resources to actually bring privacy policies into understanding contextual privacy, because it's not just what the information is, it's how it's going to be used. And that really changes how people perceive things. In addition, they have a really robust test bed where they test thousands of Android apps to see if they're actually following these privacy policies or doing other types of work.

Adam Tagert: We have a Lablet at NC State, North Carolina State University, and this Lablet is really working on, like, norms – what are the expectations of how information works. And one of my favorite projects, which they're working on in collaboration with the Rochester Institute of Technology, is they've been working in those collegiate competitions to get a better understanding of how attackers attack systems.

Adam Tagert: And finally, we have two more Lablets. We have Vanderbilt in Nashville. And Vanderbilt brings an expertise in cyber-physical systems. So all their research projects have a connection to those computer devices that connect the digital world and the physical world. So whether they're understanding how train control systems work, or how the power grid is influenced through information and then, you know, how the actual power comes together.

Dave Bittner: Mm-hmm.

Adam Tagert: And finally, our last one is the University of Illinois at Urbana-Champaign, UIUC. And they're very much looking at the resilience of systems, looking at how systems continue to work under compromise. You know, we're not in a stage where if something gets broken into, we can shut everything down, wait a few days, and try to recover everything anew. That just doesn't work in our world, where everything is dependent upon these computer technologies. So they're looking at those kinds of tasks, bringing in humans, bringing in uncertainty – because a lot of our models assume we know everything, but we really don't. So how do you bring those into your modeling of what's going on now?

Dave Bittner: Now, to what degree are the Lablets their own sort of silos, and to what degree do they interact with each other, if at all?

Adam Tagert: So, we really try to push them to interact with each other. Obviously, proximity within a Lablet creates good collaboration, but across Lablets, we have to work on it. We have them meet quarterly, where they can present on general themes. So, sometimes we'll do an empirical study day of presentations, so all the different Lablets talk about their work in that area and they can build up a more robust connection there. And then we have our annual conference to bring everyone together, and we also have a continuous virtual organization, which helps people collaborate consistently over the year.

Dave Bittner: Now, one of the things that you outline in the report is this notion of coming at what you describe as five hard problems. It's an interesting list. Can you take us through that and give us a little description of what you're after here?

Adam Tagert: Yeah, absolutely. So, the five hard problems – these were developed in collaboration between the Lablet leads and NSA, asking, what are the really hard, fundamental challenges we have in cybersecurity where we really need to make progress if we're going to transform how cybersecurity is done? We have five. Resilient architectures – the idea of working through compromise and being able to recover from it. We have secure collaboration, which is the challenge of having information move between devices and platforms and having it be secure and meet the objectives. We have metrics, which is that perennial challenge of trying to measure how secure something is, or to prioritize areas to focus your security.

Adam Tagert: We have scalability and composability. And this one seems a little weird, in the sense that it's the idea that often solutions work really well in the small but don't work in the big. So how do you take those smaller solutions and scale them to tackle bigger problems and more data? And then the composability part is, you can write secure parts, but how do you put them all together so you don't have to redo all the security thinking for a system? So, you know, the challenge is that a secure product A plus a secure product B doesn't mean you get a secure system when you put them together.

Adam Tagert: And then the final one is really a very interesting one, and it's the human aspect, the human behavior of cybersecurity. And that is all about trying to bring in an understanding of how humans interact with systems and make decisions. So that way you can have systems that are realistic. Because you can develop a perfect system, but then the human will make a decision, and it's like, what were you thinking?

Dave Bittner: (Laughs) 

Adam Tagert: Well, that just means the technology wasn't prepared for how a person would respond.

Dave Bittner: Right. Yeah, I mean, I'm fascinated by sort of the whole approach to this, because, I guess what I wonder is, are there any areas of cybersecurity and privacy that have a hard time being fit into a scientific framework? You know, are there times when you and the folks you're collaborating with find yourselves thinking, you know, this is a square peg in a round hole?

Adam Tagert: Generally, I think most problems do fit into a scientific, rigorous approach. Not every problem would fit into the five hard problems, because they're not intended to cover everything – they're just big problems that we're really focused on. So, yes, I think science can bring us a great deal in cybersecurity, but we're not trying to tackle every problem.

Dave Bittner: Right, right. Yeah, well, that makes a lot of sense. Well, let's go through this year's report together. What are some of the highlights for you? What really stands out as interesting?

Adam Tagert: Well, this year has definitely been an interesting year with the pandemic. That has definitely changed many of the ideas or activities that we normally do. But we're very appreciative that the universities considered the national security research being done here as critical and worked through their difficulties to continue to make progress. So, the three areas that we really work in are: we do foundational research with the universities; we hold competitions, which is really a unique thing where we're trying to inspire people to do good work; and then we grow this community – because as good as it is to have twenty people working on a problem, two hundred is better and twenty thousand is best.

Adam Tagert: So, let's just talk about some of the interesting research findings. So, at Carnegie Mellon, we had a long-duration study of how humans have been working and making decisions in cyber. So they have gotten hundreds of volunteers, and then they monitor what they're doing. And they investigated the question of – we see these breach notification emails all the time. You know, we get an email: all our systems were broken into, for your safety, you should change your password. So they started looking at, what do people actually do with that information? And we're all probably guilty of it, in the sense that most people kind of ignore it and move on with their lives. They don't actually rush out to change the password for that system.

Adam Tagert: And it gets even more interesting, in the sense that when people do change their passwords, they make just a slight variation. And I'm sure many people can think, oh, yeah, I just added an "A" to the end or I added a "1" to the end of my password. And then, if they don't do that, a lot of times when they change their passwords, the new one actually gets easier to guess – which is really the human aspect of it: well, I had this really hard password, I can't remember that, so I'm going to make it easier this time.

Dave Bittner: Hmm.

Adam Tagert: So, you get advice out of this, saying, with these breach emails where everybody's saying change your passwords all the time – people aren't really listening. So we need to have new, effective ways of communicating.

Adam Tagert: We had a study at the University of Alabama, and this is one I've been working with, and what they've been looking at is, what makes a good research paper? What do you put in it? And you're like, oh, that's got to be a challenge, because every paper is different. But there are certain attributes of a paper that you really want to see. You know, you want to see the assumptions, you want a clearly laid-out goal and approach. And so, they've been working on this question for a few years, and this past year they did an open expert elicitation. So they went out and talked to experts in security research and said, what are you looking for? You know, you don't want to just talk to the professors about what's going on. You want to talk to the engineers who actually have to make use of the papers and say, is the information in here useful to you?

Adam Tagert: And one of the things that they often find is that papers struggle with understanding the validity of their research. So, what are the flaws in my analysis that mean you can't trust my results, or what is something going on outside of my research that influences my research? So, you know, you can think of it as, what are the limitations? Like, I want to make a big claim, but really my big claim isn't so big.

Dave Bittner: Right. So it's a matter of people having their natural sort of human biases?

Adam Tagert: Mm-hmm. Yeah, exactly. So, you'll have studies that will say, we sampled programmers, but in reality, they all looked at freshman programmers in colleges. And you're like, well, does that really apply to somebody who's been a professional for twenty years? Maybe, maybe not. But you need to talk about it in your paper.

Dave Bittner: I see. You need to acknowledge it.

Adam Tagert: Exactly, so that other people know about those challenges with it.

Dave Bittner: I see. Interesting. What other things caught your eye this year?

Adam Tagert: So, there's a project at NC State that is really interesting. We hear the advice – when there's a vulnerability out there for your software, patch. Patch now, don't wait. Patch. And in reality, that is such a non-scalable solution. You know, you think of these people who have large cloud presences. There are thousands, even millions of virtual machines running on these computers, and if you took time to patch them all, you're spending huge amounts of time patching. And so, a lot of times people just don't do it. And this research project has really been looking at, well, all right, let's make the assumption that you don't patch just because a patch is out there. You patch when there's a vulnerability and somebody is trying to attack you. So they've been developing the models and the sensors so that the system detects and says, oh, you care about this now, so install this patch now. That way you respond to what's going on rather than just patching proactively. Which I know sounds really weird, in the sense that you're like, why are you doing this later? But when you have so many machines that you're dealing with, you need to be able to prioritize, and this system helps you prioritize, saying, this is what you're going to be attacked on – deal with it now. Versus, they're not working on this one right now.

Dave Bittner: Right. I just imagine if you have a whole lot of, I don't know, retail stores. You're going to prioritize putting security guards in the ones that might be in bad neighborhoods versus the ones that are in good neighborhoods. So, like you say, you're sort of bringing evidence to the table.

Adam Tagert: Exactly.

Dave Bittner: Yeah. Interesting. Now, what does NSA get out of their participation here? The leadership role – what comes back to NSA?

Adam Tagert: So coming back to NSA, one of the things Science of Security is this research is completely unclassified and public. So, the results are going out in the leading journals, being presented at the leading conferences. So, it's going out to everyone. So in that sense, as NSA works with people who use these results, these ideas and concepts get put into products that the US government incorporates and uses. So, in that sense, it helps defend the US government, and NSA is responsible for working on the security of national security systems. So having better things to build it with is a great benefit. More directly, we have NSA researchers called and SOS Research Champions who actually stay abreast and work with the research projects, so that way they can get these ideas incorporated into their research and on NSA missions. So that way we can have a direct response and understanding internally. At the same time, we build up the base of the security technologies and even information technologies in the country that help benefit our cyberspace and help make it more secure

Dave Bittner: So it's really – I mean, is it fair to say that it's sort of a pure research effort here? You know, it's aside from what's happening in industry with the development of, you know, products that people are selling. As you say, you're bringing scientific rigor to some of these questions without, I don't know, the veil of having to worry about marketing or profits or, you know, many of those things that the big providers have to deal with.

Adam Tagert: Yeah, absolutely. We're really looking at, what is the fundamental value that you're providing? What can we do now that we couldn't do before? And we release it as widely as we can – you know, obviously there are the IP issues, the intellectual property issues, but we want people to be able to use it and benefit from it.

Dave Bittner: Our thanks to Adam Tagert for joining us. The research is the 2021 Science of Security report from the National Security Agency. We'll have a link in the show notes.

Dave Bittner: The CyberWire Research Saturday is proudly produced in Maryland out of the startup studios of DataTribe, where they're co-building the next generation of cybersecurity teams and technologies. Our amazing CyberWire team is Elliott Peltzman, Puru Prakash, Kelsea Bond, Tim Nodar, Joe Carrigan, Carole Theriault, Ben Yelin, Nick Veliky, Gina Johnson, Bennett Moe, Chris Russell, John Petrik, Jennifer Eiben, Rick Howard, Peter Kilpe, and I'm Dave Bittner. Thanks for listening.