Pentest reporting and the remediation cycle: Why aren’t we making progress?
Rick Howard: Hey, everyone. Welcome to "CyberWire-X," a series of specials where we highlight important security topics affecting security professionals worldwide. I'm Rick Howard, the chief security officer, chief analyst and senior fellow at the CyberWire. And today's episode is called Pen Test Reporting and the Remediation Cycle - Why Aren't We Making Progress? A program note - each "CyberWire-X" special features two segments. In the first part, we'll hear from a couple of industry experts on the topic at hand. In the second part, we'll hear from our show sponsor for their point of view.
Rick Howard: I'm joined by Amanda Fennell, the CIO and CSO of Relativity. She's also the host of her own podcast called "Security Sandbox" right here at the CyberWire. Amanda, thanks for coming on the show.
Amanda Fennell: I'm happy to be here. Thanks, Rick.
Rick Howard: So we're talking about pen testing today. And for those who aren't familiar, can you take a stab at telling everybody what a pen test is?
Amanda Fennell: It's kicking the tires at something (laughter).
Rick Howard: Yes, that's true.
Amanda Fennell: It's when you want to know if there's anything that might be vulnerable that you should have caught, but you haven't caught. And there are two ways to approach this, right? So a penetration test, we want to do some kind of a test either on your process or your people or your tech to determine, A, if it's running as it should be - running as intended - and, B, whether there's anything we should have been doing differently or configured differently to be more secure and lessen your risk. And there are two types of CSOs I've noticed out there, the kind that really don't want you to find anything because it proves how great they are in their program, or the ones that get mad if you don't find anything because they figure you're not good enough to find their stuff. Like, there has to be something, right? And I'm the latter. I think you should be finding something always. But yes.
Rick Howard: I think you're - I'm in that latter camp, too, because I don't really reach for a generic pen test as one of my go-to first things because they should be able to find something, right? I mean, that's, you know, that's just the way it is. Yeah, now, if I have a very specific thing in mind, like I spent some money or we improved some feature of our defenses, I might direct the pen test at that to see if it was actually successful. I don't know. What do you think about that?
Amanda Fennell: It's a good question. And it's actually one of the things that I love the most is whenever I do pen testing, because that's your next question, do we do pen testing? We do. When we do that, we bypass a couple things that - after a few years, I know you can get past a turnstile. I don't need you to do that. So I will give you the credential, like, the badge and say, OK, can you just get to work now? I don't need you to do the physical too much because that part you get used to after a while. But I really need to not waste my money for pen testing on you getting through a turnstile. I need to know what happens when you get to a laptop.
Rick Howard: Yeah. I don't want to know if you can, because like you and I were just talking about, you better be able to. That's kind of your job. But what I really want to know is can you get to this thing that I'm trying to protect? I guess for the most part, pen testers are contractors, right? Or do you have your own internal team?
Amanda Fennell: We have both. And that was actually because we had some rioting internally by the teams, that if they did not get to test things and - the first part is we're a software company too, right? So we have to naturally pen test our code. It can't go to production unless we've tried to test it and make sure that there's nothing wrong - the OWASP Top 10, et cetera - nothing super vulnerable. So pen testing is naturally a part of the product security realm. But specific to what we're talking about here, the cyber side, we had people on the team that were like, if you don't let me get those skills under my hat, like, I'm not going to want to stay here. And I said, oh, a pen test team is born. Here we go. So that's how it came about - to make sure the talent stayed happy.
Rick Howard: Well, you guys are a software house. And what you just described before is a little bit beyond what most CSOs deal with. Most of us don't deal with software penetration testing. Tell us what the difference is. What's the difference between the typical thing that was invented back in the '70s about testing your network versus what you guys are doing today in a modern software development house?
Amanda Fennell: I always loved that movie "Sneakers." Have you seen that? Oh, it's like the best.
Rick Howard: I was just watching it this last weekend.
Amanda Fennell: The part when he has the flowers, and he walks over and distracts the guys - I got to get these delivered right now - the social engineering that's going on there and the hacking.
Rick Howard: For those of you that don't know, the 1992 movie "Sneakers" is one of the all-time great hacker movies. And by the way, the movie was written by the same guys who wrote another all-time great hacker movie, "WarGames" - Lawrence Lasker and Walter Parkes. In this scene that Amanda was talking about, Robert Redford, probably best known to this audience for "Avengers: Endgame" and "Captain America: The Winter Soldier," and River Phoenix, probably best known for "Indiana Jones and the Last Crusade" - he played the young Indiana Jones - are trying to get past a security guard and an electronic lock - two factors. The scene opens with River Phoenix, dressed as a delivery man, standing in front of the security guard with a stack of Drano boxes claiming that he has a work order to deliver them to the top floor. The security guard doesn't have him on the access list and is having none of it. The two get into a heated argument. That's when Redford walks up to the counter with some lame story about his wife delivering the birthday cake and the balloons.
(SOUNDBITE OF FILM, "SNEAKERS")
Unidentified Actor: (As character) Listen, I'm sorry. They didn't have anything on record.
River Phoenix: (As Carl) Hold on a second, I got the invoice right...
Robert Redford: (As Bishop) Did my wife drop a cake off for me?
Unidentified Actor: (As character) What cake? There's no cake.
Robert Redford: (As Bishop) Surprise party for Marge (ph) on the second floor. She was supposed to drop a cake off. Did she drop...
Unidentified Actor: (As character) I don't know anything about it.
Robert Redford: (As Bishop) There she is. Ladies, you...
River Phoenix: (As Carl) OK, well, it states right here very clearly that I am to deliver 36 boxes of liquid Drano to this here address.
Unidentified Actor: (As character) Look, I don't care what that says. You're not on the list, you can't get in.
River Phoenix: (As Carl) Now, if you have a problem with that just - understood. Understood.
Unidentified Actor: (As character) I do have a problem. You can't get in.
River Phoenix: (As Carl) I might lose my job.
Unidentified Actor: (As character) That's not my problem, kid. Now beat it, all right?
Rick Howard: That's when Redford walks past the guard up to the electronic door that's locked, carrying a bundle of helium balloons and a birthday cake box, and starts yelling at the guard to let him in.
(SOUNDBITE OF FILM, "SNEAKERS")
River Phoenix: (As Carl) No special interest to you. Well...
Robert Redford: (As Bishop) I can't reach my card. Could you buzz the...
River Phoenix: (As Carl) ...Why don't I just dump all this stuff right here?
Robert Redford: (As Bishop) Could you just buzz it? I can't reach my card.
River Phoenix: (As Carl) This would ruin the floor.
Unidentified Actor: (As character) Wait, one minute, OK?
Robert Redford: (As Bishop) Hit the buzzer, OK? We're late for the party on the second floor.
River Phoenix: (As Carl) Excuse me.
Robert Redford: (As Bishop) Push the goddamn buzzer, will you? Thanks.
Rick Howard: I love this movie. And if you're a security professional and haven't seen it yet, consider this your homework assignment. It's a cybersecurity classic.
Amanda Fennell: So, yeah, there are two parts to this. There is the general way of pen testing that we're used to, and it's the really exciting thing where we all pop out those lockpicking kits that we're excited to have. And our attempt here is to simulate, in some way, an attack that would use a tool, a technique or a process that an attacker would use to exploit a weakness. And that could be that turnstile. That could be the person who's stressed out and going to buzz you through with the badge or et cetera. And then what? If you get to that laptop and you - can you hack it? Is it bricked? Is there endpoint protection, so on? Is it going to be seen on the network when somebody does get into it? Is there noise, traffic, et cetera? Is it noisy? Is there lateral movement? Those are normal pen testing, I think what most people think of.
Amanda Fennell: Ours is the same, but also includes that product side. So when the code is created, there's an entire application security team that - they think their job is, quote-unquote, "to break things." Like, that's what they put in their Slack, you know, what I do. I break things. It's so sad. I'm like, you know, there's more to your job - right? - not just that. You should fix things, too. But they do the same exact thing. We just do it in code. And so we use the same tools, techniques and processes that people who are doing things for nefarious purposes use, but we use them against our code. And we're constantly scanning, with that dynamic and static analysis and so on, anything vulnerable, anything we should be fixing. And then every once in a while on that code, we have a person who tries to break something.
Rick Howard: Amanda mentioned the OWASP Top 10. Let me explain what that is. Back in 2003, Dave Wichers and Jeff Williams, working for Aspect Security at the time, a software consultancy company, published an education piece on the top software security coding issues of the day. That eventually turned into the OWASP Top 10, a reference document describing the most critical security concerns for web applications. Today, OWASP is an international volunteer team of security professionals led by the foundation executive director and Top 10 project leader, Andrew van der Stock. OWASP is dedicated to enabling organizations to develop, purchase and maintain applications and APIs that can be trusted. Today, there are tens of thousands of members and hundreds of chapters worldwide.
Rick Howard: So just like a network penetration tester will have a bag of tricks that they will try to, you know, walk themselves through the intrusion kill chain, let's say, when you're doing software penetration testing, you have not an equivalent set, but a set of tools that you're talking about. Like, you were talking about using the OWASP Top 10 to check your code. Is that what you're talking about there?
Amanda Fennell: Yeah, we use the same things, just differently. And so we scan, just like what you would do with, like, a web application, right? First you have to have some kind of connection to it. Then you scan it. And I'm making all these movements on video that's not going to translate, right?
(LAUGHTER)
Rick Howard: I totally understand.
Amanda Fennell: So I'm going to scan it. This is the Italian background in me coming out here to talk with my hands. But so we do the same thing. We first have to make sure we have a connection to something. So if we have code before it's in production, it's in any kind of analysis stage. So we take that code, we connect with it, we touch it, we feel it out. How big is the breadbox? You know, how long is it? So, like, Relativity, as a product, is, like, 3 or 4 million lines of code. I can't exploit that much code all the time if I wanted to pen test it.
Amanda Fennell: So we have to do what you mentioned earlier, focus in on the parts that have the most access to the crown jewels - authentication, any kind of identity and access management. Those are the parts that we spend more time doing our pen testing against to say, OK, what happens if I did authenticate? What happens if I did get access to this? What could I get, and then what could I exfiltrate? So it's super similar to the network style. It just happens to also be that we're protecting crown jewels, which include our code.
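A minimal sketch of that focus in practice: triage a scanner's findings so that high-severity issues touching authentication or identity-and-access-management code paths surface first. The JSON export, field names, and path patterns below are illustrative assumptions, not any specific scanner's format.

```python
# Sketch: triage static-analysis findings toward authentication / IAM code paths.
# Assumes a hypothetical scanner export named "scan_results.json" shaped like:
#   [{"file": "src/auth/login.py", "severity": "high", "rule": "...", "line": 42}, ...]
# Field names and severity labels are illustrative, not a specific tool's schema.
import json
import re
import sys

CROWN_JEWEL_PATHS = re.compile(r"(auth|identity|iam|session|token)", re.IGNORECASE)
BLOCKING_SEVERITIES = {"critical", "high"}

def load_findings(path: str) -> list[dict]:
    with open(path, "r", encoding="utf-8") as handle:
        return json.load(handle)

def prioritized(findings: list[dict]) -> list[dict]:
    """Keep only blocking-severity findings in auth/IAM-related files."""
    return [
        f for f in findings
        if f.get("severity", "").lower() in BLOCKING_SEVERITIES
        and CROWN_JEWEL_PATHS.search(f.get("file", ""))
    ]

if __name__ == "__main__":
    hits = prioritized(load_findings(sys.argv[1] if len(sys.argv) > 1 else "scan_results.json"))
    for finding in hits:
        print(f"{finding['severity'].upper():8} {finding['file']}:{finding.get('line', '?')}  {finding.get('rule', '')}")
    # Fail a CI job when crown-jewel code has unresolved blocking findings.
    sys.exit(1 if hits else 0)
```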
Rick Howard: William MacMillan is the senior vice president of security product and program management at Salesforce, and one of our most recent additions to the CyberWire's subject matter experts who routinely visit us here at the Hash Table. I asked him how Salesforce uses pen tests to protect their enterprise.
William MacMillan: We consider it to be just one part of a broad spectrum of activities focused on securing our customers' data. But we do think it's an important one. We use a wide variety of offensive security capabilities, including both in-house and third-party pen testing. These are well managed programs that we use deliberately. In other words, we find them to be of significant value, and we resource them accordingly. Pen testing absolutely helps our company and our customers meet various compliance obligations. But we view pen testing as much more than checking a box. It adds unique value to our overall security program in numerous ways.
William MacMillan: The key to using pen testing resources wisely, in my opinion, is to make sure they're focused on the front end and that the results that come out of the back end of the process are timely, relevant and easy to quickly operationalize as a regular matter of course. In other words, if pen test reports just sit around in a pile or become focused solely on meeting compliance requirements, the value diminishes dramatically. One area that stands out a bit in my mind is that we do a lot of pen testing around M&A activity. As an innovative and fast-growing company, it's really important for us to make sure we understand any sort of cyber risk we might acquire. We operationalize the results of this kind of pen testing in a number of ways throughout the M&A cycle.
Rick Howard: One thing that Salesforce includes in its penetration team operations is a serious bug bounty program. In other words, they pay somebody else to find the bugs in their code.
William MacMillan: We also have a highly successful bug bounty program, which is a great way for us to tap into a broad talent pool. Our program has been underway for several years now, and we pay out millions of dollars on an annual basis. We really feel we derive tremendous value from this program.
Rick Howard: Well, let's go back to the typical network stuff. Do you distinguish between typical pen testing and, say, red teaming or purple teaming? Is that a different movement arm, or is it all the same?
Amanda Fennell: This became, like, the big movement and the hot buzzword of purple teaming. And we actually - we did, like, a whole - I guess it would be, like, a workshop with our entire team and explained what blue team and what red team was, in terms of are you offense or defense? This rolls right into my fantasy football style right now, by the way. So offense and defense...
Rick Howard: (Laughter) Well for me, it's Dungeons & Dragons, OK. But go ahead, all right. I'm with you (laughter).
Amanda Fennell: I'm also good at Dungeons & Dragons. I am chaotic evil, so we can do this.
Rick Howard: (Laughter).
Amanda Fennell: So we had blue, and we had red, and then we had to shock and awe everyone on the team to explain purple, the blend of both where you have to be able to play on both sides, right? So you do offense and defense, and you have to have the skill sets for both. So we maintain that you should be purple, that all people should have the capability to be purple, with potentially a few people who are straight blue. Our incident response team - right? - they really feel strongly they're straight blue. And then straight red - our AppSec people I mentioned, they feel really confident that they're straight red. But the majority of our team falls into the bell curve of being purple. And they're a little bit of both. They have to be able to defend. They have to be able to also attack.
Rick Howard: Well, I'd take it one step further, too. When I do purple team exercises - maybe we should just back up and explain what those three colors mean. Well, you said it before, but blue team means what?
Amanda Fennell: Blue team, I'm going to defend something. I'm going to make sure I know when somebody does something they shouldn't be doing. I can see it, and I can respond to it.
Rick Howard: So this is your security operations center, your intel analyst - they're watching all the telemetry and they're trying to figure out if something's going on. That's the blue team, correct? Yes?
Amanda Fennell: I'd argue about that intel one, but yes, it's fine. Yeah, I would debate that...
Rick Howard: OK, we'll come back to that.
Amanda Fennell: ...One with you (laughter).
Rick Howard: All right. The red teamers are the team trying to penetrate? They're trying to get in through some way, correct? That's what a red team is.
Amanda Fennell: That's what I'm with. I'm with you on that one.
Rick Howard: So a purple team, then, is when you combine the two exercises, I think, right? So the red team tries, you know, something along the kill chain. They try 10 or 15 steps, and if they are successful, they go to the blue team and say, this is what we did. What did you see, right? And what did you do once you saw that, right? And so it's a learning exercise for both sides of the defense-offensive situation. That's where the learning happens. That's what I think. What do you think?
Amanda Fennell: Yeah, but it can be - it can - it's such a difficult thing. And I put this as this caveat, OK? So the common goal of we want to improve security, I believe while people think this is the purple team goal, I think that's everybody's team goal. You should all be wanting...
Rick Howard: Yep.
Amanda Fennell: ...To improve the posture of your organization. But the thing I'm most careful about is the adversarial relationship. I hate that moment when the blue team has to sit there on a call and get the readout from the red team and it become - I saw that, though - like, I don't want to - look, I'm not your mom...
Rick Howard: Oh.
Amanda Fennell: ...I don't want to argue about...
Rick Howard: That's fair.
Amanda Fennell: ...This. Yeah.
Rick Howard: Yeah. So you're saying you have to be very careful not to create that animosity between the two teams. They should be purple teaming - doing it together.
Amanda Fennell: And that's why I focus on the purple with, like, maybe only one person at the ends of the spectrum. And the reason why is just that - I just - it's the same thing as, like, anything punitive. It's never going to be great feelings for human beings. And I say this because I've been the person in incident response that had to hear the readout from a pen tester contractor, and they were adamant that they had caught something. And I was like, no, no, you didn't. I saw you. And, you know, it just - it became adversarial, so...
Rick Howard: Well, I take red teaming one step further, too. I don't want - I'm not going to pay for a generic pen test or red teamer. If I want to hire someone or have my own team do it, I'm going to say, I need you to emulate one of the known adversaries that we know are going to come after us. And so instead of them making it up on the fly, I want them to emulate, say, Cozy Bear or Fancy Bear or something like that. So at the end of it, I can say, you know, we're protected pretty much against that kind of adversary. Would you take it that far at Relativity?
Amanda Fennell: That is why I argue about the threat intelligence team being on blue. That's exactly my point, actually...
Rick Howard: Yeah, yeah.
Amanda Fennell: ...Because you should always know your threat modelling and your risk modelling. Every year, your threat intel team should be telling you these are your top adversaries. Whether it's commodity or not, these are the top risks that we have and so on. And so typically there's one advanced threat out there, and I'd say - so any of the Bears are a good one. The Bears are - any of the Bears, you know, or, you know, any Panda. Like, we're - we got a lot of stuff going on in the activity, in the realm here. But some of those TTPs, they kind of cross around all of them. They all have similar ones. They're hodgepodging (ph) from each other. But I would say that that's exactly where the threat intelligence team comes in to shed light. Hey, these are our big adversaries that could come after us. You should be testing on these. And so that's why I think they kind of are a little bit of both. They're going to tell you what to go after and how to do it as an adversary. But they're going to also be able to educate the blue team about how they should defend or see it - or see the activity.
Rick Howard: So I think it's fair to say that you're an advocate for penetration testing, but everything can be improved. How would you improve it? You've done this for a while. What would make it better for you to use? How would it make it more practical or more useful?
Amanda Fennell: You know, besides you stealing my thunder about using relevant TTPs and threat actors - you know, that would be the big one - I think that we probably just need to make sure that we're doing some kind of a - I don't know. I don't want to say the word homogenization. But, like, reports don't all look the same. And so some amount of standardization could go a long way in this industry. I would really love to see something like that in the years to come. And so that's one of the reasons why you keep the same contractors, potentially. But if you have these teams inside, there should be something that you're seeing, as an annual, like, this is the report. This is what we're getting and stuff. And that stuff's great.
Amanda Fennell: This is also why we sometimes change our external pen testers - because you got too cozy. Not to be Cozy Bear, but you got too cozy. You know - you already know how to get through everything. I need to see somebody who's never seen it before. So I think some level of standardization could go a long way to getting people to understand some real value, from the contract side and on the internal side. So I'm talking, like, a cybersecurity framework kind of thing but for pen testing.
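To make the standardization idea concrete, here is a hedged sketch of what a common finding record might look like if every internal team and rotating external contractor reported against the same minimal schema. The fields are illustrative assumptions, not an existing industry standard.

```python
# Sketch: a minimal, shared schema for pen test findings, so reports from internal
# teams and external contractors can be compared year over year.
# Field choices are illustrative assumptions, not an existing standard.
from dataclasses import dataclass, field, asdict
from datetime import date
from typing import Optional
import json

@dataclass
class Finding:
    title: str
    severity: str                               # e.g. "critical" / "high" / "medium" / "low"
    description: str
    affected_assets: list[str] = field(default_factory=list)
    emulated_adversary: Optional[str] = None    # the named threat actor the test emulated, if any
    reported: date = field(default_factory=date.today)
    remediated: Optional[date] = None

finding = Finding(
    title="Weak session token entropy",
    severity="high",
    description="Session tokens are predictable under load.",
    affected_assets=["app.example.internal"],
    emulated_adversary="APT-style credential abuse (illustrative)",
)

# Serialize the same way every time, regardless of which team wrote it up.
print(json.dumps(asdict(finding), default=str, indent=2))
```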
Rick Howard: Yeah, I agree. Like we were talking about at the beginning of the show, it's not enough that you got in, OK? It's - did you exercise the thing I was trying to - you know, the thing I was trying to exercise? And how well did we do? And the other thing I would ask for, too, is a relative comparison - because if you're doing this with an outside contractor, you know, compared to other organizations, how are we doing, right? Or what are those other organizations doing that make them better than us? I would love that part of their report.
Amanda Fennell: Yeah, that is a really good point. So I'm going to steal that, in case anybody asks me that question again. I'm going to say, and - but, yeah, I think that would be awesome. It's always the question, how are we doing comparatively? It's a wonderful one.
Rick Howard: Yeah. Well, all this is good stuff, Amanda, but we're going to have to leave it there. That's Amanda Fennell. She's the CIO and CSO of Relativity and the host of "Security Sandbox." Amanda, thank you very much.
Amanda Fennell: Thank you, sir.
Rick Howard: Next up is Dave Bittner's conversation with Dan DeCloss, the founder and CEO of PlexTrac, our show's sponsor.
Dave Bittner: So let's start by getting a little bit of the backstory here. I mean, in terms of the history of people gathering security reports and pen testing and all that sort of thing, can you give us a little bit of the legacy way that people used to deal with this and kind of what led us to where we find ourselves today?
Dan DeCloss: You know, the legacy way is still a very popular paradigm, right? And that's where you will go and you will do an assessment - you know, my experience was penetration testing specifically - where you're spending a lot of time documenting evidence, collecting screenshots and trying to put them, you know, into the right format and using different style templates. You know, as a pen tester, there's never a shortage of work. So, you know, it didn't - I mean, I had a day job, but then I would also potentially be doing, you know, side gigs and moonlighting for different folks. So, like, you're always exposed to everybody's different reporting style and templates. So you're spending a lot of time just documenting the findings. And it's - you know, my experience, obviously, was pen testing, but this could be for any kind of security assessment or security audit in general - right? - where you're going and you're doing this assessment - or even like an incident response, you know, exercise where you're doing all this work, and then you're collecting that evidence into a document.
Dan DeCloss: And so you're spending a lot of time just writing the document and trying to stay consistent and then delivering it with, you know, the - kind of this - you know, like, you're either doing it via, you know, a secure file share or, you know, just having different ways of - or trying to encrypt it and password protect it. And, you know, having, like, just lots of, I would say, friction around that whole process of, like, getting the document together, getting it reported accurately, getting it reviewed and correct, then getting it into the hands of the people that need to do something with it. And then from there is really where I started to feel a lot more pain after having been on the front end of it, where the back end is like, what am I supposed to do with this potentially 300-page document?
Dan DeCloss: And so what people end up doing is they tend to copy and paste the information that they feel is relevant out of the report into some other form of a ticketing system, right? And that's really where some of the breakdown starts to happen as well, is that, hey, there was all this time spent on this report, and then a small percentage of it actually makes it somewhere where it can be actionable, right? And so that was - you know, I would say that's kind of the legacy issue and still a very popular paradigm today, even in 2022, right?
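One way teams try to close that copy-and-paste gap is to script the handoff: take a structured export of the findings and turn each one into a ticket automatically, so nothing gets lost between the report and the tracker. The sketch below assumes a hypothetical findings export and a generic ticketing REST endpoint; the URL, token variable, and payload fields are placeholders for illustration, not any particular product's API.

```python
# Sketch: push findings from a hypothetical report export into a ticketing system,
# instead of copy-pasting a subset of a 300-page document by hand.
# The endpoint, token, and payload fields below are illustrative assumptions.
import json
import os

import requests  # third-party; pip install requests

TICKET_API = os.environ.get("TICKET_API", "https://tickets.example.com/api/issues")
TOKEN = os.environ["TICKET_TOKEN"]  # never hard-code credentials

def finding_to_ticket(finding: dict) -> dict:
    """Map one pen test finding onto a generic ticket payload."""
    return {
        "title": f"[Pentest] {finding['title']}",
        "description": finding.get("details", ""),
        "priority": finding.get("severity", "medium"),
        "labels": ["pentest", finding.get("category", "uncategorized")],
    }

def main(export_path: str = "findings_export.json") -> None:
    with open(export_path, "r", encoding="utf-8") as handle:
        findings = json.load(handle)
    for finding in findings:
        response = requests.post(
            TICKET_API,
            json=finding_to_ticket(finding),
            headers={"Authorization": f"Bearer {TOKEN}"},
            timeout=30,
        )
        response.raise_for_status()
        print(f"Filed ticket for: {finding['title']}")

if __name__ == "__main__":
    main()
```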
Dave Bittner: I mean, to what degree are we dealing with just the basic reality that, you know, folks who do pen testing are very good at pen testing, but they may not be the best graphic designers in the world?
Dan DeCloss: Yeah. Yeah. No, you're exactly right. I mean, I've definitely worked for firms where they've actually hired outside marketing agencies to put together a style guide and a template for them, right? So they're not. You know, they - and so these folks recognize that this isn't my area of expertise. And I think a lot of people - some people will put, you know, kind of a stake in the ground that this is, like, kind of their secret sauce to how they, you know, write up the reports and things like that. But I think most people recognize that, hey; I'm getting paid to utilize the skills and techniques that I know of how to attack a network or my security knowledge and experience and how I exude that into the organization that I'm doing testing for rather than the way the document looks and - you know, not to say that that's not important, right? It is important to be able to convey information correctly. But that's so much - it's so much more a factor of what did you actually do and what do you know and what are you actually being paid to do.
Dave Bittner: I suppose there's a risk of it sort of being like a game of telephone also where, you know, the number of times that something is translated by one person to another person or reworded or - there could be a loss of clarity or a loss of emphasis - those sorts of things.
Dan DeCloss: Yeah, yeah, most definitely. I mean, you know, that's why, you know, we get into this situation where if companies are getting - say they're going for their SOC 2 certification or potentially even, like, FedRAMP or something like that. They get asked, like, show us a copy of your latest pen test report to be able to at least say, like, here's what was originally reported, versus, you know, where - what did you actually do with these results, right? And so there is definitely that - you know, and I experienced this. There was the notion of - I'd come back and rewrite the same report because nothing got done. And that could be a very - you know, a variety of factors. It got lost. It got put into a spreadsheet that that person, you know, that was doing that work is now no longer at the organization.
Dan DeCloss: So there's a variety of factors that could play into that. But there's definitely a loss of fidelity, of, you know, that telephone kind of game where, well, I thought this one was important, but the auditors said - or, you know, the pen tester said it was really important. I don't feel like that. So I'm going to move it into this low, you know, informational-type status where nobody's going to look at it.
Dave Bittner: So what is the paradigm that you're recommending here? I mean, would - how do you suppose people can come at this in a better way?
Dan DeCloss: Yeah, yeah. And, I mean, I think continuing to move away from the notion that we need - yeah, I mean, basically - you know, and this is why I started PlexTrac - was, let's start moving away from the document as the final deliverable. And let's get away from that as, like, the form of delivery where we can have a dynamic platform that facilitates better reporting and more consistent reporting. The document is still there. It's an artifact of the engagement. So it's still that point in time. But it really facilitates deeper collaboration not only from the teams that are doing the testing and the reporting - speeds up that process - but also the collaboration between the people that are responsible for fixing these issues and being able to say, like, what is important for our organization? Can I provide more context - and then have visibility into the actual remediation of these issues because at the end of the day, we're all on the same mission, right? We're all on the same team, trying to reduce risk to the organization. How we categorize that, how we prioritize it is really important. And that's really what the crux of the matter is - is being able to have deeper visibility, better collaboration so that you can actually show progress.
Dave Bittner: Can you walk me through a potential use case here? I mean, rather than, you know, plopping down on a desk, as you say, you know, a 300-page report with everything that I've found, it sounds to me like what you're suggesting is much more dynamic and fluid.
Dan DeCloss: Yeah. Yeah. So, I mean, I think probably, like, a good use case outside of, you know, just being able to write up a full report is an important scenario that happens a lot when we're talking about a security assessment of any type. But we'll talk about a penetration test, where, during the course of the engagement - you know, those engagements can last - can be scoped anywhere from a few days to several months, right? So as they're doing their assessment, they may come across something that is actually very critical that they were like, we need to report this right away. That paradigm, you know, would actually take quite a bit of time to get done because they don't have the final report. They got to get the information into the hands of the people that should fix it right away.
Dan DeCloss: So what I've actually seen happen is people document things in, like, a text document, encrypt it with the screenshots in either a .ZIP file or put it on a Dropbox or something so that there is, you know, an escalated way to get that information into the people's hands. And then people have to go grab it. They have to, like, find out, where do I put this? And so that is a very cumbersome process just to get, like, a highly critical vulnerability reported during the course of an engagement.
Dan DeCloss: So, you know, a better solution, you know, would be, hey; I'm just going to publish this finding in this platform that says I've already written up everything, because, you know, a lot of testers like to report as they go anyways. They're just documenting their work. So, you know, I'm going to recognize - I'm going to publish this because this is really important for the end user to go identify and resolve. And so that speeds up that whole process because it's just right there, right? So that's kind of a notion of - or a scenario of how you would, you know, not be so focused on a document and - because then you get into these binds at times of, like, well, I don't have the full document, and the engagement's not supposed to be done for another, you know, three weeks or whatever. But I really need to make sure they have this information.
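As a rough illustration of that publish-as-you-go flow, here is a hedged sketch that posts a single critical finding to a reporting platform's REST API the moment it is written up, rather than waiting for the final document. The endpoint, authentication header, and field names are assumptions for illustration; a real integration (PlexTrac's included) would follow the platform's actual API documentation.

```python
# Sketch: publish one critical finding mid-engagement instead of waiting for the
# final report. Endpoint, auth header, and payload shape are hypothetical.
from datetime import datetime, timezone
import os

import requests  # third-party; pip install requests

PLATFORM_URL = os.environ.get("REPORT_PLATFORM_URL", "https://reporting.example.com/api/v1/findings")
API_TOKEN = os.environ["REPORT_PLATFORM_TOKEN"]

critical_finding = {
    "title": "Authentication bypass on admin endpoint",   # illustrative example
    "severity": "critical",
    "status": "open",
    "reported_at": datetime.now(timezone.utc).isoformat(),
    "description": "Write-up and reproduction steps drafted as the tester worked.",
    "affected_assets": ["admin.example.internal"],
}

response = requests.post(
    PLATFORM_URL,
    json=critical_finding,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=30,
)
response.raise_for_status()
print("Finding published; remediation owners can see it immediately:", response.status_code)
```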
Dave Bittner: Right. And first, it has to go out to the design team and all that stuff.
Dan DeCloss: Yeah, exactly. Yeah, yeah.
Dave Bittner: What about for the end user themselves, the people who are consuming these reports? How does it change the way that they approach this?
Dan DeCloss: Yeah. So I think it allows them to have deeper visibility into all of the issues, as well as better collaboration, not only with the people doing the testing because they can have more clarity as to when these things are getting reported, how often, what are the techniques that the, you know, the proactive side of the house is using but also who's working on remediating these issues. That starts to get lost, you know, in that whole process. So having more visibility into where are we at in terms of the remediation cycle for these, can I have more analytics on how often these types of issues are getting resolved? And it also facilitates a much more continuous mindset, which is where the industry is continuing to focus, is that you want to be doing this form of testing on a more continuous basis. And so a solution like this really helps facilitate that more continuous mindset, deeper visibility and truly being able to report up the chain what kind of progress is being made and what's working and what isn't.
Dave Bittner: What about for folks who are working under regulatory regimes? I'm imagining the necessity to capture a moment in time, to have that snapshot. Can you do that as well?
Dan DeCloss: Yeah, absolutely. I mean - and, you know, like, specific to PlexTrac, you know, we have multiple ways to do that. You can - in your analytics dashboard or whatnot, you can filter based on what things looked like at a specific point in time. You can also do comparisons as to, like, here was a report from last year versus this year. And then, like I said before, you can also have the original document of, you know, when you - when the report was finalized. You can export that out as, like, a standalone here was this point in time. But even when you view the report in PlexTrac specifically, you can see, like, when this issue was reported and you could filter based on those dates and be able to identify, you know, what was going on at that point versus now.
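A small sketch of the point-in-time filtering and remediation-cycle math being described, assuming each finding carries reported and resolved dates; the field names and sample data are invented for illustration.

```python
# Sketch: point-in-time filtering and a simple mean-time-to-remediate metric,
# assuming each finding records when it was reported and (optionally) resolved.
# Field names and sample data are illustrative assumptions.
from __future__ import annotations

from datetime import date
from statistics import mean

findings = [
    {"title": "SQL injection in search", "reported": date(2021, 11, 2), "resolved": date(2021, 11, 20)},
    {"title": "Weak TLS configuration", "reported": date(2022, 1, 15), "resolved": date(2022, 3, 1)},
    {"title": "Stored XSS in comments", "reported": date(2022, 4, 5), "resolved": None},  # still open
]

def open_as_of(items: list[dict], snapshot: date) -> list[dict]:
    """Reconstruct which findings were still open at a given point in time."""
    return [
        f for f in items
        if f["reported"] <= snapshot and (f["resolved"] is None or f["resolved"] > snapshot)
    ]

def mean_days_to_remediate(items: list[dict]) -> float | None:
    """Average days from report to resolution, ignoring findings that are still open."""
    closed = [(f["resolved"] - f["reported"]).days for f in items if f["resolved"]]
    return mean(closed) if closed else None

print("Open on 2022-02-01:", [f["title"] for f in open_as_of(findings, date(2022, 2, 1))])
print("Mean days to remediate:", mean_days_to_remediate(findings))
```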
Dave Bittner: And how about - I mean, security itself, the security of the documents themselves and being able to, you know, sling these things back and forth between the various folks who have an interest in it, I mean, I suppose that's a concern as well.
Dan DeCloss: Yeah. You know, I mean, it's - you know, I think this is - you know, the fewer documents that are getting shared as attachments in emails, the better, right? And so, you know, this facilitates a much more secure environment to be able to access these results and pulling these results down if you need to. But you don't have to, right? So you have a more secure platform that you can be accessing these results and providing information on how we're fixing them as opposed to just shipping around a document and then probably having another document that you're using for tracking it, like a spreadsheet.
Dave Bittner: What sort of feedback have you gotten, you know, from folks who are used to doing it the old way and now have moved over to this platform-based approach? How do they feel on the other side of it?
Dan DeCloss: You know, it's interesting. We get a ton of commentary simply, like, improved morale, right? Because - right - because if you think about the security testing side of the house, they want to just be testing, right? They want to report - I mean, they know that - they're getting paid to write these things up and to deliver a good report. But they have - they're saving so much time and energy on the mundane aspects of that process that they have more time to actually focus on the testing and getting their - either their clients or their organization better at these security issues, at resolving them. So one huge benefit is improved morale that we hear a lot, in addition to the efficiency gains where, you know, we are in an industry where it's hard to find good talent. And so it truly saves them time to be able to not have to go hire more people. They can just make their team more efficient. And that's one huge benefit on the testing side of the house.
Dan DeCloss: And then another thing that we hear a lot about in terms of benefits is the people that are actually receiving these, they have a more centralized way to manage the results. And so it not only speeds up their process, but they have deeper visibility into who's reporting what, you know, what areas of the organization, whether it's an app or a business unit or a subnet, you know, what are the areas that are actually, you know, higher risk? And it gives them a deeper sense of where they should be focusing and where they should be prioritizing their work. So overall, you know, not only does it provide deeper efficiencies and improvement in morale but also better visibility of their security posture and where they should be prioritizing the remediation cycle.
Rick Howard: We'd like to thank Dan DeCloss, the founder and CEO of PlexTrac, Amanda Fennell, the CIO and CSO of Relativity, and William MacMillan, the SVP of security product and program management at Salesforce, for helping us get some clarity about pen testing and making it work for us.
Rick Howard: "CyberWire-X" is a production of the CyberWire and is proudly produced in Maryland at the startup studios of DataTribe, where they are co-building the next generation of cybersecurity startups and technologies. Our senior producer is Jennifer Eiben. Our executive editor is Peter Kilpe. And on behalf of my colleague Dave Bittner, this is Rick Howard signing off. Thanks for listening.