Cloud Architect vs Detection Engineer: Mutual benefit.
Dave Bittner: Hello everyone, and welcome to CyberWire-X, a series of specials where we highlight important topics affecting security professionals around the world. I'm Dave Bittner. In today's program, we delve into the dynamic and increasingly critical fields of cloud architecture and cybersecurity detection. Our focus today bridges the nuanced roles of cloud architects and detection engineers, two vital cogs in the machinery of modern digital infrastructure and security. We're joined by Brian Davis, Principal Software Engineer, with a wealth of experience in cloud architecture, and Thomas Gardner, a senior detection engineer known for his expertise in identifying and mitigating cyber threats. Brian and Thomas are both from Red Canary, our show's sponsor. Together, they'll shed light on the symbiotic relationship between their roles. We will dive into how detection engineers distinguish normal administrative activity from potential intrusions and what behaviors and patterns they vigilantly monitor in customer environments. Bringing Brian and Thomas together offers a unique perspective on how these roles interact, challenge, and ultimately support each other's objectives in the digital world. Stay with us. [ Music ] So today we are talking about, kind of contrasting, this notion of cloud architects versus detection engineers, and we want to start off with some definitions here. Why don't we go through these one by one. Can we start off with a cloud architect? And for folks who aren't familiar with that, how do you describe it?
Brian Davis: Oh, that's a fantastic question, and I always struggle to answer that actual question. So in my mind, a cloud architect is someone who knows how to use the tools of the cloud, whatever cloud platform is your favorite, to build the applications, to build the things that you want to build. And so, what I do is I work a lot with the other engineers that we have on our team to help them build the system in such a way that it will scale well as we grow, in such a way that it's resilient, and kind of knowing the landscape of what the different tools are that are in our toolbox. And so, my focus is looking across scalability, looking across resiliency, and making sure that what we're building can withstand all of that, and the cloud part of that is just to use those cloud-based tools to enable those features.
Dave Bittner: So in your estimation, I mean, what's the background that goes into somebody being a successful cloud architect?
Brian Davis: That's another great question. I think at least for me, a lot of it is I've built a lot of stuff over a lot of time. I've built them without using the cloud, so I know the ways to do it in the on-prem context, and I've also built a lot of these things within the cloud. And so, I think a lot of it is battle scars and lessons learned from either doing it the wrong way or doing it a bad way to know that there are better ways to do it. And so, I think a lot of it has to do with learning, again, the tools that are available within the cloud platform. So understanding the tools quite a bit, but also, a lot of experience in building previous systems and knowing ways to do it and ways not to do it.
Dave Bittner: Well, and Thomas, in this corner, we have a detection engineer. Let's do the same thing with that job title. How do you describe that to someone who might not be familiar with it?
Thomas Gardner: So, yeah, as a detection engineer, I'm really responsible for researching attacker behavior, breaking it down into manageable pieces, and then communicating it to people on my own team, people on another team, customers. At Red Canary, we've built our own detection engines, built a few detection engines, in fact. There's many ways to be a detection engineer. A lot of companies will use like their own SIEMs or build on top of custom rules in their EDRs to do it. I think the core of detection engineering is really understanding attacker behavior and breaking it down into manageable pieces that can then be, essentially, detected later on. There's some overlap with like threat hunting. It's pretty common to take threat hunts as outputs and turn them into automated detection rules. There's some overlap with like incident response. Once you have an incident and you've understood kind of what happened, how an attacker got in, what behavior they engaged in afterward, then you really want to make sure that doesn't happen again. And so, you might build automatic detection rules after that, and detection engineering is really focused on that sort of taking output from these other sort of disciplines in cybersecurity and trying to scale it and ensure that bad things don't happen again or you get ahead of adversaries before they get into your network.
Dave Bittner: The relationship between these two positions, you've got your cloud architect; you've got your detection engineer. Is this, by nature, an adversarial relationship?
Brian Davis: It's funny you asked that. We were actually talking about that before we started talking with you that, no, I don't think it's adversarial at all. I think what we can do together is understand how each of us does our job, and that's really critical, right? Because Thomas and detection engineers are out there looking for threats in the cloud landscape, in the cyber landscape, and some of the actions that folks on the engineering teams, such as cloud architects and software engineers, are doing can look like threats. And so, what you need to do is have a regular conversation to understand, oh, this is normal behavior. This isn't something that an adversary is necessarily doing. They might do something that looks like that, but that conversation enables us both to understand kind of each other's space a little bit more effectively. I don't know, Thomas, if you feel the same way.
Thomas Gardner: Absolutely, I do. I think one of the differences between detection engineering and just sort of rule creation is being able to put actions into a wider context. It's really important as a detection engineer for me to understand sort of the full attacker life cycle of how they break into things; how they persist in environments; how they escalate privileges and sort of what that chain of events looks like. It's very rare to see cloud architects do exactly all of that in exactly that order. Not saying it doesn't happen, but understanding how Brian does his job, why he does certain things -- I think the example we were -- that I like to give is interactively logging into like a Kubernetes pod and then running recon-looking commands. Turns out cloud architects love doing that.
Brian Davis: I'm not sure we love doing it. It's sometimes a necessity.
Thomas Gardner: But there's a good reason for why they would do that. Typically, like troubleshooting during an incident or trying to set up some sort of finicky application or something. Having a good relationship with our cloud architects really helps us put that sort of stuff into context. And like I can go up to Brian and ask him, you know, hey, we saw you do this. Why did you do this? You know, what were the things you were after? And then we can go back and compare it to known adversary behavior that looks similar and just try and identify the differences so that we can put our own detections into better context and really improve the product we give.
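As a concrete illustration of the kind of triage Thomas describes, here is a minimal sketch of a rule that flags interactive pod sessions running recon-looking commands so an analyst can follow up with the engineering team. The field names, command list, and threshold are all hypothetical illustrations, not Red Canary's actual detection logic.

```python
# Hypothetical sketch: flag interactive container sessions that run
# recon-style commands. Field names and the threshold are illustrative only.

RECON_COMMANDS = {"whoami", "id", "env", "uname", "netstat", "ps"}

def looks_like_recon(session):
    """session: dict with 'interactive' (bool) and 'commands' (list of str)."""
    if not session.get("interactive"):
        return False
    # Look only at the command name, ignoring arguments.
    run = {cmd.split()[0] for cmd in session.get("commands", []) if cmd}
    # Two or more recon-style commands in one interactive session is
    # suspicious enough to warrant asking the engineer what they were doing.
    return len(run & RECON_COMMANDS) >= 2

# An engineer troubleshooting a finicky app and an attacker doing recon can
# produce the same signal -- which is why the follow-up conversation matters.
troubleshooting = {"interactive": True, "commands": ["env", "ps aux", "cat app.log"]}
batch_job = {"interactive": False, "commands": ["whoami", "id"]}
```

Note that `troubleshooting` trips the rule even though it is benign admin work; the rule only surfaces the behavior for context-gathering, it doesn't pass judgment.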
Dave Bittner: How much of this is just kind of keeping in regular touch with each other to give each other a heads up and say, hey, listen, you know, we're going to be doing such and such today. So if you see something, that's probably what it is. But having those lines of communication open?
Thomas Gardner: I think the more you have those lines of communication, the less chance you're going to have a false alarm in that respect. But with as many engineers as any organization has, it's really easy to miss that communication and send someone off on a wild goose chase because you forgot to say, oh, hey, by the way, I'm going to go open up permissions on this bucket because I'm testing something out. It's really easy to forget that, and anything having to do with a human notifying another human, it's going to get missed. And so, I think where you can have that communication, it's critical. But it's not always there, unfortunately. It's actually nice that it's not always there, too. Being able to sort of test some assumptions that we have about our own detections, and doing so without sort of knowing ahead of time what cloud engineers are up to, and having to work our way back, sort of, from our detection and put ourselves in our customers' shoes to really have to analyze our own work output is a really helpful exercise for us to make sure that we are challenging our assumptions about what's truly attacker behavior and what's just sort of general cloud behavior. You know, there's a reason you can open buckets up to the entire Internet; like there's legitimate reasons to do that. It's not only a bad thing, and it's not often a bad thing. And so, sometimes not having a heads up and being forced to challenge our own assumptions about that can be a really helpful exercise.
Dave Bittner: Some accidental red teaming?
Thomas Gardner: Yeah. Great way to put it.
Dave Bittner: Right. Opportunistic red teaming.
Brian Davis: Right. Right.
Dave Bittner: I'm curious. I mean, how do you strike that balance between needing to keep up with what I think is fair to say an ever-increasing cadence, right? I mean, nobody is going to claim that the attackers are slowing down, right? I think the opposite is true. But also like, Thomas, from your point of view, you don't want to be the department that's always crying wolf, you know, or saying -- you don't want to be pestering the cloud engineers, as you say, with false alarms. How do you strike that balance between the two?
Thomas Gardner: Oh, that is a big question. That is a great question. We always strive for more specificity in areas like the cloud where kind of we're all learning new things about it. Even the cloud architects are learning new things about it. We tend to start pretty broad with some assumptions, and as we learn things, we just constantly try to revisit those assumptions, like I was saying in the previous answer. This is where putting sort of that behavior into context really comes in handy because if we can say, you know, a certain action happens, but you need these kind of three other things around it for it to really be bad, and if we can translate that into our detection logic so that we quiet that idea down ahead of time without requiring a human to validate those things, we will tend to be faster. We'll tend to be able to communicate specific threats better, and we'll just generally be happier because we're not constantly dealing with a bunch of manual labor trying to validate our own work.
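The "action plus surrounding context" idea Thomas describes can be sketched as a rule that only fires when a risky action is accompanied by enough corroborating signals. Every action name, signal name, and threshold here is a hypothetical illustration, not any vendor's real detection logic.

```python
# Hypothetical sketch: a single risky action only pages a human when enough
# corroborating context is present. All names and thresholds are illustrative.

def should_alert(action, context):
    """action: str; context: dict of booleans describing surrounding events."""
    if action != "bucket_made_public":
        return False
    corroborating = [
        context.get("new_access_key_created", False),
        context.get("actor_outside_usual_hours", False),
        context.get("no_matching_change_ticket", False),
    ]
    # Require the action plus at least two corroborating signals before
    # alerting -- this is what quiets routine engineering work down
    # without a human having to validate each event by hand.
    return sum(corroborating) >= 2

# An engineer opening a bucket for a legitimate test trips one signal at most:
benign = {"no_matching_change_ticket": True}
# The same action surrounded by other oddities is worth waking someone up for:
suspicious = {"new_access_key_created": True, "no_matching_change_ticket": True}
```

Encoding the context requirement in the rule itself, rather than leaving it to an analyst, is exactly the speed and specificity trade-off Thomas is describing.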
Brian Davis: Wel, I think to expand on that, I think to what Thomas said, the context is really key. You know, we've spent a lot of our time at Red Canary working on EDR, which is endpoint focused. And in an endpoint, you're working on a single computer somewhere. And granted, there is lateral movement between machines and things of that nature. But at the end of the day, you're looking at processes that are executing on a single computer, and the context is what's going on in that computer? There's more to be gained there, but just looking at the activity on that computer can give you a lot of insight into what's happening because there are certain patterns that adversaries will follow. When you step back to the cloud, you're almost never dealing with a single computer, and you're probably not dealing with a single cloud service. And so, now you can't go with one piece of information because that one piece of information might be an engineer or cloud architect or someone else with privileged access doing something that they're supposed to be doing. So you have to gain more context in order to say, well, is this a false alarm or is this something that I care about, and I know that's one of the things that we've really worked hard at is trying to assemble more of that context for the detection engineering team so that they have all of the information to say, oh, well, they did A and then B and then C. That's not something that our engineering team usually does. That's probably adversarial relation -- an adversarial behavior. And so, the context, I think, has been one of the biggest challenges that we've had of providing that insight so that we don't cry wolf all the time; so that we know what's really dangerous behavior versus normal behavior because they're -- they can look really same if you're looking through a small aperture.
Dave Bittner: I want to wrap up with you guys with this question, and I'm curious about an answer from each of you from your individual perspectives: What's your recommendation to somebody who is starting down this journey? You know, who is going to be navigating this cloud architect-detection engineer relationship within an organization? Let me start with you, Brian, from the cloud architect's side. Any tips or words of wisdom for how to get the most out of this relationship?
Brian Davis: That's a fantastic question. I think it starts with assuming good intent on all parties, and that's a good thing to go for in anything, in any relationship that you have. But knowing that everyone has a job to do, and there's also so much information and so much stuff to learn, that not everyone has a full understanding of all the activities that are going on. And so, if anything comes off as confrontational, and if anything comes off as accusatory or sounds that way, assume it's not and have the conversation and establish that relationship. Because if you start with good intent and you assume good intent on the opposite party, you can find out that they have a difficult challenge, a difficult job, to achieve as well, and you'll start to build more bridges that way.
Dave Bittner: Thomas, how about your perspective?
Thomas Gardner: I think it's very easy as like a security practitioner to sort of say no or sort of invalidate the actions of other people a lot and say like, you know, it's not the most secure way of doing things or it's not the recommended way of doing things. And trying to avoid that habit, trying to basically view your coworkers' actions as valid, even if they maybe don't make sense to you, understanding their intent, and treating them as like a normal way of operating is the best place, as a detection engineer, to start. There are so many times where we get confused looking at certain behavior thinking, why would you do that? You know, this is what we know attackers do. This is how you sort of, I don't know, misconfigure systems or something. And especially in the cloud, when cloud providers give all kinds of APIs and build them for legitimate reasons, I think it's really important to view the use of any of these like sort of APIs or actions as legitimate and valid ways of operating. And so, as a detection engineer, you sort of need to be able to separate those valid things that like a cloud architect is going to do, like logging into a Kubernetes pod interactively, opening a bucket publicly, creating some sort of access key for a service account in the cloud or something. You need to view those as legitimate business operations and not just assume ill intent, essentially.
Dave Bittner: Yeah, it's that, I mean, it's that classic, you know, practically a stereotype to not be the "Department of No."
Thomas Gardner: Exactly. [ Music ]
Dave Bittner: And that wraps up our episode of CyberWire-X. Our thanks to Brian Davis and Thomas Gardner from our show sponsor, Red Canary, for joining us. And thanks to you for listening. I'm Dave Bittner. We'll see you back here next time. [ Music ]