Threat Vector
Ep 14 | 1.25.24

The Role of Threat-Hunting in Cybersecurity


Oded Awaskar: So I think the interesting fact that I'd want to share is that if you had originally asked me what I wanted to be when I grew up, I wanted to be a teacher. My mom's an English teacher and I really love working with kids. So what I'm doing now is conducting some cybersecurity courses for children, for my kids, you know, and their classes, and even some lectures. When I decide that, you know, I'm done with my cyber day job, maybe it's time to go be a teacher. [ Music ]

David Moulton: Welcome to Unit 42's "Threat Vector," where we share unique threat intelligence insights, new threat actor TTPs, and real-world case studies. Unit 42 has a global team of threat intelligence experts, incident responders, and proactive security consultants dedicated to safeguarding our digital world. [ Music ] I'm your host, David Moulton, Director of Thought Leadership at Unit 42. Today, we're going deep on threat hunting with Oded Awaskar. In addition to his role as a Senior Manager of Threat Hunting for our MDR team, Oded teaches cybersecurity classes for children. Alright, Oded, let's hop right into it. Threat hunting and incident response, that's the topic that we're going to get into. And let's start with a definition. How do you define threat hunting?

Oded Awaskar: Threat hunting, to me, is thinking of a hypothesis, coming up with some ways to find and to prove that hypothesis, and then executing it and deep diving into the leads that the hypothesis has generated for you. That's the ultimate definition of threat hunting to me.

David Moulton: So if I'm picturing it, it's like a Sherlock movie. You've got this little bit of a clue, you've got a hypothesis or a theory, and then you jump in and you find the different pieces that can prove or disprove what you think is a threat?

Oded Awaskar: That's actually a pretty good definition that I've never heard before. I think, if we're trying to use some analogies here, what I like best about threat hunting is the fact that I'm able to try and think as the bad guy. You know? Like, I'm trying to wear the hat of the attacker and just try to see, what would I do? If I were, you know, that employee who wants to do a bad thing to the company because I'm just not satisfied, or I'm the threat actor and I just got into the environment, or I have access to this, what would I do? And this is the cool part, right? I mean, you get to play the bad guy, in a way, in order to create those hypotheses.

David Moulton: Oded, when you're working on something, do you ever have that period where you get a little bit stuck, and then in the middle of the night you wake up and you finally remember the name of that Van Halen song you used to mow lawns to? Except for, like, how to act like a criminal?

Oded Awaskar: Actually, first, if you ask me, I'm dreaming about work-related stuff way too much. Some of it is related to threat hunting. I think that a lot of the threat hunting ideas, and this is probably going to be kind of funny, have actually come to me when I was taking a shower or doing the dishes. Because this is, you know, like, the single time in the day that you don't have access to your phone, you don't have access to your Slack, you're not sitting in front of your computer, you're not getting any phone calls. And your mind kind of, you know, digests everything that you've done and then tries to work out the things that you have challenges with. And this is where all the creativity comes from. You know, when I'm doing the dishes or I'm taking a shower.

David Moulton: I'm reading Schwarzenegger's book, called "Be Useful," and he has a whole section in there about letting your mind have space by going for a walk or going to the jacuzzi, in his case. It makes me wonder, how did you get into threat hunting?

Oded Awaskar: It was a mix of a lot of things I did in my past. A lot of work-related things, a lot of roles that I've taken along the way. And when you've taken enough different roles, you have a general understanding of what an organization looks like, because you did some system administration and you've done some network administration, and you know what those kinds of roles look like. And then you switch into security. So you did some, I don't know, security research, and trying to clean some files when they need to go out of a specific environment. And then you switch to UEBA, kind of, I would say, team leading. UEBA stands for User and Entity Behavior Analytics, where you try to think about what standard user behavior looks like. Then later on, going to run a blue team. So when you take all this, when you're coming from different types of things that you did, then this entire, I don't know, knowledge that you've gathered throughout the years enables you to come up with, I would say, weird hypotheses. And then you get to threat hunting.

David Moulton: No, it makes sense to me. I mean, you're talking about having range and different experiences that you draw on, such that when you get into a place with an ambiguous problem and things that are constantly changing, you've got different pathways to go forward and ways to run your investigations. I do wonder, how has the role changed from when you started to today? What are some of the big differences that you've observed?

Oded Awaskar: Today is very, very different from what it used to look like. Let's not even go too far back in the past; let's go five years back. Traditional threat hunting was all about finding something that was probably running on-prem. On-prem meaning we have a company, we have our data center, we have our computers, servers, workstations, printers, cameras, whatever. But usually everything is within our span of control. We bought a server, we installed some software on it, and we own it, in a way. Originally, when I was performing threat hunting, I had visibility into everything that happened in the environment: system logs, machine logs, software logs. But nowadays, with all the cloud and SaaS services that we utilize for our day-to-day tasks, we don't own the platform. We're buying services from a third-party company in order to get our mail or to store some files, and we don't have access to the actual platform; we only have access to some logs. So first, everything becomes more vague. The logs can be very, very limited to only what the vendor allows us to see, which may create some problems when I'm trying to prove a hypothesis. That's on one hand. On the other hand, I have a lot of different products that I need to perform hunts on top of. There are tons of different SaaS applications that we're using, a lot of third-party vendors, cloud vendors. And that makes my job as a threat hunter harder, in that not only do I need to create the hypothesis, but I constantly need to adapt myself to newly adopted products within my organization, and to the specific changes that are made to those applications as the vendors, I don't know, upgrade their servers or update their software.

David Moulton: I want to shift into a question about incident response and how threat hunting is used during an IR. Can you talk about that sort of high-pressure, high-stakes environment and where threat hunting's value really shows up?

Oded Awaskar: So when an IR case is being launched and kicked off, I think one of the biggest challenges that we have is that we need to get a scope in right away. Right? We're getting into an environment, the customer's telling us something bad has happened. Sometimes they know what happened, sometimes they don't. But for sure they don't know the entire scope. Like, how far in is the threat actor, actually? How much grip do they have on the environment? Is it too late, in a way? Do we have some time? How much time do we have to make sure that we don't have to burn the entire environment and build everything from scratch? Our main goal is to first understand exactly what assets the threat actor has managed to take control of. And we're using a lot of hypotheses and, I would say, pre-written queries to help us with these types of questions. Like, when did this start, what is the scope, what assets are affected, what users are affected? And this helps not only us as a threat hunting and incident response team, but it is also very, very important to communicate to our customers, right? Because all they care about is how long it's going to take to make sure the threat actor is out, and how long it's going to take us to get them back to full business, right?
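The scoping questions Oded lists (when did this start, which assets, which users) can be sketched as pre-written queries over a log store. This is a minimal, purely illustrative example: the table schema, the event data, and the suspicious IP are all invented, and a real hunt would run against an actual SIEM or data lake rather than an in-memory SQLite database.

```python
import sqlite3

# Hypothetical flattened auth-log table an IR team might query during scoping.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE auth_events (
    ts TEXT, host TEXT, user TEXT, src_ip TEXT)""")
conn.executemany(
    "INSERT INTO auth_events VALUES (?, ?, ?, ?)",
    [("2024-01-10T02:11", "web-01", "svc_backup", "203.0.113.7"),
     ("2024-01-11T03:42", "db-02",  "svc_backup", "203.0.113.7"),
     ("2024-01-12T01:05", "dc-01",  "admin",      "203.0.113.7")])

# Pivot on one suspicious source IP to answer the scoping questions:
# when did this start, which assets are affected, which users are affected?
ioc_ip = "203.0.113.7"
first_seen = conn.execute(
    "SELECT MIN(ts) FROM auth_events WHERE src_ip = ?", (ioc_ip,)).fetchone()[0]
hosts = [r[0] for r in conn.execute(
    "SELECT DISTINCT host FROM auth_events WHERE src_ip = ? ORDER BY host",
    (ioc_ip,))]
users = [r[0] for r in conn.execute(
    "SELECT DISTINCT user FROM auth_events WHERE src_ip = ? ORDER BY user",
    (ioc_ip,))]

print(first_seen)  # 2024-01-10T02:11
print(hosts)       # ['db-02', 'dc-01', 'web-01']
print(users)       # ['admin', 'svc_backup']
```

The point is less the SQL itself than having these pivots written and tested before an incident, so the team can answer the customer's "how bad is it?" question quickly.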

David Moulton: What should organizations be aware of before implementing a threat hunting program?

Oded Awaskar: Organizations that fully want to adopt threat hunting first need to be aware that the first thing those threat hunters are going to be worrying about is access to all the logs. One thing that threat hunters are always going to be asking from their, I would say, managers is: we want to have access to everything. Are you using a specific product? Great. Can we please get the logs? Alright, are these the only logs that you can give us? Are you sure there are no more logs that we can get? The more logs that threat hunters can get, the bigger the questions that they can ask themselves, and answer using those logs. Once access to all of those logs has been sorted, you need to give them an appropriate platform for querying those logs. Think about this. If you are an organization, a small or medium business, and you have, I don't know, 5,000 endpoints, you probably have terabytes of data flowing every day into your log repository. And now you need to put someone, a threat hunter, on these logs, and they need to make sense out of it. You have to give them the right system to work with: a system that enables them to create dashboards, create and save queries, and have the query return in a reasonable amount of time, not have them wait 35, 45 minutes between each one of the queries. So those are the immediate things that you need solved. And it's a kind of maturity: when you're a customer and you want to be performing some threat hunts, you fully accept the fact that at some point, some threat actor or some attacker is going to get into your environment one way or another. And by getting those threat hunters, you will be able to do one of two things. Ideally, find those specific breaches, in a way, or the loopholes in the system that would enable the attacker to get in in the first place, based on those hunting queries that they're going to be running for you. Or have those threat hunters find that sophisticated threat after it is already lurking within your system.

David Moulton: Oded, it sounds like what you're describing is a team that is really comfortable with the entire haystack being dropped on them and being told to go find the needle.

Oded Awaskar: That's exactly right. I mean, when you're a threat hunter, you have to be mentally ready for the fact that most of the leads that you're going to be pursuing are going to lead toward a dead end. The amount of times that I was really, really excited about a lead, because I thought, alright, there we go, this is the next threat actor that we find in the system, this is the next APT that we find, I know this is the one. And then when you're reviewing the results, you're like, alright. That's a false positive. But that's okay, and you have to accept it as a professional, because this is your way to learn. Making mistakes, in a way, is not really a mistake. It's kind of a cliche, right? You're always being told that making a mistake is not bad. But in threat hunting, that's even more important, because those iterations, more and more iterations that you're doing between cycles when you're researching a hypothesis, are so important. You have to make a mistake in order to learn from it and improve for the next phase, if that makes sense.

David Moulton: Yeah, it sounds like in software it's fail fast, and in threat hunting it's find your false positive fast and move to the next one. Let's talk about some of the limitations of threat hunting. Maybe when it's not the right tactic, or some of the areas where it's just really not the right approach.

Oded Awaskar: If you think about this, you know, to take on threat hunting, you'd have to be a mature, security-aware client. I mean, you'd have to have some people that are looking at the day-to-day tasks, the immediate things that need to be handled before you worry about the unknown. If you think about it, there's that, you know, famous picture of the iceberg, where you see only the 10% of the iceberg appearing above water level, and the 90% is hidden underwater. So first you'd have to cover the 10%. You'd have to worry about the day-to-day. And only after that has been done, try to scratch the surface for the things that are unknown. So if you ask me what the immediate limitation is for organizations implementing threat hunting, it's maturity. You have to understand that threat hunters are not going to solve your immediate needs. They are not going to deal with your computer that's infected by an infostealer. That's not their task. Their task is to find the interesting vectors that could lead to a major incident. Or, maybe even better, help you find that sophisticated attacker that has managed to bypass your security standards and security controls. These are the tasks for the threat hunting team.

David Moulton: Oded, when and why should organizations consider managed services like managed detection and response and managed threat hunting?

Oded Awaskar: I think that one of the biggest challenges that the entire security market is experiencing right now is being able to staff positions. Right? When you're an organization and you're trying to staff a SOC team, a Security Operations Center team, first, you need a lot of people. You need at least, like, seven to eight people to staff a 24/7/365 shift. And you need some good people. So first, as an organization, how do I find these key talents? Because we know that key talents are hard to get. So you have to spend a lot of time on recruiting cycles. And then another problem comes up: when you recruit someone to be a tier one SOC analyst, you have to worry about their career. After a year or so, they're going to be looking into progressing into another kind of job that they want to take. So we have to worry about their career, and their career development, and the different kinds of tasks that they should be doing. And that's hard. And with that, people are going to leave, and you're going to have to recruit all over again. Not to mention that if you're going to grow some threat hunters at any point, the problem gets even worse, because there are fewer qualified people who can do the threat hunting job, and it's going to be relatively hard to get the right talent for those specific tasks. Again, with all of that in mind, those threat hunters also need to worry about their career development, and they need to be challenged all the time, which is sometimes kind of hard when you are a small company. One of the biggest advantages that organizations can have when they decide to go with an external MDR service is the fact that we can get the best people out there. We are able to offer those people, first, variety: working on a lot of environments, so they don't have to work constantly on the same environment and be challenged with the same tasks over and over again. Also, we have very broad visibility into whatever happens in the security market, because we are monitoring a lot of customers, and that offers a lot of challenge and diversity to our analysts. Which makes them grow, which makes them learn, and they feel that this is great for their career. The second thing that I'm able to do as an MDR vendor is offer them progression within their career. I have a very big team, and they can move from tier one to tier two and constantly work their way up the ranks, and worry about their career development. So I think this is the immediate thing that we're able to offer our customers.

David Moulton: Oded, how can MDR help?

Oded Awaskar: So, that's a great question, first. And it comes up a lot from prospects and customers. I mean, why do you need MDR? I'm a small company, why the hell would a threat actor want to attack me? Well, if you've been following security news recently and over the last few years, you're seeing that threat actors don't really care who they attack. Anyone that is prone to be attacked will be attacked at some point. So when you subscribe to a managed services team, the first thing you're going to have is 24/7/365 coverage by an expert. That means that whenever something is triggered within your environment, there will be someone who is going to look at that specific thing that happened in your environment. And yes, they might not know the right answer. And that's okay, because they are backed up by our entire team, to make sure that even if they don't know how to handle a specific alert or incident that was raised, we will have the right person come in and help you get the right action in place as soon as possible. The customer can have some peace of mind and sleep well at night, knowing that someone is always there to give them a heads up when something goes wrong.

David Moulton: Oded, what should orgs look for in an MDR partner?

Oded Awaskar: When you're looking at an MDR partner, you need to be focusing on things like what is important to that MDR partner in terms of the [inaudible 00:21:00]. That means that you're probably going to be meeting with the MDR partner, and they're going to be explaining a lot about the things that they do. And you need to be looking for some key things and try to capture some key sentences or phrases from those conversations. Like automation. Automation is a big thing in the security operations world, and in MDR in particular. With the explosion of logs, there are a lot of incidents and a lot of alerts coming into every one of the security systems out there. And you need to make sure that your MDR partner knows how to handle them without the need to get more and more analysts all the time. You need to make sure that your MDR partner knows how to prioritize by developing custom automation, by utilizing all the great technology out there, to make sure their people are able to focus on what matters, and keep prioritizing the things that are coming in using playbooks and smart logic. This is really, really important. And this is where I feel our edge is. We're really, really aimed at having smart automation in place so our analysts don't have to do the same things over and over again.

David Moulton: What should orgs look for in an MDR partner when it comes to skills?

Oded Awaskar: You need to make sure that your MDR partner can attract the best people in the market. There's a ton of competition over talent, and talent is something that is going to be very, very hard to acquire. And you have to understand that you as a customer, at the end of the day, are only going to be fully protected by this specific MDR partner if they are able to get the best people in the market.

David Moulton: How much should you expect a partner to do beyond the investigation?

Oded Awaskar: You should be aiming for initial response at the very least. Things like isolating a machine, being able to stop malicious processes, and pulling files for additional analysis are key. Threat actors in particular are working faster than ever. You need to make sure that your MDR partner has all the rights and the technical ability to not only give you a heads up when something bad is happening, but also proactively perform some preventative measures within your environment to make sure the attack doesn't progress any further.

David Moulton: So one of the topics that seems like it's got a lot of heat behind it right now is AI and ML. How do those technologies contribute to threat hunting?

Oded Awaskar: AI is probably going to change the world in a couple of years. And threat hunting is no different, and MDR is no different. I mean, the ability to have a machine that constantly takes the same decisions over and over again and is not prone to any prejudice or anything else is going to be huge in the security world in general. If we're going to be able to harness the machine's capability to not only create the hypothesis for us, but also, you know, do the iterations of creating the query, running it against the dataset, reviewing the results, and doing it over and over again, and then only hand us the end query and the leads that it considers to be true positives, that's going to be huge in this specific world. Because essentially that means that threat hunting is going to lean on AI and ML heavily: we just feed the machine the hypothesis, and the machine does everything on its own. I'm really, really excited about what threat hunting, security operations, and MDR are going to look like, let's say, three years from now. I think we are probably going to see big changes in this environment.
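The iterate-until-clean loop described here, generate a query from a hypothesis, run it, review the results, refine, and only surface the final leads, can be sketched in a few lines. This is purely illustrative: the event data is invented, and the "query generator" is a stub standing in for what a real system would delegate to an LLM.

```python
def generate_query(hypothesis, excluded):
    # Stand-in for a model turning a hypothesis into a filter; a real
    # system would call an LLM here instead of building a lambda.
    return lambda e: e["technique"] == hypothesis and e["process"] not in excluded

def run_hunt(events, hypothesis, known_benign, max_iterations=5):
    """Iterate: query, review results, exclude false positives, re-query.
    Only the converged lead set is handed to the human analyst."""
    excluded = set()
    hits = []
    for _ in range(max_iterations):
        query = generate_query(hypothesis, excluded)
        hits = [e for e in events if query(e)]
        false_positives = {e["process"] for e in hits if e["process"] in known_benign}
        if not false_positives:
            return hits  # converged: no known-benign noise left
        excluded |= false_positives  # refine the query and try again
    return hits

events = [
    {"process": "backup.exe",   "technique": "credential_access"},
    {"process": "mimikatz.exe", "technique": "credential_access"},
    {"process": "chrome.exe",   "technique": "browsing"},
]
leads = run_hunt(events, "credential_access", known_benign={"backup.exe"})
print([e["process"] for e in leads])  # ['mimikatz.exe']
```

The design choice worth noting is that the loop does the mechanical iteration, while the hypothesis and the final judgment on the leads stay with the human, which matches the hallucination caveat that follows.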

David Moulton: So following up on the thoughts that you have on AI and ML, how does the feature or bug of AI hallucination affect this?

Oded Awaskar: So I think we're pretty far away from having processes that are 100% AI-driven. You're always going to have to have a person in the loop who oversees the entire process and makes sure that things are still going the right way after the AI has taken a different iteration on the same task. So, I mean, maybe "for the next couple of years" is a little bit extreme, but we're still going to have to have a human in the loop.

David Moulton: So let's shift gears a little bit. What changes are you seeing in the threat landscape today?

Oded Awaskar: Circling back to AI, think about this. Three years ago, maybe even two years ago, when ChatGPT wasn't a thing just yet, the really competent threat actors knew how to code. Right? They would be able to come up with something they wanted to do in an environment and then use a coding language to create an executable, a program that was going to do that specific action for them. And that required a skill, right? It required the threat actor to be familiar with things like coding, and making sure that the code is not detected by any antivirus, and using different techniques to hide it. So that was a skill that not all threat actors out there had. They either had to not do what they were wanting to do, or purchase something that someone else had coded for them. Nowadays, after the GPT era and all the other LLMs, the entry bar for coding has become significantly lower, because you don't have to know coding anymore. Right? You don't have to know C#, Python, C, Java, Go. All you have to have is access to a chatbot like ChatGPT or anything similar, or a local LLM that you're running on your own computer, and just guide it on what you want to do in plain English. And that means that you have a coder working for you. And yes, it's not going to be right 100% of the time. But when you do enough iterations, that means you have a coder that is able to write and execute code for you. And that lowers the bar for sophisticated attacks getting into organizations, because you don't have to rely on existing code like you did in the past. You can develop your own custom code, which makes our job as defenders quite a bit harder, because this is going to be something new that we're facing, rather than just the ordinary programs that we saw in the past.

David Moulton: So how are these challenges addressed with managed threat hunting services or other managed services?

Oded Awaskar: So, in general, that's the name of the game, right? I mean, we've been adapting a lot throughout the last years. Like I mentioned earlier, you constantly have to adapt. And this is not just another adaptation; this is a big change that we need to make as defenders to tackle this new era of AI-assisted attacks, let's call them. I think that one of the things we should be considering maybe even a lot more is relying on anomalies. Anomalies have been a thing for a long time. You know, when you see something new in the environment that has never popped up, it's always interesting to understand the root cause. But now more than ever, I personally feel we cannot continue relying on strict IOCs, indicators of compromise, like hashes and domains, because those things are changing constantly. We'll have to switch to anomalies. What is unexpected in the environment? What happens only rarely? And try to find out the root cause, and then find the specific software that is misbehaving or is new to the environment, and then be able to flush out those newly coded programs that are doing something malicious in the environment.
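The shift from IOC matching to anomalies can be sketched as a first-seen/rare-process check: baseline what normally runs in the environment and flag anything new or rare. This is a toy illustration; the data, the process names, and the rarity threshold are all made up, and production anomaly detection would use far richer features than a process-name count.

```python
from collections import Counter

def rare_or_new(baseline_events, recent_events, rarity_threshold=2):
    """Return process names seen recently that are absent from the
    baseline, or appeared fewer than rarity_threshold times in it."""
    baseline_counts = Counter(e["process"] for e in baseline_events)
    flagged = set()
    for event in recent_events:
        if baseline_counts[event["process"]] < rarity_threshold:
            flagged.add(event["process"])
    return sorted(flagged)

# Invented baseline: processes observed over a prior window.
baseline = [{"process": "chrome.exe"}] * 500 + [{"process": "svchost.exe"}] * 300
# Recent activity: one familiar process, one never-before-seen binary.
recent = [{"process": "chrome.exe"}, {"process": "x1f9.exe"}]

print(rare_or_new(baseline, recent))  # ['x1f9.exe']
```

Unlike a hash or domain list, this kind of check still fires on freshly generated custom code, which is exactly the AI-era gap Oded is describing; the trade-off is that every flagged anomaly still needs a human to chase down the root cause.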

David Moulton: Let's get into our lightning round. Name a commonly underrated technique in threat hunting?

Oded Awaskar: One of my personal favorite things that I think we're not hunting for enough: the insider threat. The insider threat, meaning that, you know, we're always trying to spot and prevent threat actors coming from the outside to wreak some havoc within our organization, which makes sense. But we're constantly forgetting about the employee who, knowingly or unknowingly, is doing something wrong with the data that they have access to.

David Moulton: Name your favorite language to write queries.

Oded Awaskar: Funny enough, it's SQL. I used to deal a lot with SQL, so I'll mention SQL.

David Moulton: What's a common misperception about threat hunting and MDR?

Oded Awaskar: That if I have threat hunting and MDR, they can find everything out there, including zero-days, and I don't need to invest anymore in anything related to security.

David Moulton: True or false: threat hunters are only looking at APTs?

Oded Awaskar: False. Hopefully false.

David Moulton: So you think vulns, weaknesses, overall hygiene, those sorts of things are also in scope?

Oded Awaskar: I really hope so. Because when you look at APTs solely, you have to understand that APTs are rare, right? It's going to be very, very hard for specific threat hunters to identify an APT that is currently attacking one of their customers, or even their own organization. So I think that if a threat hunter is solely going to focus on APTs, they need to have a very large number of customers they're working with, or a very large dataset, in order to find what they're looking for. Otherwise their job is going to be really, really frustrating, because they're not going to find an APT.

David Moulton: The APTs are the rarest of the Pokémon cards. Final lightning round question for you. Do threat hunters have a favorite method for conducting hunts?

Oded Awaskar: Everyone is going to say something different. For me, if you ask me, my favorite method is putting the bad-guy hat on, trying to be the bad employee who tries to impact the organization, and thinking, alright, what would I do? And then, you know, taking off that hat and switching to: alright, now that I know what I'm looking for, let's go write the queries that I need to execute on top of that.

David Moulton: Oded, wrap it up for us. For our listeners, what's the most important thing that you want them to remember from this conversation?

Oded Awaskar: I want you to remember that threat hunting is an art. And when you're conducting threat hunts, most of the time it's not going to yield very interesting or outstanding results. It's not. Most of the work is finding a needle in the haystack. And finding that needle takes time. So when you speak to your threat hunting team or your managed threat hunting team, don't always focus on what outstanding things they have found. Because sometimes, when they find the small, so to speak, thing, those are the sorts of findings that are going to prevent the outstanding thing from ever reaching your environment.

David Moulton: Oded, this conversation has been really rich for me. I hope for our listening audience, it has been as well. Thanks for joining me today on "Threat Vector."

Oded Awaskar: Thank you for having me. It was a pleasure. [ Music ]

David Moulton: If you're interested to learn more about Unit 42's world-renowned threat hunters, I've included links in our show notes. In the meantime, stay secure, stay vigilant. Goodbye for now.