Internal Network Security Monitoring (INSM) for the electric sector.
[ Music ]
Dave Bittner: It's Wednesday, May 17th, and you're listening to Control Loop. In today's OT Cybersecurity briefing, Ukraine argues that cyber attacks against civilian infrastructure should be classified as war crimes. The Five Eyes take down Turla and its Snake malware. An Iranian threat actor exploits n-day vulnerabilities and turns its attention to infrastructure. Bitter APT may be targeting Asia-Pacific energy companies. A look back at the Colonial Pipeline ransomware attack two years later. ETHOS is a new private sector OT risk information sharing platform. And if you have thoughts on software self-attestation, CISA would like a word. Today's guest is Patrick Miller, CEO of Ampere Industrial Security, joining us to talk about Internal Network Security Monitoring, or INSM, as a concept for the electric sector. The Learning Lab has Dragos' Mark Urban, Principal Adversary Hunter Kyle O'Meara, and Principal Intelligence Technical Account Manager Michael Gardner discussing threat hunting.
[ Music ]
We begin with some news from the cyber fronts of Russia's hybrid war against Ukraine. CERT-UA reports that Russia continues to attempt cyber attacks against civilian infrastructure. Ukrinform quotes a spokesman for the State Service of Special Communications and Information Protection as saying, "Where are the attacks coming from? CERT-UA, which is directly engaged in prevention, detection, and response to cyber attacks and cyber incidents, monitors the activities of more than 80 groups, most of which are hacker groups from the Russian Federation, and 90% of whose members are Russian military operatives. That is, we see that Russia uses the same tactics in cyberspace as it does on the conventional battlefield. That is, it tries to attack civilian infrastructure." One recent attack was evidently intended to disrupt logistics, specifically international trucking into Ukraine. Ukrainska Pravda reported that Russian operators, apparently [inaudible] auxiliaries, conducted an unsuccessful cyber attack against EQ, Ukraine's system for managing border crossings by commercial trucks. The system is currently running smoothly, according to reports, and in case of any changes the drivers and carriers will be informed swiftly. In another attack on transportation infrastructure, the European air traffic control agency, Eurocontrol, reported a cyber attack by Russian actors. Eurocontrol's site has a terse account of the attack, which appears to be the familiar distributed denial of service variety. The Wall Street Journal reports that Killnet has claimed responsibility. Speaking at RSAC at the end of April, Illia Vitiuk, head of the Department of Cyber and Information Security in the Security Service of Ukraine, urged that cyber attacks against civilian infrastructure should be treated as war crimes.
"I do believe that military commanders that are in charge of special forces and special services, like the Russian GRU or SVR, who are responsible for cyber attacks on civilian infrastructure, should also be convicted as war criminals," Infosecurity Magazine quotes him as saying. Such attacks would presumably violate one or more of the principles that underlie the laws of armed conflict: proportionality, discrimination, and military necessity. With all the stories of Russian cyber activity, it's worth noting that Moscow's intelligence and security services aren't 10 feet tall. Earlier this week, the Five Eyes took down the Snake infrastructure Russia's FSB has used for espionage and disruptive activity for almost 20 years. We note that the FSB unit responsible, generally known as Turla, has been implicated in espionage more than it has in sabotage. But the cooperation and the methods used in the takedown have broader application. Operation Medusa involved not only technical disruption of Snake malware deployments, but lawfare as well. Operation Medusa was the work of an international partnership whose principal members were, in the US, the NSA, the FBI, CISA, and the Cyber National Mission Force; and in the other four Eyes, the Canadian Centre for Cyber Security, the UK National Cyber Security Centre, the Australian Cyber Security Centre, and the New Zealand National Cyber Security Centre. The joint cybersecurity advisory these agencies issued describes Snake as "the most sophisticated cyber espionage tool designed and used by Center 16 of Russia's Federal Security Service for long-term intelligence collection on sensitive targets." The malware is stealthy, readily tailored to specific missions, and well engineered. Strings within Snake's early coding gave the malware its earlier name, Uroburos, after an ancient symbol of eternity, a snake clutching its tail in its jaws. The FSB coders had an esoteric streak.
They embedded a drawing of an ouroboros by the early modern Lutheran mystical theologian Jakob Böhme in their code. The Justice Department describes Operation Medusa as a court-authorized operation to disrupt a global peer-to-peer network of computers compromised by sophisticated malware, called Snake, that the United States Government attributes to a unit within Center 16 of the Federal Security Service of the Russian Federation. That unit, which as we previously noted is commonly known as Turla, and is called that in court documents, but which has also been known as Venomous Bear, has been actively collecting against targets in some 50 countries for nearly two decades. The FBI obtained a Rule 41 warrant to remove Snake from eight infected systems. Such warrants are uncommon. The Department of Justice has used them twice in the past, The Record reports: once to disrupt China's Hafnium espionage campaign, and once to dismantle Cyclops Blink, a Russian intelligence service botnet. The FBI used a tool it created named Perseus, which issued commands that caused the Snake malware to overwrite its own vital components. A US magistrate judge authorized the Bureau to remotely access compromised computers. Internationally, where the US writ doesn't run, the FBI is cooperating with the responsible national authorities and supporting their remediation efforts, so security and law enforcement services are able to go into affected systems and neutralize the malware they find there. A final note on naming. If the FSB is given to esoteric Lutheran allusions, the FBI apparently has a classicist streak. Perseus, after whom their remediation tool was named, was the slayer of the gorgon Medusa, the sight of whom could turn victims to stone. Microsoft has reported that the group it's hitherto tracked as Phosphorus, and will henceforth refer to as Mint Sandstorm, has developed a specialty in weaponizing n-day vulnerabilities.
That is, vulnerabilities for which a fix or mitigation is available, but which some organizations have failed to apply. The group has also been known mostly for reconnaissance and cyber espionage, but that may be changing, as there are signs it is turning its attention to critical infrastructure. Mint Sandstorm has been known to conduct cyber espionage against both military and civilian targets, including political dissidents, but over the past two years the group has been observed to carry out attacks against infrastructure. And Microsoft thinks that its future activities may show a continued and growing disinhibition and loss of restraint. Infrastructure operators should keep an eye out for Mint Sandstorm, or Phosphorus. After all, what's in a name? It's the same bad actor. Intezer concludes that a new string of phishing attacks targeting the energy sector is using tactics that resemble those previously used by Bitter APT. They state, "Bitter APT is a South Asian threat group that commonly targets energy and government sectors. They've been known to target Pakistan, China, Bangladesh, and Saudi Arabia." The group makes its initial approach through phishing. Although Bitter APT's involvement in the attacks is not fully confirmed, there are circumstantial grounds that point in its direction. The researchers have found that the threat actors are using the same tactics previously observed from the Bitter APT group, such as the use of Microsoft Office exploits delivered through Excel files and the use of CHM and Windows Installer files. The exploits have been initiated with an email inviting personnel in the energy sector to a conference or round table. The phish bait is intended to induce the target to download and open a RAR file that contains a malicious payload.
This Sunday marked the second anniversary of the Colonial Pipeline ransomware attack, and the US Cybersecurity and Infrastructure Security Agency issued a short statement on lessons learned from the attack. "The general problem the attack exposed," as CISA frames it, "is ransomware. Countering it requires effective, centralized information sharing, interagency cooperation, and a robust public-private partnership." And it's also worth noting that while ransomware is generally seen as a threat to IT networks, especially business systems, as opposed to OT networks or industrial control systems, in the case of Colonial Pipeline a ransomware attack on business systems disrupted delivery of fuel through the Colonial system in the eastern United States. Russia's war against Ukraine brought urgency to the US government's preparations for cyber attacks against critical infrastructure. That indeed is the threat that CISA's Shields Up campaign has been designed to counter. Its sister Homeland Security agency, TSA, also established close working relationships with over 25 major pipeline and industrial control system organizations to strengthen the common defense. And CISA received authority from Congress to expand the visibility and threat detection program it operates, known as CyberSentry. Obviously, CISA's statement says, work remains to be done, not only in improving information sharing and threat detection, but in assigning cybersecurity an appropriately high priority and aligning incentives that promote security. Some significant movement toward improved information sharing was announced last month at the RSA Conference in San Francisco. A community of private sector companies announced the formation of ETHOS, an acronym for Emerging Threat Open Sharing. ETHOS is intended to be an open source, vendor-agnostic technology platform for sharing anonymous early warning threat information across industries with peers and governments.
It's intended to function as a hotline across which early indications of threat activity can be shared. The 11 founding members of the ETHOS community are 1898 & Co., ABS Group, Claroty, Dragos, Forescout, NetRise, Network Perception, Nozomi Networks, Schneider Electric, Tenable, and Waterfall Security. The initiative also has the support of CISA. Eric Goldstein, Executive Assistant Director for Cybersecurity at CISA, said the agency "looked forward to collaborating with communities like the one that's formed ETHOS." ETHOS is structured as a not-for-profit entity run by an independent mutual benefit corporation. At present, its technology resources can be found on GitHub. And finally, CISA has released a request for comment on a draft self-attestation form for federal government software providers. The Secure Software Development Attestation Common Form was a combined effort between CISA and the Office of Management and Budget, and is based on the National Institute of Standards and Technology's Secure Software Development Framework. FCW wrote that the form is intended for software vendors to prove their products are secure to the standards of federal government customers, with the government's ultimate goal to work towards securing the supply chain. This follows a 2021 executive order on improving cybersecurity throughout the United States, and a later memo that same year from OMB requiring federal agencies to acquire self-attestation forms from vendors, with a looming September deadline. Public comment on the form will be accepted through June 26th, 2023 via a comment box on the regulations.gov website. If you're interested in supply chain security, and who isn't, take a look at CISA's comment site.
[ Music ]
Our guest this week is Patrick Miller, CEO of Ampere Industrial Security, joining us to talk about Internal Network Security Monitoring, or INSM, as a concept for the electric sector. Here's our conversation.
Patrick Miller: INSM is Internal Network Security Monitoring. And this is an acronym that's been used basically by FERC. It's also been included in some language from the executive office. There's been like national security memorandums and other things, executive orders, that have included the same language. But really, it just means inside a protected network, monitoring that kind of what's called east-west traffic. Like how the systems are talking to each other. Which is above and beyond like what's crossing a firewall, for example. So this is kind of looking at the traffic inside of a network and monitoring what's going on inside. So that's what FERC is trying to achieve with this order. It's Order 887. So the goal is basically to add an additional layer of security inside that network that doesn't already exist in the standards.
Dave Bittner: Let's touch on sort of where we stand right now. I mean, you mentioned it doesn't really exist. I suppose there are probably some folks in the audience who might be surprised to hear that. Should they be?
Patrick Miller: Yeah, actually, because there's a lot of controls in the CIP standards. The NERC CIP standards are not perfect, by any stretch. And they are effectively kind of the lowest common denominator that all the utilities can meet. It's the, you must be this tall to ride the security ride, as an electric utility in the States, North America. But even though we already have like a lot of controls, first of all you've got to define what the protected stuff is. Like what's critical. And then you got to put it inside of, you know, a protected network, like inside of a firewall. And there's physical security controls as well. There are all kinds of controls around the systems themselves. Like, you have to log traffic on the system itself. You've got to have anti-virus or something similar to manage malicious code. There are things like system baselining where you have to know the software on the system. You know, report all ports and services that are on the box. And have justifications for why each service is running. Access controls on who can get permission to even be on the system. And then even controls about the information about the systems. There's an enormous amount of controls already. This actually is one that wasn't there. And it was basically saying we've got the controls on the system, what we don't have are controls for in between the systems, inside those networks.
Dave Bittner: And so how heavy a lift is this for the folks this will affect?
Patrick Miller: This is not an insignificant ask. Frankly, this is a very significant, in some cases a very difficult, ask. And it will vary by the type of facility. So if you're in a control center, for example, this is probably something that's going to be much easier for you to do, even if the network isn't architected the right way to get this information. Because most of these control centers, you've got like a backup control center or another system you can roll to while you change the architecture. So that you never really go down. And maybe in a transmission substation, for example, you might be able to go offline for a little while and not have any issues to make some network changes. To architect the network in such a way that you can actually do INSM. A big one is going to be generation facilities. Because in those cases, gen plants, you don't really take those networks offline, and most generation facilities run all the time, or as much as they possibly can. Because that's their job. So, for example, if you've got multiple units at a big generation facility, your scheduled maintenance window for an outage is like maybe once every two, three, four years. So, you may have to take the dreaded unscheduled outage to actually get your network in a place where you can even do INSM.
Dave Bittner: So, what is the pathway going to be, then? Given that this is quite an undertaking, how do you recommend folks come at this?
Patrick Miller: Yeah, I recommend getting busy now. In fact, you probably should have been busy yesterday. And if this is the first time you're hearing it, get motivated. This will take a while to get just the plans on how you're going to do this in a way that has the least amount of impact. Like from an outage window perspective. Essentially, just to describe the architecture in the simplest sense, you have to have something that can see all of the traffic between all of the systems inside your protected network for your CIP assets. So some organizations use things like a SPAN port on a switch, for example. Some switches in some facilities are old and they don't do this well or just don't have the capacity to do it at all. It's just not a feature. Because there's a lot of old equipment out there. And even in some cases, if you do have a SPAN port, if it does take too much network traffic, it will actually prioritize the switching and not the spanning. So you'll have gaps in your network visibility, in your INSM coverage, basically. So there's a lot of considerations to be made on how this is going to be done. So you've got to start the planning now, you've got to think about what equipment you need to purchase, who's going to do the work. Because in some cases, you've got to get new people or an integrator or somebody else, is it even your facility? Sometimes you own it but you don't maintain it. So you got to work with another party. So all of this kind of logistical mess just to get to that place is- it's a significant amount of work. So start planning now. And think about which products you might want to buy. Because this isn't something you do manually. This isn't like a spreadsheet kind of thing. This is a technology component. And do they have a lead time? Is it like six months or eight months or a year before you can even get a device? And how many of them do you need?
What's the- again, the labor force that's needed to put these things in place?
Dave Bittner: Is there any kind of mandated timeline in place here?
Patrick Miller: Yeah, there is. From when the order was issued on January 19th, what the order basically says is okay, NERC, convene a drafting team and write us a standard, write FERC a standard that they can approve that includes the INSM requirements. So, you know, to make this simplified, basically FERC just told NERC to go write a standard. That's what's happened so far. And they got to produce the standard within 15 months. Any time you're writing regulation, fast regulation is always bad regulation. So this one is going- it's going to come out of the gate kind of at an early stage and it will likely be refined over time. So in the 15 months, they got to produce the standard for FERC. What happens after we get this to FERC, FERC has to chew on it for a while and think about it. And they'll decide if they want to approve it, as it stands. Or they'll remand it, which basically means they'll reject it and send it back. And give them another probably shorter timeline to produce something that they want. And they usually say why and what to do. Or they'll adopt it and they'll say we want these changes within this time frame. It's good enough, but do some more things. So that'll happen after FERC considers it. So within 15 months of January 19th of this year, FERC will get the standard. They'll do their thing. And then, if, we'll just say best case, FERC takes a quarter to analyze it. Then they'll adopt it. And then typically they'll have an implementation window, which is likely going to be about a year or so. Because there's been a lot of other motions around INSM and it's not something brand new. People should have been hearing about this through all the other official channels for a while. So I think FERC will give them probably a year to 18 months. So you're really looking at kind of Q2, Q3 of 2026 before it'll be like mandatory and enforceable and you might get audited on it.
Dave Bittner: Are there opportunities here for collaboration or input from industry along the way?
Patrick Miller: Absolutely. Absolutely. And that's probably the most important part. Is this is a drafting team. The weird thing or good thing about the CIP standards is that it's actually written by the industry for the industry. And FERC doesn't write them, FERC just approves them and kind of says we want these things. And the industry writes it. Kind of writes their own destiny, choose your own adventure. And then FERC approves it and makes changes over time to refine them. So yes. The industry is- now it's on them to write something that they can live with but also meets the objectives that FERC has stated.
Dave Bittner: As you've been thinking about this, are there unexpected things that have crossed your mind? Are there any unintended consequences or potential traps that you see being set? Perhaps I'm overstating it.
Patrick Miller: No, no, and that's actually a really good question. We've seen this before. The unintended consequences with these standards has been, you know, I guess kind of eye opening, significant. For example, in the past, where there was basically- if you had what's called external routable connectivity, basically if you had IP traffic going into one of your protected zones, you had to have all these extra controls. If you didn't, and you just used dial-up, well the controls are basically just like kind of authentication on the dial-up device. And that's really just about it. So a bunch of the standards just went away. You didn't have to do them. So a lot of organizations went backward. And they just said fine, we're just not going to do IP and we'll just use dial-up instead. And that was, you know, clearly a counter to the intent of the legislation or the regulation. But it was effectively a measure that they could do to be compliant. It didn't help reliability, and it really didn't help security much, but it was compliant. Right? So this may have a similar effect, right? So, you may just want to say this is too much effort, we don't want to do this, we're just going to go back to dial-up, for example. So it could have unintended consequences like that.
Dave Bittner: What has been the response so far from industry, having this put on them?
Patrick Miller: The response, at least from the people I've talked to so far, there's been few surprises. Most knew this was probably coming. FERC issued what's called a NOPR, a Notice of Proposed Rulemaking, a year before. So they let the industry know that this was coming. They kind of had a year to think about it. They even had comments on the NOPR. So when FERC says hey, we're going to make a regulation, this is what we're thinking, give us your feedback and the industry gave them all the feedback, they took in that feedback, and then they wrote the order that said based on what we wanted to do and your feedback, this is the direction we want the standard to go. So, like I say, it shouldn't be news to those that have been paying attention. So most of the people already knew this was coming. I think the biggest, I guess, challenge, not necessarily surprise, is basically now it's written. It is ordered, which means it will become a law. And the struggle is now to just pick the devices and get the things scheduled and get the work moving. It's just- it's a big effort.
Dave Bittner: Yeah. And I suppose, is it fair to say this is not something that you want to kick that can down the road?
Patrick Miller: Absolutely not. You do not want to kick the can down the road on this one. It will bite you, and it will be more work than you think. You start scratching the surface on this and peeling back the layers of the onion and you'll realize just how much effort it's going to be to do this. Especially, like I say, in your field facilities. So like substations and especially in generation plants. This will be a bigger challenge than most. Don't underestimate the level of effort here.
Dave Bittner: Our thanks to Patrick Miller from Ampere Industrial Security for joining us.
[ Music ]
This week's Learning Lab features Dragos' Mark Urban, Principal Adversary Hunter Kyle O'Meara, and Principal Intelligence Technical Account Manager, Michael Gardner, discussing threat hunting. Here's their conversation.
[ Music ]
Mark Urban: Hi, Mark Urban once again with an episode of Learning Lab here on Control Loop. And we're going to focus on threat hunting. There are a couple of different types of threat hunting. And to kind of describe some of those differences in the context that we're going to talk about today, I'm joined by Kyle O'Meara and Michael Gardner here at Dragos. And I'm going to have them introduce themselves, a little bit about what they do today and how they got here. And Kyle, why don't you lead off with that.
Kyle O'Meara: Yeah, sounds good. My name's Kyle O'Meara, like Mark said. I'm one of the threat hunters here, Principal Adversary Hunter is what we're called here at Dragos. Let's see, career wise, I started off my career after grad school at Carnegie Mellon going to the National Security Agency for a while. Did some time there and did some threat hunting-like things there. Went off into the world to do incident response consulting. Didn't like that. Went back to- did a stay as a contractor with what was- I don't know what they're called now, but what was FireEye. Did sort of network defense. Threat hunting type of things. And then my previous employer was the CERT out of the Software Engineering Institute at Carnegie Mellon, which is a federally funded research and development center. And I was a threat researcher there as well. So, been doing threat research from near the beginning of all different sorts, tracking threat adversaries all over the world in different places, doing different things. And I'm here at Dragos now.
Mark Urban: Since you could walk, basically, focusing on threat hunting and adversaries. That's a good background, Kyle. Michael, how about you?
Michael Gardner: Yeah, sure. Thanks, Mark. I'm Michael Gardner. I'm a Principal Intelligence Technical Account Manager here at Dragos. So, what that means is I kind of work closely with Kyle in our threat analysis team to help our customers, kind of sit between the two and work with our customers to help them, as we like to say, operationalize OT threat intelligence. So that's actually taking the context out of the threat intelligence and taking action on your networks and your own environment. I kind of got here through OT security roles, started out with a large electric utility. Did a lot of blue team work. So I was in the SOC for a while focusing on our industrial control systems, doing some threat hunting there as well. Moved into a threat intelligence and threat analysis role, where I was trying to dissect some of these threats and help apply them to hunting in our environments. Also had some incident response roles. And jumped over to Dragos about two years ago.
Mark Urban: Thanks, guys. So now we're going to dive in a little bit to threat hunting. And I know there are kind of two different contexts that we wanted to cover. The one from an intelligence, service perspective. And the second like organizations doing it on their own. But Michael, why don't you talk a little bit about the differences and kind of what those are.
Michael Gardner: Yeah, yeah, absolutely. So, there's a lot of differentiators here when we're talking about threat hunting. Most of the time, when we refer to it, we're thinking a lot around hunting on your network. So looking on your environment for that malicious activity. Usually, that's informed by some sort of intelligence or some sort of hypothesis that you've developed to drive that hunt. And so when we're talking about that intelligence on the back end, we're referring a lot to some of that other threat hunting that you get from your intelligence sources. And that's some of the work that people like Kyle are doing. So they're hunting in more open internet telemetry, more targeted sources. Looking for broader adversary activity at organizations that are similar to yours.
Mark Urban: Got 'cha. So there's one context, hunting it in your environment, and your network. And then the second context is going out in the wider world, tracking adversaries, looking at what they're doing, and then basically distilling that down into intelligence that can be used for that [inaudible].
Michael Gardner: Yeah. Absolutely. It's all about how wide or narrow the aperture is, essentially.
Mark Urban: Okay. So, then let's talk about that second one, right? I think we'll probably- we had a little bit of a discussion in the Neighborhood Keeper segment a couple episodes ago about using it in that context, you know, some inside threat hunts. So why don't- who wants to start to bring us to the outside world? You know, I think as you're scouring the internet for bad guys and what they do, what does that look like?
Kyle O'Meara: Yeah, I mean, I'll take it to start, and Michael, you can jump in and add things as you see. For like, for instance, it's kind of nice here at Dragos because we have such a niche. So we're looking at the adversaries out there that have an intent to target industrial control systems and, you know, OT networks, right? So, we're, you know, our net is not super wide, our net is very narrow. But we're looking at what's their intent, what are they impacting, what are these adversary groups trying to impact, who are they targeting? Right? There's a big difference between impacts and targeting. And trying to distill that down and trying to figure that out. And then just looking from that outside in, as Michael mentioned earlier, and trying to figure out who has those intentions and who is looking at those different environments, and who might be starting in the IT environment but has an intention to pivot to the OT environment? How can we discover their network infrastructure? How can we discover their tools that they're using, and how do we discover the victims that they're targeting? Those are some of the top level things I think about day-to-day.
Mark Urban: And why is- so you talked about you don't, we're lucky because we're focused on the OT, on industrial control systems. So if you look at- talk a little bit about how wide an aperture would be for IT threats versus how specifically when you're looking at OT threats, how does that simplify things, what are some of the differences in the kind of those two perspectives?
Kyle O'Meara: Michael, you want to take that since you've kind of done a little bit of that in your past life?
Michael Gardner: I was thinking that's probably a good one for you based on how you guys are naming threat groups.
Kyle O'Meara: Yeah, I mean, so I'll take it- I'll add and you can sprinkle in because you had that background doing it in your past life. But yeah, so it makes it easier because I'm not looking at every business email compromise incident that's happening out there, unless they're very targeted to industrial companies, like specifically by an adversary group. I'm not looking at every single sort of, you know, zero-day that might come out into the wild, not looking at every different piece of malware that might only be targeting the IT side of the house. I'm looking for those bits and pieces- they might have a piece of malware that might target my IT side of the house, but what are they doing to like move laterally and to have intentions to move into that OT environment? We've seen this with threat groups that we track, like Electrum, back in April of 2022 when they targeted the [inaudible], again, in Ukraine. And with their [inaudible] piece of malware and what their intentions were there and how they understood the environment and what they were trying to do in the substations and what that malware itself looks like. So they had to come across an IT environment, but they had the intentions to impact OT environments. So looking for those, it makes it, you know, not sort of easier, but you know, the thought process narrows down because I'm allowed to think about those types of potential, you know, attacks or incidents.
Mark Urban: Got 'cha. And you were referencing a couple things, real quick, for the threat groups, or threat activity groups, you know, that we track here at Dragos. And I think we have a dedicated page on that on our website at dragos.com. And these are threat groups that specifically target operational technology and industrial control systems, right? Which is, you know, kind of screwed up if you think about it. But there are specialty groups that just focus on that. Lots of other folks focus on the broader IT world, but these are folks specializing in, you know, the systems that drive the grid, the pipelines, the manufacturing floors, and things like that. You mentioned at one point there's a difference in intent, in impact, in targeting. You know, can you kind of click down on that perspective a little bit? I don't know, Michael or Kyle.
Kyle O'Meara: I mean, I'm happy to start. So, for me, I think about impact and targeting daily, right? I like to think about, for example, ransomware groups that do impact industrial companies and shut down production plants, as we've seen across- you know, the most recent one I can think of is Dole Foods, right? There are many that happen; Dole Foods actually shut down their production facilities. It happened with Molson Coors back in March of 2021, where they shut down production based on a ransomware attack. So from the outside, those might look like they're targeted, but when you abstract up to these different ransomware groups, in my head they're spraying and praying. They're looking for any possibility to impact whoever they can. Whereas targeted means I'm explicitly going after this specific entity to cause destruction or damage.
Michael Gardner: And when you're talking about all of this from the perspective of an asset owner or an operator, this is where having a good, informed threat model comes into play, especially in the context of threat hunting. So ensuring that you're capturing all the specific adversaries that have impacted organizations like yours, or organizations in similar geographic regions, of similar sizes, et cetera. Assessing what the intent and impact of campaigns associated with those adversaries have been at those other organizations. And by doing that, it leads you toward developing fully informed hypotheses for your threat hunts, to make sure that you're looking for the right activities centered around the right adversaries.
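The threat-model filtering Michael describes, narrowing a catalog of adversary groups to those relevant to your sector and region before forming hunt hypotheses, can be sketched in a few lines. Everything here is illustrative: the group names, fields, and catalog are invented, not real Dragos threat-group data.

```python
# A minimal sketch of an "informed threat model": filter a catalog of
# adversary groups down to those whose known targeting overlaps your
# organization's profile, to seed hunt hypotheses. All names are hypothetical.

ADVERSARIES = [
    {"name": "GroupAlpha", "sectors": {"electric"}, "regions": {"north-america", "europe"}},
    {"name": "GroupBeta", "sectors": {"oil-and-gas"}, "regions": {"middle-east"}},
    {"name": "GroupGamma", "sectors": {"electric", "manufacturing"}, "regions": {"europe"}},
]

def relevant_adversaries(sector, region, catalog=ADVERSARIES):
    """Return groups whose known targeting overlaps this org's profile."""
    return [g["name"] for g in catalog
            if sector in g["sectors"] and region in g["regions"]]

# An electric utility in Europe would prioritize hunts for these groups:
print(relevant_adversaries("electric", "europe"))  # ['GroupAlpha', 'GroupGamma']
```

Each group that survives the filter becomes a candidate around which to build a hunt hypothesis, as described above.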
Mark Urban: Got 'cha. So this starts with some intelligence sources of your own. Where do you start a threat hunt in terms of sourcing intel?
Kyle O'Meara: Yeah, I mean, like Michael just said, I'm going to piggyback on that. The hypothesis. So you develop your hypothesis, and then from there you ask, how can I start this rudimentary, elementary experiment to try to prove or disprove my hypothesis, and look at all the different sources I have? I typically break it down to five; some might expand these out. First is your first-party data. Your information-sharing partnerships is two. Third is your open-source intelligence, your [inaudible] out there, what you can gather from the Twitterverse to the different blogs that individuals are posting. Then you obviously have your paid sources, which every company across the world uses. And then, I think the key source in all this, number five, and these aren't in any specific order, is your individual network: you know, other cybersecurity threat hunters across different vendors, from past lives and things like that. The networking connections you make that you can share information with. Trusted sources that you can share information back with, who help you build out a case and either prove or disprove your own hypothesis.
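The five source categories Kyle lists can be sketched as a simple structure a hunter might use to track which sources are actually feeding a given hypothesis. This is a hypothetical illustration; the category names paraphrase the transcript, and the class, field names, and example entries are invented.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: track the five intelligence source categories from the
# discussion against a hunt hypothesis, and spot gaps in the evidence base.

@dataclass
class HuntPlan:
    hypothesis: str
    # The five categories Kyle describes, each mapped to concrete feeds/contacts.
    sources: dict = field(default_factory=lambda: {
        "first_party_data": [],        # your own network/host/incident logs
        "sharing_partnerships": [],    # ISACs, trusted exchange groups
        "open_source": [],             # blogs, social media, public reporting
        "paid_sources": [],            # commercial threat intel feeds
        "personal_network": [],        # trusted peers at other vendors/orgs
    })

    def add_source(self, category: str, name: str) -> None:
        if category not in self.sources:
            raise ValueError(f"unknown source category: {category}")
        self.sources[category].append(name)

    def uncovered(self) -> list:
        """Categories with no sources yet, i.e. gaps in the hunt's evidence base."""
        return [c for c, items in self.sources.items() if not items]

plan = HuntPlan(hypothesis="Adversary staging OT access via IT lateral movement")
plan.add_source("first_party_data", "firewall logs")
plan.add_source("open_source", "vendor blog reporting")
print(plan.uncovered())  # three categories still have no sources
```

The point of the sketch is Kyle's: the categories aren't ranked, but a hunt that draws on none of a given category has a blind spot worth noticing.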
Mark Urban: Alright, so those are five sources. You mentioned first-party data. Is that like, you know, your own internal datasets? What is first-party data? Give us a couple of examples.
Michael Gardner: Yeah, absolutely. So, first-party data is especially going to be important to an asset owner and operator. You know, we at Dragos have our own types of first-party data that Kyle may hunt in; he can talk about some of that if he wants. But from an asset owner-operator perspective, these are your logs. Essentially, all of the network data that you collect, all the host data that you have, and your incident data too, which is actually an important part of threat hunting: understanding what has happened to the organization historically. Also information that an adversary may consider open source, like information about your staffing and your employees and the people who may be targeted at your organization. All of that is first-party data, and really it's the most essential when we're talking about threat hunting. It's the most important thing for an asset owner and operator to make sure they have good collection and a good handle on sourcing that data, so that they can carry out a fruitful hunt.
Mark Urban: So, five sources, that is kind of a starting point. Where do you go from there?
Kyle O'Meara: Then you just start digging. You start digging into those sources, trying to figure out what kind of data you can extrapolate from them to go after your hypothesis. So whether it's different types of telemetry, like domains or IP addresses, or past events, like Michael said, incident response events that you've seen, whether your company handled them or other people have posted about them in open source. You have your paid sources that you can leverage and look for those IOCs in. You have your friends, like I said, in low places or high places, that you share data with: hey, I'm seeing this type of thing, are you all seeing this type of thing? Do you have anything you can share on this? You know, we want to report this to our customers; happy to keep it at this traffic light protocol, you know, TLP level. So you go back and forth, and you start pivoting, you start filtering, you start looking for these nuances, and then you start developing clusters. Threat clusters that you think might be targeting, in our case, an ICS entity or OT environment. And then you start distilling down: do I have something that might be bigger? Do I have a new threat group? Or do I just have a little bit of a cluster? From there, you write that intelligence down, and in our case, you know, we produce that intelligence for our customers. Sometimes you might not write a report right away because you're still trying to prove that hypothesis, or you might have disproved it, but you still have the threat cluster. So it's kind of a revolving door. You know, you have to make sure you don't get into that analysis-paralysis type of approach.
You have to know when done is done and when good is good enough. When I used to teach back in the day, I used to tell my students, perfect is the enemy of good, right? I forget who said that; that's not my quote.
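The pivot-and-cluster step Kyle describes, grouping observed indicators into candidate threat clusters by shared infrastructure, can be illustrated with a toy example. Every indicator value below is an invented placeholder (RFC 5737 documentation addresses and `.example` domains), not real threat data.

```python
from collections import defaultdict

# Toy illustration of pivoting on shared infrastructure: indicators (IOCs)
# that reuse the same hosting IP are grouped into one candidate cluster
# for further analysis. All values here are invented placeholders.

def cluster_by_shared_infra(sightings):
    """sightings: list of (indicator, infrastructure) pairs.
    Returns only groups with more than one indicator, since a single
    sighting doesn't yet suggest a cluster worth tracking."""
    clusters = defaultdict(set)
    for indicator, infra in sightings:
        clusters[infra].add(indicator)
    return {infra: inds for infra, inds in clusters.items() if len(inds) > 1}

sightings = [
    ("malicious-domain-a.example", "203.0.113.10"),
    ("malicious-domain-b.example", "203.0.113.10"),  # same hosting IP -> pivot
    ("unrelated-domain.example", "198.51.100.7"),
]
print(cluster_by_shared_infra(sightings))
```

Real pivoting uses many more dimensions (registration data, TLS certificates, malware overlap), but the loop is the same: link indicators through a shared attribute, then decide whether the resulting cluster is noise, a known group, or something new.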
Mark Urban: Mark Urban with Michael Gardner and Kyle O'Meara from Dragos on the Learning Lab, thanks very much.
[ Music ]
Dave Bittner: And that's Control Loop, brought to you by the CyberWire and powered by Dragos. For links to all of today's stories, check out our show notes at thecyberwire.com. Sound design for this show is done by Elliott Peltzman with mixing by Tre Hester. Our senior producer is Jennifer Eiben. Our Dragos producers are Joanne Rasch and Mark Urban. Our executive editor is Peter Kilpe, and I'm Dave Bittner. Thanks for listening. We'll see you back here next time.
[ Music ]