Unpacking cyber awareness syndrome.
[ Music ]
Dave Bittner: It's June 14th, 2023, and you're listening to "Control Loop". In today's OT cybersecurity briefing, Dragos concludes that COSMICENERGY malware is not an immediate threat to OT systems. The Cyberspace Solarium Commission looks at obstacles to public-private collaboration in the industrial sector. Organizations plan to increase their OT cybersecurity budgets. CISA and its partners have released a Joint Guide to Securing Remote Access Software. And the US Department of Defense holds its Cyber Yankee exercise. Today's guest is Will Edwards of Schweitzer Engineering Labs discussing cyber awareness syndrome. "The Learning Lab" has the conclusion of the discussion between Dragos' Mark Urban, Principal Adversary Hunter Kyle O'Meara, and Principal Intelligence Technical Account Manager Michael Gardner on threat hunting.
[ Music ]
Researchers at Mandiant last May announced their discovery of new malware that appeared to have been designed to disrupt electric power distribution and associated critical infrastructure. Mandiant, which called the malware COSMICENERGY, was cautious in its assessment. The version the researchers obtained, for one thing, lacked a built-in discovery capability. Mandiant said that COSMICENERGY may, in fact, have been a Russian red-teaming tool used in exercises to simulate an electric infrastructure attack. But the discovery was significant enough to place operators on alert for a possible campaign against vulnerable OT networks. On Monday, however, Dragos released its own research into and assessment of COSMICENERGY. Their conclusion is far less alarmist than some earlier evaluations of the malware had been. COSMICENERGY is not, they've determined, related to Industroyer (also known as Crashoverride). The researchers say that after analyzing COSMICENERGY, Dragos concluded that it is not an immediate risk to OT environments. The primary purpose of COSMICENERGY appears to have been for training scenarios rather than for deployment in real-world environments. There is currently no evidence to suggest that an adversary is actively deploying COSMICENERGY. So in this case at least, caution was prudent, but the initial concerns the discovery aroused seem to have been overblown. A Cyberspace Solarium Commission 2.0 report has found that the North American Electric Reliability Corporation's role in the Electricity Information Sharing and Analysis Center can discourage organizations from sharing information with the E-ISAC, Utility Dive reports. The CSC states, "Our interviewees relayed that because the E-ISAC is located within NERC, which in turn is subject to oversight by FERC, in-house counsels on occasion advise electricity companies not to share certain information with the E-ISAC for liability reasons. This is an obstacle without an obvious solution. 
Removing the E-ISAC from NERC would likely strip it of key funding and relationships central to the services it provides the sector." The CSC concludes that the Biden administration should make the following updates to Presidential Policy Directive 21. First, clearly identify strategic changes, assign responsibilities, and ensure accountability for routine updates of key strategic documents. Clarify CISA's role and responsibilities as a national risk management agency. Resolve questions around the organization and designation of critical infrastructure sectors and assigned SRMAs. Provide guidance on SRMA organization and operation. And facilitate accountability. These measures apply particularly to the protection of critical infrastructure, and that class includes, of course, sectors that use OT. Twelve of the 16 sectors identified as critical infrastructure fall into that category. Palo Alto Networks Unit 42 has published a study finding that between 2021 and 2022 the average number of attacks experienced per customer in the manufacturing, utilities, and energy industries increased by 238%. The researchers state these industries face a wide range of security threats, including malware, ransomware, physical attacks, supply chain attacks, and vulnerability exploits. A report from DNV has found that the energy industry is increasing its investment in cybersecurity. Fifty-nine percent of energy professionals told DNV that their organization had increased spending on cybersecurity in 2023 compared to the previous year. Sixty-four percent of respondents agreed that their organization's infrastructure is now more vulnerable to cyber threats than ever and say that their focus on cybersecurity has intensified as a result of geopolitical tensions. Despite this increased investment, only 42% of respondents believe their organization is spending enough on cybersecurity and only 36% are confident their organization has made sufficient investments in securing their operational technology. 
Just under half of energy professionals believe that regulation is the most likely factor that will lead to increased spending on cybersecurity. A separate survey by OTORIO found similar results, with 78% of respondents saying their organizations plan to increase their OT cybersecurity budgets this year. The researchers state that organizations that plan to increase their OT security budget will increase it by an average of 29%. Additionally, 85% of organizations actively and automatically track compliance with industry regulations and standards. CISA, the FBI, the MS-ISAC, and the Israeli National Cyber Directorate have released a Joint Guide to Securing Remote Access Software. The guide centers on detecting and preventing the abuse of legitimate remote access software and common exploits that could be used against an organization. One of the particular concerns about this software is that it is used for normal IT tasks. This allows remote access tools to be exploited by threat actors who typically remain undetected by antivirus tools or by endpoint detection and response defenses. Abusing remote access software doesn't require a threat actor to create a new capability. CISA explained in the guide, "Remote access software enables cyber threat actors to avoid using or developing custom malware such as remote access Trojans. The way remote access products are legitimately used by network administrators is similar to how malicious RATs are used by threat actors." The guide recommends, among other things, that organizations create a baseline of their normal activity and begin monitoring for unusual spikes that could indicate a compromise. For the prevention and mitigation of this threat, the guide strongly encourages organizations to implement zero-trust solutions whenever and wherever possible; adding safeguards that prevent users from accessing a large number of machines in a short amount of time can also mitigate risk. 
The guide states, "Use safeguards for mass scripting and a script approval process. For example, if an account attempts to push commands to 10 or more devices within an hour, retrigger security protocols, such as multifactor authentication to ensure the source is legitimate." Some of the more consequential attacks against OT systems have originated in pivots from business systems, and so industrial operators would do well to attend to potential risks in remote access software. The US Department of Defense last month held its Cyber Yankee exercise. The training event simulated a cyber attack against public utilities. A press release from the Marines explains that the goal of Cyber Yankee is to train military cyber operators, local, state, and federal-level government officials, and private companies how to defend themselves from a cyber attack. US Army Lieutenant Colonel Tim Hunt, Deputy Director of Cyber Yankee and full-time guardsman with the Massachusetts National Guard, stated, "The fact we exercise with cyber professionals from the private sector and utility companies, we practice like we fight. So, if there were something where we needed to get activated already knowing those people, already having relationships, it goes a long way getting soldiers and airmen into action and helping provide and support a response, take care of something that's affecting the citizens of the region."
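The mass-scripting example the guide quotes amounts to a sliding-window threshold rule: count the distinct devices an account has pushed commands to in the past hour, and re-challenge the account when the count crosses ten. Here is a minimal sketch of such a safeguard; the `MassScriptingMonitor` class, the event format, and the hard-coded constants are illustrative assumptions, not anything specified in the guide itself.

```python
from collections import defaultdict, deque

WINDOW_SECONDS = 3600   # one hour, per the guide's example
DEVICE_THRESHOLD = 10   # 10 or more devices triggers a re-check


class MassScriptingMonitor:
    """Flags accounts that push commands to too many devices too quickly."""

    def __init__(self, window=WINDOW_SECONDS, threshold=DEVICE_THRESHOLD):
        self.window = window
        self.threshold = threshold
        # account -> deque of (timestamp, device_id) events, oldest first
        self.events = defaultdict(deque)

    def record(self, account, device_id, timestamp):
        """Record one command-push event. Returns True if the account
        should be re-challenged (e.g., prompted again for MFA)."""
        q = self.events[account]
        q.append((timestamp, device_id))
        # Drop events that have aged out of the sliding window
        while q and q[0][0] < timestamp - self.window:
            q.popleft()
        # Count distinct devices touched within the window
        distinct_devices = {dev for _, dev in q}
        return len(distinct_devices) >= self.threshold
```

In practice the events would come from a SIEM or remote-access-tool audit log rather than direct calls, and the trigger would feed an MFA or approval workflow; the point is only that the guide's rule reduces to a small amount of per-account state.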
[ Music ]
Our guest today is Will Edwards from Schweitzer Engineering Labs. Our conversation centers on the notion of cyber awareness syndrome.
Will Edwards: That's going to vary a lot, right, with the different sectors but that's kind of the beauty of a diagnosis is that it's our job to help understand where our customers are at. And so, a lot of that depends on how mature regulatory oversight or corporate security initiatives are within a company or within an industry vertical.
Dave Bittner: So, what do you see as being some of the pain points here?
Will Edwards: I mean, the pain points are going to vary. They range from, you know, a lack of awareness, which is what I talked about, where typically folks scramble to gain that sense of awareness of what the requirements and expectations are; to pain points associated with knowledgeable staffing and how do I make the improvements that have been identified; to, you know, the actual implementation of security controls and the lifecycle management associated with those. And so, I don't even think that the pain points are the most important aspect of helping customers as much as helping the customer learn how to express what pain points they are experiencing and where we can help.
Dave Bittner: Well, let's walk through what happens to someone who's experiencing this. I mean, they find out that they -- you know, something bad has happened, what happens then? What's the emotional reaction?
Will Edwards: Well, I mean, hopefully, they're not going to start thinking about cybersecurity only when something bad has occurred. Either leadership has made it a priority or government oversight, or corporate initiatives have made this something that is important to these groups that are managing security. And so, what I found is that there is actually a psychological link for their reactions to the stages of grief that someone would go through personally. And so, really the first thing is just complete shock of who owns this, what do I do. You know, if you think about the stages of grief, it goes from shock to denial to anger. And there's a lot of confusion in those first initial steps.
Dave Bittner: And so, how do you recommend people come at this? Given that reality, what sort of approach should they take?
Will Edwards: Well, I'll tell you what I've seen happen. And, you know, if you think about it in terms of being agile and trying to improve quickly, the first reaction that I typically see is a really wide-net approach. For some reason, human nature is that people want to solve problems with technology, so there oftentimes is an acquisition of technology that is improving security in some aspect. Then oftentimes there's an outreach to consultants for expert counsel and advice, or acceleration of things like policy and procedure development. Some people want to immediately defer risk, and so they look for insurance, you know, just to defer some of that risk. And then other people immediately start investing in what is the actual appropriate staffing that's going to be necessary to achieve the objectives we're looking for.
Dave Bittner: And are they successful with this approach? Are there any parts that work better than others?
Will Edwards: There's a lot of success. I'm a big fan of the idea that you can have micro wins with everything that you do. But typically what I see is that out of that first round of a wide net approach, they achieve something I call paperwork glory where you can have a lot of results, metrics, policies, and procedures, a lot of material such that you can feel like you've achieved something really meaningful because a lot of people want to achieve security quickly. I mean, right, security can't wait, adversaries don't just sit on the sidelines while you get prepared. And so, oftentimes there's this false sense of success through we have all our paperwork organized.
Dave Bittner: It reminds me of that old saying, you know, never confuse activity with progress, right? I imagine someone sitting there with a big pile of paperwork as you described and saying, "Look at all we've done here." But I guess that doesn't necessarily lead to a good outcome. Let's move on and talk about some of the successes here. I know you described something as being a potential umbrella of IT success. What do you mean there?
Will Edwards: Right. So, many of the organizations we work with when they see that there's a shortage of staffing will lean on the corporate security or IT security resources that exist. And so, when the CISO looks at the success of the organization, oftentimes the IT group can point to things like group policies and endpoint protection that achieve a high level of coverage and check a lot of the boxes from a corporate IT security perspective. And so, initial reports will be presented as we are really secure, we have a lot of things in a good spot. But really, they're always being viewed from the perspective of what have we done on the IT side or the information technology side. And when you get to the industrial control systems or the OT networks, oftentimes that is something that isn't measured and therefore oftentimes isn't accurately reported.
Dave Bittner: I know there's an analogy you like to use with the popular movie that I think most of us know.
Will Edwards: Yeah, this is not my -- this is not my analogy but I did run into a wise cybersecurity director who is at least honest, and he compared it to the Lion King with everything that light touches is secure but that shadowy area, that is OT cybersecurity. And you should never go there.
Dave Bittner: I love it. I love it. That's a good one. What about the CISO themselves? I mean, as you say, are they looking at this with clear eyes?
Will Edwards: I do believe there is like again a psychological bias towards having a clouded perspective. It is -- I've met with many CISOs, and initially, when cybersecurity encompasses OT, the CISO is typically from an IT background and is typically dependent on OT experts providing updates. And for some reason, there's a human-nature expectation that you don't want to present bad results or a lack of progress. And so, oftentimes, even for the first year or two, that initial wave of information reporting is that things are at a really good spot. There are no major issues, we are compliant. I mean, heck, there's even oftentimes ways that people have passed audits. And so, everything must be good. And, you know, from a CISO's perspective, you want to be able to believe that the company is secure and defended against adversary attack.
Dave Bittner: How does a CISO prevent themselves from falling into this dilemma of not seeing things the way they really are?
Will Edwards: Well, I think it really goes back to metrics. I think any leader can wisely lead their organization through solid metrics. And having an expectation that their division leaders own collecting and reporting on those metrics. And typically you have to push through the first level of metrics and start asking questions for more detail about things like show me the exact security controls that have been applied this month or the plan of action and milestone progress that has been made. Other examples could include things like when was the last time that our organization operationally exercised the incident response plan. Or, you know, for example, having expert counsel on annual improvements to policies and procedures that the CISO is actually going to be signing off on.
Dave Bittner: You know, as we're making our way through these stages of grief, it strikes me that, you know, anger is certainly on that list. And I just think it's fair to say that most people don't like change. How do those two things intersect?
Will Edwards: Yeah, there's a lot of evidence that this is an initial hurdle that you have to push through. And again, I'm really emphasizing that cybersecurity is an organizational success effort. And so, what we see is that the first security control that impacts the workflow associated with how field technicians or engineers are used to doing things is met with a lot of frustration or anger if you will. Similarly, if there's an apparent delay in the project schedule due to cybersecurity, then oftentimes people will say, well, if you want this project to be done on time, we can't do all of that security. But, you know, if you were to compare that to safety, there's a cultural expectation that you would never neglect the safety aspects of a project. Similarly with cost, you know, people have a hard time bundling the cost associated with security for what was originally viewed as maybe an independent project for protection or automation. And then lastly, unfortunately, human nature is such that as progress is being made, resistance will show up in the form of workarounds where people will start using secondary devices. People will start disabling administrative controls. Firewall exceptions will be added and never removed. And so, you end up kind of taking two steps forward and one back in that situation.
Dave Bittner: To what degree do you think communication can help with this here? I mean, I think, you know, a lot of times change comes from on high and, as they say, you know, you need to do this because we said so. But it strikes me that if you can explain to people the rationale behind the changes that might soften the blow.
Will Edwards: That is a really good question. I would just say that there's a few different things I've learned about trying to influence change, and it's not as simple as you might think. There's a great book "Think Again" by Adam Grant, and another book "The Human Element" by Loran Nordgren that talk a lot about human psychology and why things like scare tactics don't work, and what change talk should look like. I think the main thing is that it has to be a plan of communication, something that has scientific backing behind that strategy because there's a lot of ways to miscommunicate why changes are being put in place.
Dave Bittner: So, how do we get beyond these stages so when people are having an emotional response to these things, to these changes, how do we move on past that?
Will Edwards: Well, I think one wise step that many organizations realize is that they are no longer looking for that silver bullet technology. And I know that at SEL, we're big fans of looking at the root cause when we identify a pain point or a problem. And so, from a cybersecurity perspective, when organizations start looking at the root cause of why they are facing these cybersecurity challenges, that's when they'll back themselves up to the point where they realize it all starts with things like supply chain. It all starts with who are we buying services and products from, how are we making our requirements clear during the procurement process so that things are being quoted with security incorporated into the design and integration efforts, and then how during design, planning, and testing are we ensuring that our security objectives are being met. And when you do that, you kind of organically solve some of the challenges. And I think it's a lot different from, you know, if you were to look at something like the NIST Cybersecurity Framework, where folks might say, "Oh, the first step is you've just got to know what you've got and you've got to baseline, you know, where your problems are at." That's not where the problem came from.
Dave Bittner: You know, all of this is taking place within a regulatory regime. How does that play into this?
Will Edwards: Yeah, there's an analogy that regulation can freeze organizations. And oftentimes even grant funding can freeze organizations, where the idea of someone else paying for your security initiatives, or the idea that you're going to invest in security improvements only to have regulatory requirements move the finish line, creates a situation where people do nothing. And organizations that have matured really see compliance with regulation as a baseline of security expectations, the bare minimum. And so, they don't even blink at regulation because they know they're trying to push to industry best practice, and their organization has matured such that they know that the investments they make are achieving their goal of reducing risk and doing things like easing the burden of cybersecurity maintenance throughout the lifecycle of their systems. And so, while on the backend they may have engineers mapping the security defense to frameworks in support of audits or compliance to regulation, they really aren't hindered in any way by grant funding or regulation that might be released.
Dave Bittner: And as an organization makes their way through, you know, these various stages, you know, when we talk about reaching the stage of acceptance, you know, and how do you know when you're seeing the light at the end of the tunnel, when things are starting to heal?
Will Edwards: There's a few really good signs that tell you an organization is headed down the right path. One is when they are eager to collaborate with experts. I mean, many of the customers that I visit, you see them at the industry sectors and industry committees actually invested in being a part of a community of experts. And so, that collaboration helps optimize the way they make decisions. They also are continuously investing in education. Like, that's the sign of a good engineer, when you can see that burning passion for continuous learning. They're always asking for others to show them how they did that, they're looking for best-known methods in blogs or white papers, or even formal training, because they always want to be getting better, and that's what allows them, again, to guide an organization in the right direction with accelerated momentum. And then there's a few more keys that you start to see. Of course, no organization has an infinite budget or infinite time to improve security to an acceptable level of risk. And so, you see these organizations starting to wisely apply things like priority filters. You know, a lot of the industry is aware of the terms crown jewel analysis or business impact analysis. But you see that operationalized in how investments and security controls are being applied; in how centralized management is more than a tool investment but an actual functional component of the workflows that people are using on a daily basis; and in reporting that is no longer interns running around with Excel spreadsheets but actual, meaningful progress towards supporting audits or reporting back to, like I said, the CISO who wants to have that visibility. And then ultimately, because of their maturity, they end up seeing the value in purpose-built solutions from vendors. 
So, that's really where I think, you know, the closed-loop aspect happens: when, you know, the vendors of software and hardware are actually purpose-building technology to align with mature organizations' expectations and recognition of the value proposition.
Dave Bittner: What is your outlook here? I mean, do you have a sense that there's increased awareness here that more organizations are adopting these kinds of processes to come at these issues in a more informed and rational way?
Will Edwards: That's really what I would ask industry organizations to do: to pause and reflect and say, "Am I somewhere along these stages of grief with regards to the progress we're trying to make as an organization? Have I gotten beyond the desire for technology to solve my problems?" And I think you will see that in various industry verticals people are going to be at different levels of maturity. So, for example, in the electric industry in the United States, NERC CIP has mandated that high-impact facilities and organizations show proof of progress across a holistic view of cybersecurity. But, yeah, we've seen, you know, that the distribution side and lower-impact facilities -- like renewable energy, independent power producers, even the water and wastewater industries -- still have a long way to go. And so, I would hope that they would reach out to industry experts to help them understand, or diagnose if you will, where they're at and to help make decisions from an organizational standpoint that will put them in a better position to succeed.
Dave Bittner: Our thanks to Will Edwards from Schweitzer Engineering Labs for joining us.
[ Music ]
Back in the "Learning Lab" we have the conclusion of the discussion between Dragos' Mark Urban, Principal Adversary Hunter Kyle O'Meara, and Principal Intelligence Technical Account Manager Michael Gardner. They're talking threat hunting.
Mark Urban: Hi, I'm Mark. I'm once again with an episode of "Learning Lab" here on "Control Loop". And we're going to focus on threat hunting. There are a couple of different types of threat hunting, and to kind of describe, you know, some of those differences in the context that we're going to talk about today, I'm joined by Kyle O'Meara and Michael Gardner here at Dragos. So, as you're walking through these processes, what are the types of tools that you're using? You've talked about the tools that adversaries use; what are the tools that you're employing to, you know, conduct these hunts?
Kyle O'Meara: Well, I mean, you know, you typically don't talk about your explicit sources, but I can highlight. You have tools that, you know, look at network data -- that give you sort of the insight into a specific IP address or a specific domain. You have tools that give you specific insights into pieces of malware or just files themselves. You have tools that do a collection of scanning data that you can verify and look through, and you can leverage the IP addresses and domains that you have to compare against that. You have tools that collect different vulnerability-type data, and you can understand that and pivot into it -- you know, what's the landscape of those vulnerabilities across the internet, and things like that. So, for every different IOC, there is a company that has correlated all that data together, packaged it up, and sells it to different threat analysts, you know, hunters, researchers across all industries, whether it's inside or outside threat hunting, to help you build the case and help you understand what that IOC is and whether it's something that you can, you know, use to help solve your threat hunt.
Michael Gardner: From an asset owner-operator perspective, you know, this is oftentimes a good reason to lean on a third-party intelligence source: because, again, you know, kind of going back to that aperture point, when you're looking at someone who spends their time hunting for adversaries in a broader capacity, they have that wider aperture and can help you start to narrow that down. So, that's one of the tool sets that you'd use as an asset owner-operator. On top of that, you'll also be employing things like your SIEM and the visibility solutions that you have in your environment. You may be looking to deploy packet captures in specific domains of the network and using packet capture analysis tools like Wireshark. If you actually identify something in your threat hunt, you know, you may have a threat hunter that also has some forensic capabilities and may be using some forensic tools to take a look at files or malicious code that was identified in an environment. All sorts of things like that.
Mark Urban: It sounds like sifting through a lot of data. How do you figure out, you know, which data is important?
Michael Gardner: Not to keep harping on the point, but that's the reason for a strong hypothesis. There's a lot of planning that goes into carrying out a fruitful threat hunt. I'll talk about it from the -- again, from kind of the individual company perspective, and Kyle can talk about it from that broader aperture. You know, when you're talking about, let's again say, kind of a medium- to large-sized oil and gas company or a manufacturer, you know, you may have hundreds of thousands of endpoints in your entire network. You may have distributed sites in the OT realm. So, you really can't have a fruitful threat hunt by just looking across that entire domain, right? You can't look at the entirety of the log sources that you have in your environment and expect to identify, you know, the adversary lurking there. So, that's where you want to lean on that hypothesis development again. Looking at a specific adversary or a specific threat, or a specific tactic or technique, and identifying where that would be leveraged to the most gain for an adversary. Once you have kind of that assessment, you can start to narrow down where that is relevant in your environment and take a look at those log sources specifically. Really, again, starting to narrow that aperture and make it more likely that you'll actually identify something in the timeline you've laid out for a hunt.
Kyle O'Meara: I think, from, you know, the other angle too -- the analogy I was just thinking about is that we just don't use one tool. When I'm looking at an IP address, I'm using multiple different tools to look at it. And I think a good analogy is that there's two sides to every story and the truth lies in the middle. So, I don't believe in just one, you know, third-party source; I take a couple of different ones. I see what each is saying. And sometimes there are little nuances to each, because of maybe when they collected the data, how they got the data themselves, and things like that. So, you kind of make many hypotheses about what's going on in your dataset when you're looking at different -- you know, when you're leveraging different tools. And, you know, you take the truth that you believe, and you slap an assessment on that. And then you use that sort of to keep driving on, you know, to answer your hypothesis.
Mark Urban: Yeah, so the hypothesis helps keep you focused and not stray too far off course. And, you know, as you're focusing and you come across a piece of information, then you're using a number of tools to gather context. And then you're synthesizing the kind of conclusions around that particular item that you're looking at. Okay. So, a little bit of science and a little bit of art, which is why you want people doing that, you know, with a history, not only doing it but doing it in a specific domain of, you know, grids and manufacturing, plants and pipelines, water systems, et cetera. Let's move to what's the outcome of a threat hunt, you know? You talked a lot about the intel sources, the process, kind of some of the art and science involved with managing that process. What's the outcome that you're looking for? What's a good outcome, what's a bad outcome, maybe, as part of that?
Kyle O'Meara: So, in terms of my outcome from, like, a threat hunter standpoint, you know, the ultimate goal here at Dragos is to produce content for our customers, you know, on the intel team. As well as, then, what kind of detection ideations or detections can I share with our detection team to get them into the platform, right? So, there's kind of those, you know, two angles. Based on my hypothesis and my hunt and my conclusions and some assessments, you know, I put together a report that I put out, and that report might be very tactical, which typically a lot of times it is. So, it has, you know, the TTPs, the attack vectors, the IOCs associated with some type of, you know, incident or a cluster that the threat group is doing. It might be operational, a little more sort of digging into that part. Or it might be strategic level. I haven't touched much on that type of reporting. But we have all those different types of threat intelligence reporting. And then that just goes out to our customers, and they, you know -- I'll pivot over to Michael for how they get that into their environment and what they do with it.
Michael Gardner: Yeah. Thanks, Kyle. And I think that the ways that you would leverage intel reporting from a third-party source like Dragos are in many ways very similar to what you'd sort of follow up a threat hunt with internally. So, I kind of want to start with one point that I really like to make. I think that there's a huge misconception when we talk about threat hunting where people kind of think that a successful threat hunt means you found an adversary, that you've identified malicious activity and now you've kicked off an incident response. Obviously, that's successful in its own way where you're improving security posture by hopefully eradicating an adversary. But I don't think that's entirely true, because just carrying out the threat hunt is something that can lead to successes and other types of wins from a security posture perspective. So, Kyle touched on kind of the tactical. So, maybe that's developing specific indicators of compromise, or ingesting them from a third-party source and leveraging those for threat detection. It might also be identifying new tactics, techniques, or procedures that you don't currently maintain the ability to detect in your environment, and that kind of moves you into those operational wins. So, building detections in your environment. One big thing that may happen for a threat hunter at an individual organization is they identify a gap in log sources -- maybe a report from Kyle or someone on our intel team pointed them to a specific domain that they know exists in their OT environment, and they realize they have no visibility into it. Also, it might help you to streamline processes for future hunts and help you sort of narrow down the way that you develop those hypotheses and carry out hunts in quicker succession, so that you're able to be more efficient.
And then, again, Kyle touched on strategic points, but I think some of the strategic wins that come out of threat hunting and out of intelligence reporting are really key, especially when we're talking about OT, where so many organizations are moving along the maturity curve and improving their security posture. So, that's kind of providing analysis to strategic stakeholders like executive leaders and various security leaders, also business leaders that sort of own the environments that you are hunting in. Again, highlighting those gaps, highlighting the feasibility of some sort of malicious activity being employed in an environment like that, and talking about what mitigating actions need to be taken in order to prevent it can help lead to that continuous strategic security posture improvement.
Mark Urban: Good info. And I wanted to flip back to something Kyle said. You said you kind of had two outputs, Kyle. One was content, and that's for the subscribers of the Dragos WorldView intelligence service, right? You provide reports that deliver that tactical, operational, and strategic level intelligence for OT. And the other thing you said is, you know, to feed the detections. And I assume you're talking about how that intel is used by the Dragos software and technology platform, because, you know, that platform sits on the network monitoring traffic, and the intelligence you've created gets compiled into software code that then fires off a detection to tell a Dragos customer if they've got something that looks like a threat in their environment. Is that a fair description?
Kyle O'Meara: 100%. You nailed it right there.
Mark Urban: Talk a little bit more about WorldView. And, Michael, I know you really focus in this area. Those are reports that customers consume, including IOC feeds and other TTP feeds that can feed into their intelligence. Just talk a little bit about how that manifests. How would an electrical utility or a manufacturing company, you know, see and consume that type of information?
Michael Gardner: Yeah, yeah, 100%. So, I feel like we could play a drinking game off of my use of the word aperture. But again, Kyle has got that really, really wide scope. He's focusing on actors, techniques, and tool sets that are focused on operational environments in industrial organizations. So, you as an asset owner or operator with, you know, a more specific focus could begin to digest the reporting that's coming out of a source like WorldView to first off understand what adversary capabilities are in operational environments overall. I kind of like to think of it as layered threat landscapes. So, you have kind of the overall ICS threat landscape. And there are a lot of commonalities in technologies and processes regardless of whether you're talking about an electric utility or a water utility or an aviation manufacturer. There are always going to be similar processes there. There are also a lot of differences, and that's why the threat landscape starts to narrow down. Then you can start to develop an understanding of the threat landscape for your industry vertical specifically. So, beyond just understanding the capabilities of adversaries in ICS overall, you start to understand what the capabilities and intent of adversaries are in electric. And then you can start to kind of break that down by geographic region as well. By ingesting all of that information, you start to understand what your threat landscape actually is and compare that against your specific attack surface. So, what technologies are used in your environment that are similar to ones that were targeted by an adversary in the past? How can we improve our architecture? How can we improve our incident response capabilities? How can we develop more intentional visibility into these environments? That's kind of one of the most important sort of strategic questions that you can ask out of consuming threat intelligence. And then we keep moving down that line.
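[Editor's note: the "layered threat landscapes" idea Michael describes -- narrowing from the overall ICS landscape down to your sector and region -- can be sketched as a simple filter over an intelligence feed. This is an illustrative example only, not Dragos tooling; the report data and field names are hypothetical.]

```python
# Hypothetical intel reports, tagged by sector and region. "all"/"global"
# stand in for the ICS-wide layer that applies to every organization.
REPORTS = [
    {"title": "ICS-wide ransomware trends",
     "sectors": ["all"], "regions": ["global"]},
    {"title": "Adversary activity against electric utilities",
     "sectors": ["electric"], "regions": ["north-america"]},
    {"title": "Water utility HMI exposure",
     "sectors": ["water"], "regions": ["europe"]},
]

def relevant_reports(reports, sector, region):
    """Keep reports matching every layer of our threat landscape:
    the ICS-wide layer plus our specific vertical and geography."""
    selected = []
    for r in reports:
        sector_match = "all" in r["sectors"] or sector in r["sectors"]
        region_match = "global" in r["regions"] or region in r["regions"]
        if sector_match and region_match:
            selected.append(r["title"])
    return selected

# A North American electric utility sees the ICS-wide report plus its own
# vertical's reporting, but not the European water-sector report.
print(relevant_reports(REPORTS, "electric", "north-america"))
```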
From an operational perspective, you can start to actually inform your blue team and red team security personnel on what they should be looking for in your environment. You're not going to have a malicious event every single day, so you can learn from the events that we've seen in the past. And then as we kind of move down again into the tactical level, you can start to ingest things like indicators of compromise. And when we're talking about indicators of compromise, that can mean a lot of things, right? But in many cases, Kyle might be reporting on a specific family of malware that is active today and being leveraged by an adversary at real organizations like yours. So, if Kyle shares indicators of compromise used with that malware family, like C2 channels, you know, IP addresses associated with command and control, you can start to take proactive action to block that activity and prevent the risk of a similar campaign occurring at your organization in real time. You can also start to develop detections in your environment based on the techniques that you're seeing. So, there's really kind of an endless possibility of how you can actually operationalize actionable threat intelligence. I think one of the really important points is also understanding the vulnerability research that we do here at Dragos, or at, you know, tons of different intelligence providers. Understanding how they're identifying vulnerabilities, how they're successfully able to exploit them, and then ensuring that you have a vulnerability management process to take action on the vulnerabilities identified, but are also able to recognize what some of the more offensive actions taken by those researchers were, so that you can deploy efficient mitigation techniques in your environment.
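[Editor's note: the tactical step Michael describes -- ingesting C2 indicators from a report and checking them against your own network activity -- can be sketched as below. This is an illustrative example, not Dragos tooling; the indicator values and log format are hypothetical, and the IPs come from documentation-reserved ranges.]

```python
# Hypothetical C2 indicators taken from an intel report (RFC 5737 test IPs).
C2_INDICATORS = {"203.0.113.7", "198.51.100.42"}

# Hypothetical outbound connection log from an OT network boundary.
FLOW_LOG = [
    {"src": "10.1.2.3", "dst": "203.0.113.7"},   # workstation -> known C2
    {"src": "10.1.2.9", "dst": "192.0.2.15"},    # unremarkable destination
]

def match_iocs(flows, indicators):
    """Return (src, dst) pairs where the destination matches a C2 indicator."""
    hits = []
    for flow in flows:
        if flow["dst"] in indicators:
            hits.append((flow["src"], flow["dst"]))
    return hits

# Flagged hosts are candidates for blocking and incident response follow-up.
for src, dst in match_iocs(FLOW_LOG, C2_INDICATORS):
    print(f"ALERT: {src} contacted known C2 address {dst}")
```

In practice this matching happens inside a firewall, SIEM, or monitoring platform consuming a machine-readable feed, but the logic is the same: compare observed activity against the indicators the report provides, then block or investigate the matches.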
Mark Urban: That's a good segue into the next segment, which I think is going to be on vulnerabilities. I did want to circle back -- I'm an old network infrastructure guy, I came out of kind of the proxy world, endpoint security, etc. This is a unique OT area, and there are unique adversaries focused on it. And, you know, some of the outputs of this are information around IP addresses or other kinds of indicators of compromise that you can simply, you know, drop into a firewall block list -- I mean, to be preventative in that capacity, right? And that's the type of intelligence that you typically wouldn't see elsewhere -- there are plenty of those intelligence services on the IT side, many fewer on the OT side. And so, I thought that was an interesting thing. But you have that converged infrastructure of firewalls that, you know, can take in feeds from multiple places, and intelligence integrations that can take advantage of, you know, the different services. You know, you have services focused on IT. And if you're an industrial organization, how important is it to remember that manufacturing systems are different than IT? You know, distributed control systems are different, SCADA systems are different from the IT world. And I think that's what we want to kind of bring home as a conclusion. This is a very rich topic of threat hunting. And again, just kind of drawing the difference between, you know, the much more prevalent focus on IT versus the specialty adversaries out there, the specialty tactics, and the specialty threat researchers and threat hunters that we have here. Gentlemen, Michael, Kyle, much appreciated. Like I said, this is a very rich topic. I'm Mark Urban with Michael Gardner and Kyle O'Meara from Dragos on the "Learning Lab". Thanks very much.
[ Music ]
Dave Bittner: And that's "Control Loop" brought to you by the CyberWire and powered by Dragos. For links to all of today's stories, check out our show notes at thecyberwire.com. Sound design for this show is done by Elliott Peltzman with mixing by Tre Hester. Our senior producer is Jennifer Eiben. The script was written by Tim Nodar. Our Dragos producers are Joanne Rasch and Mark Urban. Our executive editor is Peter Kilpe. And I'm Dave Bittner. Thanks for listening. We'll see you back here next time.
[ Music ]