Control Loop: The OT Cybersecurity Podcast 4.19.23
Ep 23 | 4.19.23

Unique OT characteristics and points of IT convergence.


[ Music ]

Dave Bittner: It's April 19, 2023, and you're listening to Control Loop. In today's OT cybersecurity briefing, an update on Russia's Vulkan papers and what they mean for industrial cybersecurity. Cyberattacks against Canada's agriculture sector. There's a Hitachi ransomware incident. TSA issues new cybersecurity requirements for the aviation industry. A new Ransomware Vulnerability Warning Pilot supports critical infrastructure operators. And Patch Tuesday didn't leave out industrial control systems. Today's guest is JD Christopher, Dragos' Director of Cyber Risk, talking about ICS security standards and regulations and how efforts finalized in 2022 will shape the OT programs of the next decade. The Learning Lab has Dragos' Mark Urban joined by their CEO Robert M. Lee to talk about the unique characteristics of OT and points of IT convergence.

[ Music ]

Since the end of March, the media have reported on the activities of NTC Vulkan, a Russian company working on behalf of the Russian government. To recap briefly, NTC Vulkan is a Moscow-based IT consultancy that does contract work for all three of the major Russian intelligence services, the GRU, the SVR, and the FSB. Der Spiegel, one of a group of media outlets that broke the story, sourced it to a major leak of some thousand sensitive documents running more than 5,000 pages. The media consortium that received and shared the leaks includes German, French, British, and American papers. The Vulkan papers, as the leaks are being called, reveal that Vulkan is engaged in supporting a full range of offensive cyber operations, espionage, disinformation, and disruptive attacks intended to sabotage infrastructure. On Monday, Dragos released a study of what the Vulkan papers mean for that last class of activity, infrastructure disruption. Dragos took as its point of departure the coverage in the Washington Post and it focused in particular on one of Vulkan's tools, a malware suite known as "Amesit-B." The researchers offered three key takeaways. First, the papers represent genuine leaks. Dragos assesses with moderate confidence that the documents reviewed are legitimate and were leaked or stolen from a Russian contracting repository. Second, it is unlikely that these tools and platforms are exclusively used for testing or training purposes. They represent a real operational capability. And finally, Amesit-B represents a clear potential threat to the rail transportation and petrochemical sectors. Modules contained in the Amesit-B platform could allow for a range of impacts in rail and petrochemical environments which could result in physical consequences, including damage to physical equipment or creating unsafe conditions where injury and loss of life are possible. And what Amesit-B seems designed to do comes from a familiar Russian military intelligence playbook.
As Dragos puts it, the capabilities described are consistent with previous attacks attributed to various units of the Russian military's GRU with tactics, techniques, and procedures overlapping with multiple identified threat groups. The Amesit-B platform shows an interesting convergence of cyber operations with traditional signals intelligence and electronic warfare operations, and it's very much a combat support system, intended for battlefield use by a combatant commander. Dragos concludes with some advice to take Vulkan's capabilities seriously and to understand them in context. The researchers write, "Russian intelligence services continue to invest in the development of more efficient cyber operations at the beginning of the attack lifecycle, as shown by contracted projects from NTC Vulkan. The projects also reveal interest in using cyber operations to amplify psychological effects and target critical infrastructure, including energy utilities, oil and gas, water utilities, and transportation systems. Defenders should be aware of these capabilities and priorities to protect critical infrastructure and services." The Financial Post reports that the Canadian agriculture industry is increasingly being targeted by ransomware gangs and espionage-focused nation-state actors. The Post cites Dr. Ali Dehghantanha, head of the University of Guelph's Cyber Science Lab, as saying that these attacks have been escalating over the past four years. He said, "Every week, I would say, we are getting contacted by farmers or food companies. It's one of the soft bellies of our critical infrastructure." Many of these cases are typical ransomware attacks, but Dehghantanha says he's seen two instances in which attackers managed to access farm control systems and threatened to modify settings in order to kill livestock. 
Evan Fraser, Director of the Arrell Food Institute at the University of Guelph, told the Financial Post, "These are all systems that we explicitly depend on every single day and they have become extremely vulnerable to manipulation of all sorts. They're vulnerable because we haven't thought carefully about the security of how we set these systems up." It's worth recalling that farms themselves have grown increasingly automated and that systems that manage irrigation, fertilization, and livestock care are susceptible to disruption. They amount to an industrial subsector, and the surrounding industry that supports food processing and distribution is also significantly automated. There are many control systems between the crops in the field and the table they're destined for. Hitachi Energy, a subsidiary of the Japanese technology giant Hitachi, has confirmed that it sustained a data breach after falling victim to a Clop ransomware attack, Bleeping Computer reports. The threat actor carried out the attack via a vulnerability, CVE-2023-0669, in Fortra's GoAnywhere MFT. Hitachi Energy said in a press release that the threat actor accessed employee data in some countries, but there's no evidence that any customer data was breached, nor that any control systems were compromised. But ransomware remains a threat to industrial systems, and a pivot from business to control networks is always a possibility. The U.S. Transportation Security Administration on March 7th issued an emergency cybersecurity amendment for the security programs of airport and aircraft operators. The TSA says the measures are urgent due to persistent cybersecurity threats against U.S. critical infrastructure, including the aviation sector. The amendment requires that impacted TSA-regulated entities develop an approved implementation plan that describes measures they are taking to improve their cybersecurity resilience and prevent disruption and degradation to their infrastructure. 
This includes developing network segmentation policies and controls to ensure that operational technology systems can continue to safely operate in the event that an information technology system has been compromised, and vice versa. CISA, the U.S. Cybersecurity and Infrastructure Security Agency, has announced the launch of the Ransomware Vulnerability Warning Pilot, a support program designed to help critical infrastructure operators protect themselves against ransomware attacks. Authorized by the Cyber Incident Reporting for Critical Infrastructure Act of 2022, the RVWP will help CISA detect vulnerabilities susceptible to exploitation by ransomware and alert critical infrastructure operators so that the flaws can be mitigated before attacks occur. As Bleeping Computer notes, the RVWP is part of the U.S.'s wider initiative to defend against the rising threat of ransomware that began after a wave of cyberattacks on critical infrastructure operators and government agencies. Interested organizations can email CISA to enroll. Remember that the goal of ransomware operators is extortion and they're interested in holding any kind of system at risk. Many people tend to think of extortion as ransomware that encrypts data on business systems, but in truth, a ransomware operator could hit and deny access to any sort of data or system. Such criminals have hit and can be expected to continue to hit industrial systems. As Willie Sutton said when he was asked why he robbed banks, "It's because that's where the money is." If extortion is possible, the crooks will certainly give it a try. Industrial control systems were represented in this month's Patch Tuesday. Between them, Siemens and Schneider Electric addressed 38 vulnerabilities in their products. Keeping abreast of patches and mitigations is, of course, an important part of sound cyber hygiene, and users of Siemens and Schneider Electric products should check the vendor's announcements and apply the recommended upgrades. 
SecurityWeek has a convenient rundown of the fixes. They call out CVE-2023-28489, a critical vulnerability affecting SICAM A8000 series remote terminal units, as the most significant of the Siemens fixes. Those RTUs are widely used for telecontrol and automation in the energy supply sector. Of Schneider Electric's six advisories, the ones that stand out are two critical and one high-severity vulnerabilities affecting APC and Schneider-branded Easy UPS Online Monitoring Software. Exploitation can lead to remote code execution or a denial-of-service condition. So, as CISA would put it, apply mitigations per vendor instructions.
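That advice about keeping abreast of patches can be made concrete. Here is a minimal, hypothetical sketch of checking an asset inventory against a list of advisory entries. The fixed-version numbers and the second CVE identifier below are invented for illustration; the real fixed versions are in the Siemens and Schneider Electric advisories themselves.

```python
# Hypothetical sketch: comparing installed versions against vendor advisories.
# Product names appear in this episode; the version numbers and the second
# CVE are made up for illustration.

from dataclasses import dataclass


@dataclass
class Advisory:
    product: str
    fixed_version: tuple  # first version containing the fix
    cve: str


def parse_version(s: str) -> tuple:
    """Turn a dotted version string into a comparable tuple, e.g. '2.3.1' -> (2, 3, 1)."""
    return tuple(int(part) for part in s.split("."))


def outstanding_patches(inventory: dict, advisories: list) -> list:
    """Return (product, CVE) pairs for assets still running a version below the fix."""
    findings = []
    for advisory in advisories:
        installed = inventory.get(advisory.product)
        if installed is not None and parse_version(installed) < advisory.fixed_version:
            findings.append((advisory.product, advisory.cve))
    return findings


advisories = [
    Advisory("SICAM A8000 CP-8050", parse_version("4.80"), "CVE-2023-28489"),
    Advisory("Easy UPS Online Monitoring", parse_version("2.6"), "CVE-EXAMPLE-0001"),
]
inventory = {"SICAM A8000 CP-8050": "4.70", "Easy UPS Online Monitoring": "2.6"}

print(outstanding_patches(inventory, advisories))
# → [('SICAM A8000 CP-8050', 'CVE-2023-28489')]
```

Comparing versions as tuples rather than strings avoids the classic trap where "4.9" sorts after "4.80" lexically; in a real program you would track advisories from the vendors' own feeds rather than a hand-built list.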

[ Music ]

Our guest this week is JD Christopher, Dragos' Director of Cyber Risk. He speaks about ICS security standards and regulations and how efforts finalized in 2022 will shape the OT programs of the next decade. So Jason, you have a presentation that I know you're preparing to give at an upcoming SANS ICS Summit, and you're going to be talking about ICS security standards and regulations. I wanted to touch base with you on that and see what sort of things you had to share with our audience.

Jason Christopher: Yeah, it's interesting because it really pulls the thread forward from our last conversation on the evolution of the CISO, and when we started looking at some of the recent trends that have taken place really over the past year, we started realizing that the OT security program of the future has already been written. And what I mean by that -- so my background comes from a lot of standards development organizations working in OT cybersecurity, and when we look at the mandatory regulations that took place in the electric sector, dating back to the early 2000s and becoming mandatory and enforceable in 2010, you can start seeing the trends that we're already seeing today in those programs. The idea of having some of that DNA baked in from the get-go doesn't really show itself in that first year or two, even, of new standards and regulations, but really at that decade mark. So I challenged myself. I really wanted to look at all the things that we did in the past year and say, okay, what will the program in 2032, 2033 look like? And that was the really interesting aspect of looking at all these standards, sort of doing a meta-analysis of it, was seeing that there was so much similar DNA across all of them that you could really start analyzing and understanding what that growth trajectory would look like, and it was interesting from the perspective of the new things that we're going to expect, not only of the executives and managers, as we talked about a little bit last time, but also what sort of capabilities we expect an OT security program to have, and then how that blends with the IT side and where that divide is. And so the program of the future, I think, is going to be really interesting, and if folks weren't paying attention to the standards and guidelines last year when these things were developed, then they're going to find themselves ill-prepared for what that looks like in the coming decade or so.

Dave Bittner: Well, can we go through some of this together? What are some of the highlights you think may come to pass?

Jason Christopher: Sure. So when we look at the overall understanding of these programs, the very first thing is going to be, what are your impacts? And I'll actually quote one of the regulations that will be coming up, with the Securities and Exchange Commission talking to public companies here in North America about what a material incident looks like, and having some sort of reporting structure that goes directly to the board about a material incident. So we now have this thread we can pull about what it means to have a cybersecurity incident inside of OT. Do we know what those scenarios look like? Are we tracking them in some sort of risk register? Are we able to understand the vulnerabilities we have in that system and the relevant threats to that system, and therefore come up with response criteria that sort of simulate what the scenarios would look like? We talk in broad scales about the program of the past having things like tabletops, but real understanding at an executive level of what a cybersecurity incident in OT would look like really hasn't been pushed all that far, certainly not by regulations, and very few organizations have done that maturely in their own programs, too. So that would absolutely be one of the aspects: sort of playing like it's game day, understanding what a bad scenario would look like, and working your way through those. The other aspects would then start shifting towards, what are we doing to mitigate a lot of those risks that we identify? So it brings us into this new conversation in OT about, let's not just have these sort of minimum base standards, but let's have a plan and enact that plan based off these scenarios, come up with a defensible architecture, and really understand what that means for overall program development. So we're crafting our own plans, which is interesting. Most regulations don't like you creating your own programs.
The new ones have been coming out saying, "Well, why don't you create it yourself and we'll look at your plans as well as how well your program is doing," and it now enforces this idea of an internal audit, a way for you to strengthen your own discussion. As a matter of fact, the major consulting companies would call this the "third line of defense": checking that you're doing what you say you're going to do, and building that into the program. And that is very rare in OT security, to be able to have that internal audit function to double-check your homework.

Dave Bittner: Forgive the naivety of this question, but is there -- is there any risk inherent in overshooting the regulations? In other words, the regulations say, "You must do this," and the organization says, "You know what, we're going to do that and we're going to do even more." Is there any potential peril there?

Jason Christopher: There is, and we've actually seen that a lot with the NERC CIP mandatory standards that are referenced here and [inaudible] North America, where a lot of utilities are sort of finding themselves in this weird catch-22 because, for them, when they are going to go and invest in their program, they need to be able to recover that rate, and that will then have its own sort of regulatory piece in there, right? How it is that you charge your customers for electricity here. You can't, you know, go for that gold-plating conversation, for sure, so you need to be able to start talking about, well, what do the requirements have us do? And sometimes that becomes a race to the bottom, right? What's the lowest common denominator, unfortunately. You want to be able to push those where you can for better security, but you can't go so far that you're now deviating from the standard, because now you're going to have an audit where that can come in scope, and even though you've done better than the minimum requirements, you find yourself getting audited to something that maybe had been that moonshot discussion that we were talking about. And so a lot of utilities will find themselves sort of straddling that middle ground, but more often than not deviating back down to that minimum baseline, which becomes an unfortunate check-the-box exercise for a lot of organizations, as opposed to being baked into our culture and built into our plans that we can then improve upon over time.

Dave Bittner: How strict do we find most of the regulators in this case in terms of oversight? Are they -- I guess I'm wondering, do we -- are we chasing the spirit of the law, the letter of the law, or is it somewhere in between?

Jason Christopher: I'd say for the new ones that are coming out -- and when I think of the new regulations, I think of the TSA pipeline security directives that happened last year, I look at the NIS 2 Directive in Europe -- those in particular I think will be an evolution where we are going to see how those auditors really start grappling with OT security. They may come from an IT security sort of shared background or they may come from operations and be learning cybersecurity, and so there's going to be lessons learned there for sure. Historically, in the NERC CIP sense, where we have had mandatory regulations for OT security, it has really been, you know, lessons learned over the years. There's a fantastic chart that I show when I'm teaching SANS ICS 456, which is dedicated to NERC CIP compliance, where once the standards became mandatory and enforceable, the possible violations, the things that people were doing wrong, skyrocketed, and then over time, they got better, and the conversation there is that you'd expect people to get better as opposed to constantly hurting themselves, right? If you're going to get penalized, if you're going to deal with these violations, you expect people to learn their lessons and improve over time, and that I think is going to be the same case and, again, why I'm looking at this program development discussion being a 10-year discussion, not a 2-year discussion, because we're going to see violations. We're going to see people learning, and then we're going to see improvement after they learn their lessons.

Dave Bittner: Are you optimistic that looking at a 10-year timeline, that 10 years from now we're going to find ourselves in a better place than we are today?

Jason Christopher: Yes, I am, and that comes from the experience that I've seen in mandatory regulation. There are a lot of complaints, and valid complaints, that people can have about mandatory cybersecurity regulations -- we can even use that NERC CIP example, where folks are concerned that this feels more like chasing down paperwork or that there's an overhead and burden to these things -- but then I look at the benefits that have absolutely come out of mandatory regulation over the past decade where we've seen it. We've seen better conversations about access controls. We've seen better conversations about what we can do to log and identify what a cybersecurity event looks like inside of our environments. Those would not necessarily have been there if we didn't have sort of this bare minimum, this minimum threshold saying, "You must at least do this" to push us there. So I do think that there's going to be a better conversation about OT security in the future because of regulation. It has a role to play. It's not the sole purpose of the program. I don't want people just chasing down regulation, but it does give you sort of that safety net discussion about what we can do better for the entire industry.

Dave Bittner: What do you see in terms of the relationship between IT and OT over the next decade? Is this something where, by necessity, they're going to get closer together and it's going to be more interactive, or do you think we'll still have this bit of a divide that we experience today?

Jason Christopher: There absolutely is going to be a convergence. There has to be. So when I think about where the IT security program has really matured over the past 10 or 20 years, they are far more robust in things like incident response and logging and monitoring. Those are things that in OT we just sort of see not be as successful, and those would be the ones that we would say to prioritize first, right? Understand what a bad day looks like and be able to identify that day. Prevention is ideal. Detection is a must, but detection without the ability to respond to it is going to be of very little value, so we need to be able to prioritize our ability to respond, our ability to detect, as sort of our way of life in OT. Where I see the divide shrinking is where you already have in IT, maybe, a SIEM, a SOC, a SOAR, all these great programs that allow us to do detection and analysis quicker. We want to now get that data from OT. We want to be able to pull that information in, and I think that information-pulling is going to force these programs to converge more and really have a central ability to say, "If I were to have a reliability incident or a production outage, how long would it take me to identify that that was a cyber incident?" Because most of these incidents are going to look and feel like a maintenance event. It's not going to be the fantastic 1995 film Hackers. I'm not going to have a virus singing "row, row, row your boat" while it's capsizing ships, right? So I need to be able to say, if there's a maintenance event or communications outage, how do I know whether that's a security event? And the only way to really have that sort of confidence behind your answer is converging those pieces together and being able to say, with that overlay of IT and OT security information, definitively yes or definitively no, and until we get to that point, we're not going to really have that confidence as an organization.
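The overlay Jason describes -- asking whether an operational outage had security activity around it -- can be sketched in a few lines. This is a hypothetical illustration, not any product's API; the event shapes, asset names, and detection rules are all invented.

```python
# Hypothetical sketch of an IT/OT overlay: given an operational outage window,
# pull any security detections from the same asset in (or near) that window,
# so an analyst can judge whether the outage looks cyber-related.
# All asset names and rule names below are invented.

from datetime import datetime, timedelta


def detections_near_outage(outage, detections, margin_minutes=60):
    """Return security detections on the affected asset within the outage
    window, padded by a margin on both sides."""
    margin = timedelta(minutes=margin_minutes)
    start = outage["start"] - margin
    end = outage["end"] + margin
    return [
        d for d in detections
        if d["asset"] == outage["asset"] and start <= d["time"] <= end
    ]


outage = {
    "asset": "historian-01",
    "start": datetime(2023, 4, 1, 2, 0),
    "end": datetime(2023, 4, 1, 3, 30),
}
detections = [
    {"asset": "historian-01", "time": datetime(2023, 4, 1, 1, 45), "rule": "new remote service"},
    {"asset": "hmi-07", "time": datetime(2023, 4, 1, 2, 10), "rule": "logic download"},
]

print(detections_near_outage(outage, detections))
```

The margin matters: attacker activity often precedes the visible operational symptom, so a strict intersection of the two time windows would miss the lead-up. A real implementation would query the SIEM and the OT visibility platform rather than in-memory lists.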

[ Music ]

Dave Bittner: In today's Learning Lab, we feature Dragos' Mark Urban joined by CEO Robert M. Lee to talk about the unique characteristics of OT and points of IT convergence.

[ Music ]

Mark Urban: This is Mark Urban with another installment of the Learning Lab on Control Loop. Here I'm joined by Rob Lee, Dragos CEO, talking a little bit about the unique focus on OT and why it's important. Thanks for coming, Rob.

Rob Lee: Yeah, absolutely, and look, we've talked before about why OT and IT are different, but just as the shortest recap possible, what I would largely tell people is that, you know, IT tends to be a lot about system security and data security, and OT tends to be a lot about system-of-systems security and physics. You know, we're so bound by the laws of physics and what can and can't happen, but it's much more about the interaction between systems, which is also why you see a big push on network security in that kind of space and understanding protocols and things like that. But, you know, this topic of, like, why OT, and, like, why is OT special, and all that comes up all the time, and there's so much from IT that you can learn and apply in OT, but what you apply, where you apply it, why you apply it is going to be very different, and then there are also very unique things about OT. So as an example, if we were to look at how the IT security controls have been developed, they've been developed over the years for IT systems operating in IT environments and enterprise environments with an understanding of the threats targeting them, the risks that they pose, the impact of their systems, the vulnerabilities, the methods of the adversaries. Like, that's how we designed those security controls: endpoint protection, why we use application security in the way we did, fundamental operating system security that Microsoft has built in, you know, etc., etc. When you look at OT, we have different systems. We have different impact of those systems. We have different threats that target those systems, and they do it in different ways. We have different risks and vulnerabilities in those systems. And so I struggle to determine why we would copy and paste security controls made for one set of systems, with its impacts, risks, vulnerabilities, threats, and methods, into an environment with different risk, impact, threats, vulnerabilities, etc.
So just, I think some people approach this argument from, "Why not?" And I usually approach the argument from, like, "Why? What are you trying to accomplish?" There are a lot of IT security controls that may feel good, that may be applicable in an ICS or OT environment, but they don't reduce any risk and it's just not worth doing, or they don't reduce much risk and the return on investment is pretty low. So having unique OT solutions designed for OT systems and the OT risk, impact, threats, etc., it just makes a ton of sense to me, and we've seen it work really, really well in the real world against threats and risks and impact. And, you know, I think a lot about the fact that years ago, in a pipeline in the U.S., there was a rupture, and, essentially, one of the components of the root cause was that a system administrator in IT did a merging of a database on the SCADA environment, and the merging of that database caused lag on that system and over-utilized the resources, so the operators didn't get the alarms that were coming off the pipeline. There was an over-pressurization event, the pipeline ruptured, and more than a dozen people lost their lives. And so nobody in IT has ever died from doing a database merger, but they literally have in operations. It's just -- it's a different world. And if I could expound on that a little bit more, I would say that's also where I get frustrated with people, and I try not to die on this hill, right? Words have meaning and matter, but you don't want to beat up anybody over word choices, but it is where I get frustrated when I hear a term like "convergence." People talk about IT/OT convergence. My first flippant response is that that happened 20 years ago -- like, we have Windows operating systems in a power plant. We have Windows CP cards on the side of an ABB RTU.
You know, like, we've converged, but the mission is different, the impact is different, the risks are different, and we still have some specialized and purposeful systems and network protocols and similar, so that convergence didn't really matter for what we're trying to accomplish, and I think people miss that, but they talk about convergence like it's coming, when really the next phase isn't convergence, in my opinion. It is kind of that digital transformation, digitalization, whatever you want to call it -- that is true -- but what's really happening is IT/OT dependence. So a manufacturing environment, as an example, that's running their manufacturing execution system may depend very heavily on the ERP system in IT for scheduling, and we are dependent in the factory on IT having access and having the availability of that system, in the same way that maybe at the remote field site I'm dependent on the IT backbone connecting me out to the cloud. The idea that the cloud was going to get connected to field sites used to be silly, but nowadays we're using it for data modeling and forecasting and things that are driving value to operations, and so those connections and similar are pretty important. So operations have become more dependent on IT and will become more dependent on IT, but OT and operations itself are still always going to be separate, and OT is the critical part of critical infrastructure -- it's the whole point of it. Like, IT is there to support the business. OT is the point of the business; that is what you are doing that keeps you in business. And I still think there are a lot of security professionals out there that don't understand their business or understand the mission of that site, that plant, or even the company, and therefore the security controls that you're trying to apply are [inaudible] copy and pasted, leaning on standards and frameworks or regulations.
It's not going to be purpose-built for what you're actually trying to accomplish because you don't necessarily even know what you're trying to accomplish. So my biggest piece of advice for folks from IT coming into OT is: first, learn the mission. What are we trying to accomplish? Learn the requirements, then figure out what of your security background can translate over and drive value, and then what are the gaps you have that may need unique OT solutions? I think that's where the community is better off going, and that's where I see companies having a lot of success when they kind of get to that point, but yeah, it's not IT and OT convergence. I just try not to argue with everybody when I say that term.

Mark Urban: One question, Rob, is, as you look at cyber for OT, there are points at which that intersects with IT security practices. Can you talk a little bit about how that unique OT focus gets integrated when you're investigating incidents or trying to triage across events? The tool sets that are used by the SOC and by analysts -- how do those converge to get OT cyber event information into broader IT-focused processes?

Rob Lee: Yeah. So when you look at integrating OT cyber events into kind of a security operations process, like a SOC process or so forth, generally, what I recommend folks do is, like, in a perfect world, you'd have a dedicated OT SOC. You just would. In a perfect, fully resourced world, the critical part of your business would get attention and resources. Like, that's not a controversial statement to make. But we don't live in a perfect, fully resourced world. We're usually dealing with resource constraints and restrictions inside of IT. Totally get it. So in an ideal world, what I would like to see is a setup that respects the differences in OT and respects the different needs and impacts while taking advantage of the IT security skill sets, processes, structure, right? We don't want to gold-plate this problem. We want to be efficient. And so what I generally recommend is do your tooling, your OT tooling, right -- have your OT detection tools, your prevention focuses, the things that help you respond, whatever -- and realistically, if you go back to the five critical controls that the SANS Institute published for ICS, we'd look at that first control of incident response, right? Reverse engineer what we're going to need out of the incident response, and that's going to help you make a defensible architecture for control number two, and that's going to help you understand what you need to collect in the first place, and that collection is going to be the data that would be helpful in an incident but also lets you detect threats. So anyway, so you kind of know that. You know the requirements. You kind of understand the scenarios you're preparing your company for, and then you do your OT tooling, and that could be something like our platform and our tech, but it could be any other choice, right?
Whatever you're doing that understands network communications, ICS protocols, things like that -- once you get that deployed and kind of have that visibility, then I would integrate it into kind of those security operations processes, but I wouldn't just have the expectation that every event gets triaged in an IT SOC. It's just not scalable, even for IT practices. What I would recommend is that the structure you put in place is that the OT tooling comes into the same tier one that's shared across the organization, and that tier one group is sitting there triaging events and so forth, and what I want those tier one folks to look for as it relates to the OT environment is not, like, indicator sweeps, because there's probably not a lot of overlap in indicators on the threats inside the ICS networks -- there can be, but it's not a reliable detection method. It's more useful for forensics than detection. Feel free to do the indicators, if you want, but the realistic value is they should understand what scenarios the company cares about. That should identify, out of the 81 or so tactics, techniques, and procedures in MITRE ATT&CK for ICS, the maybe 20 or 30 that you care about for your organization against the risk scenarios you determine are relevant to your organization. Out of that, I would then take those 20 or so TTPs and I would map the detections that we have that can come off those appliances to those, if they're not already mapped, and I'd be telling my tier one analysts, hey, when you have these detections fire, here's how you triage them and this is what you're going to focus on, instead of triaging every event, every anomaly, every alert. That is not scalable. Once that kind of classic tier one escalates it up, if it comes off the IT environment, get it to your IT tier two and continue on your normal processes.
If it comes off the OT environment, I want a dedicated tier two function for OT -- trained differently, different skill sets we may be hiring for, different development programs, measured differently, different KPIs, things like that, right? And what I want them to be able to do is pivot back -- if they're in Splunk or ArcSight or whatever their SIEM is, or aggregator of choice, I want them to be able to pivot back into that operations technology security appliance, right? Like, into Dragos or whoever else -- pivot back into that visibility product, if you will, and try to understand the environmental context around the detection. Is this something abnormal? Got it, okay. Is this something relevant to us? Okay, got it. What's the environmental context? Is this a random HMI and we've got plenty of them? Is it a workstation that is modifying the logic on a safety system? Is it the Active Directory that's connecting up IT to OT and could be used as a pivot point? Is it our core historian? What is this thing? What is the context? What is the context in consideration of this environment? How potentially real is this threat that we're now looking at? And then if it's valid, escalate it up to a tier three. But in my opinion, tier three should not be some rock star subject matter expert on analysis. To me, tier three is site-level personnel, and calling out to people who already know us -- hopefully we've already gotten to know them, especially in tabletop exercises or prep ahead of time -- and I'm calling my buddy, "Hey, Jan, hey, Joe," you know, whatever, out in the field. "Hey, this is Rob." It's not "Hi, this is security and you've never met me before." It's "Hey, this is Rob. Yeah, how you been? How's the family? Cool, man. I'm looking at something down on your site. I saw this detection, saw this alert. Here's our awareness of it. Here's the context around it, what we think it is. Here's why we think it's a valid threat and risk. What do you think?"
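The environmental-context questions Rob's OT tier two asks -- is this a random HMI, a safety-system workstation, the IT/OT Active Directory, the core historian? -- can be sketched as a simple criticality-weighted escalation decision. The asset roles, weights, and threshold below are hypothetical illustrations, not a standard scoring scheme.

```python
# Sketch of tier-two context assessment: combine what the asset is with
# how confident we are in the detection to decide whether to call the
# site (tier three). Roles, weights, and threshold are made-up examples.

ASSET_CRITICALITY = {
    "hmi": 2,                        # one of many; limited blast radius
    "historian": 4,                  # core data dependency
    "it_ot_active_directory": 5,     # potential IT-to-OT pivot point
    "safety_system_workstation": 5,  # can modify safety logic
}

def escalation_priority(asset_role, detection_confidence):
    """Weight detection confidence (0.0-1.0) by asset criticality."""
    return ASSET_CRITICALITY.get(asset_role, 1) * detection_confidence

def should_call_site(asset_role, detection_confidence, threshold=3.0):
    """True when the weighted score warrants ringing site personnel."""
    return escalation_priority(asset_role, detection_confidence) >= threshold
```

Note what the score deliberately cannot capture, which is Rob's point about tier three: only site personnel know the operational impact, so the function decides when to pick up the phone, not what to do about the plant.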
What's the impact in this environment? What would the impact be there if this were to actually take place? Because your tier two, or anybody sitting in security, can really understand threats, can really understand methods, can really understand the risk of that threat, but they'll never understand the impact -- they'll never understand that site's impact. So you want somebody in operations to go, "That's just GE coming in," or "That -- that looks malicious, but it's a backup system and we can handle that on routine maintenance," or "Hey, I don't know. That's something we haven't seen before and that's on a system that's particularly dangerous, and yeah, we could probably use some help. Could you send somebody down and help us investigate this?" So it's that tier three that really gives you the context of the impact, but it's also the fact that operations still owns and still maintains control of their own environments. So now it's not some threat of, oh, we let IT in, they thought there was a risk, and they came down like the FBI waving badges and took over our plant, you know. Nobody wants that. Control system professionals don't want to lose control, and so it allows us to be the service-oriented professionals, bringing our unique insights and context, having the OT-unique insights in there, not just random IT security stuff, and allowing the operator or some site personnel to be able to make that final decision, because again, we're there to help them and serve them. And I find that approach -- taking unique OT tooling, unique OT requirements, respecting the uniqueness of OT, but also then understanding that not everything is unique, and being able to copy and paste those skills, whether we're integrating into Splunk or ServiceNow or whatever for our security operations and kind of normal operations approaches -- that makes it very repeatable. It makes it something that you can have success in.
We've seen a lot of customers have success doing that approach, and it's something that ultimately builds bridges and culture between IT and OT instead of tearing it down.

Mark Urban: Good examples of the differences in OT and IT and how they relate to one another, especially in the context of [inaudible]. Thanks very much, Rob. Appreciate your time today.

Rob Lee: Thank you as well. Take care.

[ Music ]

Dave Bittner: And that's Control Loop, brought to you by the CyberWire and powered by Dragos. For links to all of today's stories, check out our show notes. Sound design for this show is done by Elliott Peltzman, with mixing by Tre Hester. Our senior producer is Jennifer Eiben. Our Dragos producers are Joanne Rasch and Mark Urban. Our executive editor is Peter Kilpe, and I'm Dave Bittner. Thanks for listening. We'll see you back here next time.

[ Music ]