CSO Perspectives (public) 6.17.24
Ep 90 | 6.17.24

The current state of XDR: A Rick-the-toolman episode.


Rick Howard: In the early days of this podcast back in 2021 we published a Rick the Tool Man love letter to this newfangled security tool called XDR. [ Soundbite of TV Show, "Home Improvement" ] You might have heard about it. The acronym stands for extended detection and response, and I was gushing about how this tool might transform the modern day security architecture. [ Soundbite of TV show, "Home Improvement" ] Back then Gartner placed XDR at the beginning of the journey on its famous Hype Cycle chart, just starting the climb toward the peak of inflated expectations. And I was jumping on the bandwagon to help inflate the hype. Two years later, July 2023, Gartner placed XDR on the back end of the peak, just starting the steep roller coaster ride down toward the trough of disillusionment, and forecasted 5 to 10 years before it reaches the plateau of productivity. Since this is typically the time when security pros start to lose faith in a product idea, because the existing products haven't lived up to the hype, I thought it was time to revisit the current state of XDR, because I still believe that it represents the future of the security architecture we all need. I don't want the infosec profession to lose sight of this potentially transformational tool just because it's not quite ready for prime time. So hold on to your butts.

Unidentified person: Hold on to your butts. Butts. Butts.

Rick Howard: In this Rick the Tool Man episode we're going to explore the current state of XDR. [ Soundbite of TV show, "Home Improvement" ] [ Music ] My name is Rick Howard and I'm broadcasting from N2K Cyber's secret sanctum sanctorum studios, located underwater somewhere along the Patapsco River near Baltimore Harbor, Maryland, in the good old U.S. of A. And you're listening to "CSO Perspectives," my podcast about the ideas, strategies, and technologies that senior security executives wrestle with on a daily basis. [ Music ] I can understand why the idea of XDR is sprinting towards the trough of disillusionment, though. Most of the security platform vendors have a product that they call XDR, like SentinelOne, Splunk, Microsoft, IBM, CrowdStrike, Cisco, and Palo Alto Networks, just to name a few. But none of their explanations about what XDR is and what it does matches exactly. Gartner says that XDR is a quote, "Unified security incident detection and response platform that automatically collects and correlates data from multiple proprietary security components." Unquote. [ Soundbite of TV show, "Home Improvement" ] That's accurate, but you could also say the same thing about SIEM tools, security information and event management tools. [ Soundbite of TV show, "Home Improvement" ] I'm looking for something a little more descriptive. What makes XDR special? A subtle difference between a SIEM tool and an XDR tool is how the two technologies collect the data. With SIEM tools, the monitored system, let's say a Fortinet firewall, generates logs as part of its normal operation. The firewall administrator configures the system to automatically send the log data to the SIEM tool for storage and processing. The XDR tool is different. XDR administrators configure the tool to directly connect to the Fortinet firewall via an API, an application programming interface.
The API allows XDR administrators to interrogate the firewall for the specific data they need, not just general purpose log data but any information on the system, and transports the data to the vendor-provided XDR data lake for storage and future processing. Both methods allow, as Gartner says, a platform to collect data from varied sources: log data in the case of the SIEM tool and any kind of data in the case of the XDR tool. But the evolutionary step of using APIs to collect the data is what makes XDR tools so transformational. It gives us some options. [ Soundbite of TV show, "Home Improvement" ] [ Music ] Rick Doten is an old friend of mine, the security VP at Centene and a regular contributor here at the N2K CyberWire hash table. This is how he describes it.

Rick Doten: Like zero trust, it's not a thing. It is an approach. And so when someone says there is an XDR tool then it's like, well, that's how everything works now. I mean to me it's about the difference between waiting for logs to be written and then consuming logs and reading logs and deriving things from those logs as opposed to connecting directly with the API and having instant access into things that are happening and then sending alerts and be able to do responses based on that. I mean that's the fundamental to it. And, you know, I talk to a lot of vendors and a lot of start ups and all of the posture management tools whether it's cloud posture management, data posture management, or dating match management, asset management, you know, run time, all of them this is how it works. It's like everything is just -- everything is API based so let's just plug into the APIs and pull the stuff we want to pull and be able to set rules around it.
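The push-versus-pull distinction the two Ricks are describing can be caricatured in a few lines of Python. Everything here is invented for illustration: the device state, the field names, and the commented-out endpoint URL are placeholders, not a real Fortinet or XDR-vendor API.

```python
# Sketch of the two collection models: a SIEM passively receives whatever
# logs a device pushes, while an XDR platform actively pulls the specific
# fields it wants over an API. All names below are hypothetical.
import json


def siem_receive(pushed_logline: str) -> dict:
    """SIEM model: accept whatever log line the firewall decided to send."""
    return {"raw": pushed_logline}  # analysis happens later, on stored logs


def xdr_pull(device_state: dict, fields: list[str]) -> dict:
    """XDR model: interrogate the device for exactly the data we need."""
    # In practice this would be an authenticated HTTPS request, e.g.:
    #   GET https://fw01.example.com/api/v2/monitor/system/session
    # Here we simulate the device's full state with a local dict.
    return {k: device_state[k] for k in fields if k in device_state}


firewall_state = {
    "sessions": [{"src": "10.0.0.5", "dst": "203.0.113.9", "dport": 443}],
    "cpu_load": 0.42,
    "config_hash": "ab12cd34",
}

# The XDR tool asks for whatever it wants -- not just log lines.
print(json.dumps(xdr_pull(firewall_state, ["sessions", "config_hash"])))
```

The point of the sketch: the SIEM's view is limited to what the device chose to log, while the API caller chooses its own questions.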

Rick Howard: In order to understand what I mean by this, it might help to understand that XDR arrived on the scene in 2018 by merging two different security tool sets, logging and antivirus. These were the prequels to XDR, you might say. So let's start with logging. Raffael Marty over at the VentureBeat website says that you can trace the origin of the logging piece all the way back to the original email program, sendmail, on BSD Unix in the 1980s. Eric Allman was building sendmail to be one of the first programs to implement the Simple Mail Transfer Protocol. He needed a way to log what was happening as the various pieces and parts of the sendmail system banged against each other. When he wrote the first syslogd program for BSD Unix to do that, he birthed the first logging system that we all know and use today. For the uninitiated, syslog stands for system logging and the d stands for daemon. In the Unix world, daemons are little standalone programs that start up, do a task, and then disappear again until needed. In this case syslogd receives a log message from a monitored system, like the Fortinet firewall, and stores it somewhere. As an aside, I did a Word Notes podcast on the word daemon back in 2020. For the nerd reference in the show I highlighted one of my favorite sci-fi novels. It's called "Daemon" and was self-published by Daniel Suarez in 2006. Here's Suarez describing the book at a Google talk in 2009.

Daniel Suarez: So for those of you who haven't read it I'll give you the high concept that I gave the Hollywood folks that seemed to work okay. It is the story of a highly successful online game designer who creates a program that monitors the web for the appearance of his own obituary. And when that appears, this program activates and cascades in activating other programs that begin to tear apart the systems supporting the modern world. [ Music ]
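Back to syslogd for a moment. A classic BSD-style syslog message, the kind syslogd has been collecting for decades, is just a priority number, a timestamp, a hostname, a tag, and free text. Here's a minimal sketch of a collector splitting one apart; the sample firewall line is invented, and real collectors handle many more edge cases than this regex does.

```python
# Parse a BSD-style (RFC 3164) syslog line into its fields.
# The priority number encodes facility * 8 + severity.
import re

SAMPLE = "<34>Oct 11 22:14:15 fw01 fortigate: deny tcp 10.0.0.5 -> 203.0.113.9:443"


def parse_syslog(line: str) -> dict:
    """Split priority, timestamp, host, tag, and message from one syslog line."""
    m = re.match(r"<(\d+)>(\w{3} [ \d]\d \d\d:\d\d:\d\d) (\S+) ([^:]+): (.*)", line)
    pri, ts, host, tag, msg = m.groups()
    pri = int(pri)
    return {
        "facility": pri // 8,   # which subsystem sent it
        "severity": pri % 8,    # 0 = emergency ... 7 = debug
        "timestamp": ts,
        "host": host,
        "tag": tag,
        "message": msg,
    }


print(parse_syslog(SAMPLE))
```

Note that the timestamp carries no year and no time zone, which is one reason correlating old-school syslog data across systems is harder than it looks.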

Rick Howard: As the years went by, though, we started collecting logs on everything. The amount of stored data started to become unmanageable. In the late 1990s and early 2000s, SIEM tools emerged to help us corral the volume of messages. Instead of collecting logs separately for each application and trying to manually correlate the information with homemade databases, administrators could dump all the logs to this centralized SIEM system and use some of the vendor-provided functionality to scrub the data. But these SIEM systems were expensive. You had to provide local storage, hard disk space, to accommodate the volume of data. I remember it was a constant struggle to keep ahead of the demand. Every time we added more disk space, we filled it up with data quickly. The vendors of course made their money by selling more disk space, so they were only too accommodating to help us upgrade. But, like I said, upgrades were expensive. Infosec professionals were making trade-off decisions about what not to save to disk or how long we would store things before we would overwrite them. That was counter to what we were trying to do with the logging project in the first place. We wanted to use the logs to trace bad guy activity over time. If your logs only went back three weeks, or if your analysts needed log data on systems you weren't watching, that was a problem. It was also a major task to manage the storage system. Unless you were a Fortune 500 company or your vertical had strict compliance and reporting requirements, most of us couldn't afford to buy and maintain them. That all started to change when Amazon rolled out AWS in 2006. AWS made it possible to store all kinds of data relatively cheaply and they handled all of the administration. Bonus. [ Soundbite of TV show, "Home Improvement" ] There was another big problem, though. All vendors used their own proprietary logging format.
If security professionals tried to correlate their Cisco firewall logs with their Symantec antivirus logs, that represented a ton of low-level grunt work normalizing the data so that the SOC analysts could make sense of it all. By normalizing I mean they had to match the fields of the Cisco firewall data set to the fields of the Symantec antivirus logs. That normalizing task was and is an intermediate step that provides no value. Google's site reliability engineers call that toil. We needed to do normalization to get to the thing that was valuable, but the normalization thing itself wasn't. The vendor community took a swing at addressing that issue back in the mid 2000s. They started working on something called the Common Event Format, CEF. According to Splunk's Stephen Watts, it's a standardized logging format designed to simplify the process of logging security-related events and make it easier to integrate logs from different sources into a single system. Today many vendors use the CEF format, but other competing standards have emerged too, like JSON, JavaScript Object Notation; Windows event logs; the NCSA Common Log Format, or CLF; the Extended Log Format, ELF; the W3C Extended Log File Format; and the Microsoft IIS, Internet Information Services, log format. The logging landscape is still a bit of a Tower of Babel, if you get my drift. [ Soundbite of TV show, "Home Improvement" ] The vendors can't seem to agree on what log files should look like, and so SOC analysts still execute a lot of toil to normalize the data. It's all in one spot and the administrative burden is lower than it was back in the 1990s, but SOC analysts are still sifting through multiple piles of data haystacks looking for needles, and they spend a lot of time making the haystacks look the same. So why is logging a prequel to XDR, you might ask. Well, SOC analysts sifting through reams of machine-generated log files looking for bad guys has been the standard operating procedure since the 2000s.
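To make the normalization toil concrete, here's a toy sketch that maps a CEF-style line and a JSON-style event onto one shared schema. The sample events and the target field names are invented, and real CEF extensions (whose values may contain spaces) need more careful parsing than a plain split.

```python
# Toy normalization: two vendor log formats, one analyst-facing schema.
# Sample data is made up for illustration.
CEF_SAMPLE = "CEF:0|Cisco|ASA|9.1|106023|Deny tcp|5|src=10.0.0.5 dst=203.0.113.9 dpt=443"
JSON_SAMPLE = {
    "product": "Symantec AV",
    "source": {"ip": "10.0.0.5"},
    "destination": {"ip": "203.0.113.9"},
    "event": {"action": "blocked"},
}


def from_cef(line: str) -> dict:
    """Split the pipe-delimited CEF header plus the key=value extension."""
    parts = line.split("|", 7)
    ext = dict(kv.split("=", 1) for kv in parts[7].split())
    return {"vendor": parts[1], "action": parts[5],
            "src_ip": ext.get("src"), "dst_ip": ext.get("dst")}


def from_json(event: dict) -> dict:
    """Map a nested JSON event onto the same flat schema."""
    return {"vendor": event["product"], "action": event["event"]["action"],
            "src_ip": event["source"]["ip"], "dst_ip": event["destination"]["ip"]}


# Once both haystacks look the same, one query can search them together.
normalized = [from_cef(CEF_SAMPLE), from_json(JSON_SAMPLE)]
print([e["src_ip"] for e in normalized])
```

Multiply this little mapping exercise by every vendor format in the shop and you have the toil the SOC lives with.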
When XDR tools hit the market in 2018, they gave the infosec profession a chance to upgrade that process. The other prequel to XDR is the evolution of antivirus software and EDR, endpoint detection and response. In 1987 a German hacker and computer security expert named Bernd Fix wrote software he designed to remove the infamous Vienna virus from his system, thus becoming the first documented author of antivirus software. Soon after, the notorious John McAfee created the first commercial antivirus product, called VirusScan, and the infosec profession gained a must-have tool for the security stack. By the late 1990s, if you had any budget at all, your security stack had a firewall and an intrusion detection system at the network level and at least one antivirus system deployed on every endpoint. When I was working in the Pentagon in the early 2000s, we had two deployed on each endpoint because we didn't trust just one to get the job done. The idea behind antivirus systems was that the vendors would write signatures for known viruses and malware, designed to detect their deployment. Once detected, the engine could remove the malware or render it benign. It was a constant battle to get the latest signatures deployed in a timely manner. But in the late 2000s a new technology emerged that looked at endpoint behavior to detect malicious code. Instead of just using signatures of known malware, the engine watched the entire operating system looking for anomalies. If the endpoint started communicating with servers in Tajikistan when it never had before, that might be an indicator that something was amiss. This model allowed the system to detect previously unknown malicious code, a big benefit over signature-based antivirus. Anton Chuvakin was working for Gartner in 2013 and he gave the new technology its name, endpoint threat detection and response, ETDR. Now we all just call it EDR.
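The behavioral idea is easy to caricature in code: keep a baseline of what an endpoint normally does and flag deviations from it. Real EDR engines model far richer behavior than this, and every host and destination below is made up.

```python
# Toy behavioral detection: flag an outbound connection when the endpoint
# has an established baseline and the destination was never seen in it.
from collections import defaultdict


class BehaviorBaseline:
    def __init__(self):
        # endpoint name -> set of destinations it has historically contacted
        self.seen = defaultdict(set)

    def observe(self, endpoint: str, dest: str) -> None:
        """Record normal activity during the learning period."""
        self.seen[endpoint].add(dest)

    def is_anomalous(self, endpoint: str, dest: str) -> bool:
        """Anomalous = a baseline exists and dest was never in it.
        An endpoint with no baseline yet is not flagged."""
        baseline = self.seen[endpoint]
        return bool(baseline) and dest not in baseline


b = BehaviorBaseline()
for dest in ["mail.example.com", "files.example.com"]:
    b.observe("laptop-42", dest)

print(b.is_anomalous("laptop-42", "update-server.tj"))   # new destination -> True
print(b.is_anomalous("laptop-42", "mail.example.com"))   # in baseline -> False
```

No signature for the malware is needed; the deviation from the endpoint's own history is the signal, which is exactly why this model catches previously unknown code.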
According to CrowdStrike, EDR acts like your old TV's DVR, recording relevant activity to catch incidents that evaded prevention. While EDR was an innovative and disruptive technology, it was limited because it only dealt with the endpoint portion of the adversary attack campaign. It didn't see the entire picture. The Lockheed Martin research team had just published their now famous intrusion kill chain paper in 2010, and the infosec profession was just starting to get its head around the idea that bad guys had to navigate the entire kill chain undetected and unstopped in order to be successful. EDR was just one piece infosec professionals could use on the kill chain. To have control and visibility on the entire kill chain, SOC analysts dumped the alerts from their EDR engines as well as all the other -- and that's our show. Well, part of it. There's actually a whole lot more. And if I do say so myself, it's pretty great. So here's the deal. We need your help so we can keep producing the insights that make you smarter and keep you a step ahead in the rapidly changing world of cybersecurity. If you want the full show, head on over to thecyberwire.com/pro and sign up for an account. That's thecyberwire, all one word, dot com slash pro. For less than a dollar a day you can help us keep the lights and the mics on and the insights flowing. Plus you get a whole bunch of other great stuff like ad-free podcasts, exclusive content, newsletters, and personal level-up resources like practice tests. With N2K Pro you get to help me and our team put food on the table for our families, and you also get to be smarter and more informed than any of your friends. I'd say that's a win-win. So head on over to thecyberwire.com/pro and sign up today for less than a dollar a day. Now if that's more than you can muster, that's totally fine. Shoot an email to pro@n2k.com and we'll figure something out. I'd love to see you on N2K Pro.
Here at N2K we have a wonderful team of talented people doing insanely great things to make me and this show sound good. And I think it's only appropriate you know who they are.

Liz Stokes: I'm Liz Stokes. I'm N2K CyberWire's associate producer.

Tre Hester: I'm Tre Hester, audio editor and sound engineer.

Elliott Peltzman: I'm Elliott Peltzman, executive director of sound and vision.

Jennifer Eiben: I'm Jennifer Eiben, executive producer.

Brandon Karpf: I'm Brandon Karpf, executive editor.

Simone Petrella: I'm Simone Petrella, the president of N2K.

Peter Kilpe: I'm Peter Kilpe, the CEO and publisher at N2K.

Rick Howard: And I'm Rick Howard. Thanks for your support, everybody.

Everybody: And thanks for listening. [ Music ]