CSO Perspectives is a weekly column and podcast where Rick Howard discusses the ideas, strategies and technologies that senior cybersecurity executives wrestle with on a daily basis.
State of security automation.
Rick Howard, N2K CyberWire’s Chief Analyst and Senior Fellow, turns over hosting duties to William MacMillan, the Chief Product Officer at Andesite, to discuss the Cybersecurity First Principle of automation: current state and what happens now with AI as it applies to SOC Operations.
Check out Rick's 3-part election mini-series:
Part 1: How Does Election Propaganda Work? In this episode, Rick Howard, N2K CyberWire’s Chief Analyst and Senior Fellow, discusses personal defensive measures that every citizen can take—regardless of political philosophy—to resist the influence of propaganda. This foundational episode is essential for understanding how to navigate the complex landscape of election messaging.
Part 2: Modern propaganda efforts. In preparation for the US 2024 Presidential Election, Rick Howard, N2K CyberWire’s Chief Analyst and Senior Fellow, discusses recent international propaganda efforts in the form of nation-state interference and influence operations, as well as domestic campaigns designed to split the target country into opposing camps. Guests include Nina Jankowicz, Co-Founder and CEO of The American Sunlight Project, and Scott Small, Director of Cyber Threat Intelligence at Tidal Cyber.
Part 3: Efforts to reduce the impact of propaganda on future elections. Looking past the US 2024 Presidential Election, Rick Howard, N2K CyberWire’s Chief Analyst and Senior Fellow, discusses reducing the impact of propaganda in future elections with Perry Carpenter, Chief Human Risk Management Strategist at KnowBe4 and host of the 8th Layer Insights podcast; Nina Jankowicz, Co-Founder and CEO of The American Sunlight Project; and Scott Small, Director of Cyber Threat Intelligence at Tidal Cyber.
In a 2020 episode of the CSO Perspectives podcast, Rick cites German historian Friedrich Klemm’s research that the idea of operations centers goes back as far as 5,000 BC. Think about that. For as long as humans have needed to coordinate complex actions and make decisions as teams, we've needed operations centers. Give the episode a listen. Rick gives a fascinating tour d’horizon – from NASA's Mission Control managing the Apollo 13 crisis to Intelligence Community ops centers briefing Presidents during the Cold War, operations centers have been crucial to handling our most critical challenges.
By the late 1980s and early '90s, this need for coordinated analysis and action extended into cyberspace. As cybersecurity operations grew more complex, organizations in both the commercial and government spaces started standing up Computer Emergency Response Teams, or CERTs. Fast forward to the early 2000s, and Security Operations Centers – SOCs for short – had become standard practice for enterprises of any size defending against an explosion of cyber threats.
Today's SOCs are as diverse as the organizations they protect. You've got large enterprise SOCs, like those in financial institutions and government agencies, with specialized in-house teams handling everything from threat intelligence to incident response. Then there are the smaller operations – picture a solo analyst or a skeleton crew juggling every aspect of security, often with help from Managed Security Service Providers. In between, you'll find hybrid SOCs that combine in-house talent with outsourced expertise. But whatever the size, and whether their analysts work from the same room or are spread across the globe, all SOCs share the same mission: protecting their organizations from cyber threats that could cause material impact.
As anyone in cybersecurity knows, the stakes facing SOCs today couldn't be higher. We're looking at cyber attacks projected to cause $10.5 trillion in damages by next year. Just look at the UnitedHealth Group ransomware attack earlier this year – the financial impact from that could hit $2.3 billion. The industry's response has generally been to pour money into more sophisticated tools, procure more threat intelligence streams, and build platforms that automate security work.
But here's the irony -- all these well-intentioned solutions have actually made life in the SOC more difficult. Analysts are now drowning in a data-everywhere environment, struggling to interpret and prioritize never-ending indicators as close to the speed of threat as they can manage. Forget finding needles in haystacks. We're asking analysts to find specific needles among stacks of needles, with these stacks spread across countless disconnected data islands. Some organizations now run more than 100 different security tools, forcing analysts to bounce between screens and portals, each with its own query language, trying to piece together a cohesive investigatory narrative.
Meanwhile, SOC leaders face mounting pressure to deliver on metrics like Mean Time to Resolution and to prove the ROI of their growing security budgets. But these metrics often miss the real threat landscape their teams are facing.
So why have these solutions failed to tame the cyber chaos? I believe our industry has a blind spot. We've focused so much on software and hardware that we've forgotten about the "humanware" of security workflows. We've overlooked the front-line analysts, threat hunters, and managers whose judgment and intellectual horsepower are the real engine of modern security operations.
Let me provide an example. It's 5 PM on a Friday (because when else would it happen?), and a threat intelligence report – say, from a government agency like CISA, a vendor, or an internal team – lands in an analyst's inbox. The CISO needs to brief the board on Monday morning. That analyst is now looking at canceled weekend plans and more time away from family. Just another brick in the wall of cyber burnout, the human toll of all the cyber chaos that we've unfortunately internalized over the past 20 years. Even with the shiniest technology and all the data feeds in the world, SOCs are overworked, understaffed, and unable to retain their best and brightest.
As a former Air Force officer, I like to use something called the OODA Loop to describe SOC workflows. It stands for Observe, Orient, Decide, Act – a decision-making model developed by Colonel John Boyd in the 1950s. While I'm by no means the first to apply this model to cybersecurity, I think the OODA Loop is crucial for understanding how we can use technology and automation to help our analysts instead of overwhelming them.
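To make that mapping a little more concrete, here's a minimal sketch – in Python, with entirely hypothetical names and stand-in logic rather than any real SOC tooling – of how the four OODA stages might frame a single pass through a triage workflow:

```python
from dataclasses import dataclass, field

@dataclass
class Alert:
    source: str          # e.g., "edr", "firewall" (illustrative values)
    description: str
    severity: str        # "low" | "medium" | "high"
    context: dict = field(default_factory=dict)

def observe(raw_events: list[dict]) -> list[Alert]:
    """Observe: collect raw telemetry and turn it into candidate alerts."""
    return [Alert(e["source"], e["msg"], e.get("severity", "low")) for e in raw_events]

def orient(alert: Alert, intel: dict) -> Alert:
    """Orient: enrich the alert with whatever context is available."""
    alert.context["known_bad"] = alert.description in intel.get("known_bad_indicators", [])
    return alert

def decide(alert: Alert) -> str:
    """Decide: in a real SOC this is the analyst's call; here it's a stand-in rule."""
    if alert.context.get("known_bad") or alert.severity == "high":
        return "escalate"
    return "monitor"

def act(alert: Alert, decision: str) -> None:
    """Act: execute the decision (open a ticket, contain, or keep watching)."""
    print(f"[{decision}] {alert.source}: {alert.description}")

if __name__ == "__main__":
    intel = {"known_bad_indicators": ["beacon to suspicious domain"]}
    events = [
        {"source": "edr", "msg": "beacon to suspicious domain", "severity": "medium"},
        {"source": "firewall", "msg": "blocked inbound scan", "severity": "low"},
    ]
    for a in observe(events):
        act(a, decide(orient(a, intel)))
```

The interesting part, of course, is that in a real SOC the "decide" step is where human judgment lives – which is exactly where automation has historically struggled.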
What I'm interested in is how the industry has attempted to use automation in the past to improve the efficiency of the SOC OODA Loop. If we understand that history, we can get this right in the future – using emerging technologies like AI not to add more toil to human analysts' days, but to meet them where they are. We can amplify their capabilities and make their jobs not only more effective, but actually enjoyable again.
Phases of SOC automation.
Before we dive deeper into the future, let's take a quick journey through the evolution of security operations automation. It's a story that unfolds in three major chapters over the past two decades – the SIEM, SOAR, and XDR.
Picture the early days of security operations. Analysts were basically digital detectives with barely any tools – just their wits and determination. They'd review system logs by hand, something that sounds almost unthinkable today. When something suspicious popped up, they'd be reactive, conducting painstaking manual investigations to track down and stop intrusions.
Then came the first game-changer in the early 2000s: the Security Information and Event Management system, or SIEM. Think of it as the first real command center for security teams – a central hub that could pull in and aggregate data from all sorts of security tools. Great idea, right? Well, yes and no. As more data poured in, analysts found themselves spending less time investigating threats and more time managing false positives and trying to find the signal in the noise. Not to mention, all that data storage started costing organizations a fortune. If you put 100 CISOs in a room nowadays, you’ll get approximately 100 complaints about the costs associated with their SIEM.
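To see why centralizing logs alone wasn't enough, consider a deliberately simplified, hypothetical SIEM-style correlation rule (the field names and threshold below are mine, not any vendor's):

```python
from collections import Counter

# Hypothetical, simplified SIEM-style rule: flag any user with more than
# N failed logins in a batch of events.
FAILED_LOGIN_THRESHOLD = 3

def failed_login_rule(events: list[dict]) -> list[str]:
    failures = Counter(e["user"] for e in events if e["type"] == "auth_failure")
    return [user for user, count in failures.items() if count >= FAILED_LOGIN_THRESHOLD]

events = [
    {"user": "svc-backup", "type": "auth_failure"},  # a misconfigured service account
    {"user": "svc-backup", "type": "auth_failure"},
    {"user": "svc-backup", "type": "auth_failure"},
    {"user": "alice", "type": "auth_success"},
]

# The rule fires exactly as designed, but the culprit is a noisy service
# account, not an attacker -- the kind of false positive that eats analyst time.
print(failed_login_rule(events))  # ['svc-backup']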
By the mid-2010s, we saw the rise of SOAR – Security Orchestration, Automation, and Response. SOAR tried to capture human-style reasoning in playbooks to automate repetitive tasks. When implemented well, it really did free up analysts for more strategic thinking. But these playbooks turned out to be brittle. Both the enterprise environment and attacker techniques are rapidly moving targets, and playbooks built on simple reasoning rules and “if-this-then-that” automation needed constant attention from highly skilled personnel just to keep up. These limitations were so significant that by 2024, people started talking about the "death of SOAR" and the need for something better.
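Here's a toy illustration of the kind of brittleness I mean – a hypothetical "if-this-then-that" playbook step in Python, with every name invented for illustration rather than taken from any vendor's playbook format:

```python
# A hypothetical "if-this-then-that" SOAR-style playbook step.
def phishing_playbook(alert: dict) -> str:
    # The rule was written when attackers used one known sender domain...
    if alert.get("type") == "phishing" and alert.get("sender_domain") == "badsite.example":
        return "quarantine_email"
    # ...so when the attacker rotates to a new domain, or the mail gateway
    # renames the 'sender_domain' field, the playbook silently does nothing.
    return "no_action"

print(phishing_playbook({"type": "phishing", "sender_domain": "badsite.example"}))     # quarantine_email
print(phishing_playbook({"type": "phishing", "sender_domain": "newbadsite.example"}))  # no_action
```

Multiply that fragility across hundreds of playbooks and a constantly shifting environment, and you can see why the maintenance burden became unsustainable.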
That brings us to XDR – eXtended Detection and Response – which I see as the bridge to our post-SOAR future. Instead of just pooling more data like SIEMs do, XDR connects directly into security tools in real time, looking for threat patterns across endpoints and networks that might be missed when viewing data in isolation. On the surface, this seems like a faster, more sustainable approach that, at the very least, gets CISOs out from under the burden of ever-growing SIEM storage costs.
Along this journey, we also saw the rise of Threat Intelligence Platforms. These platforms promised to make analysts smarter and faster by connecting security tools to contextual data and threat intelligence feeds. The idea was solid – automatically enrich security alerts with additional context to help analysts make better decisions more quickly. But in practice? It often created yet another data deluge. Instead of making analysts' lives easier, many found themselves drowning in a sea of alerts and intelligence feeds, struggling to separate the truly important signals from the noise.
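A minimal sketch of how that enrichment pattern tends to play out in practice – the feed names, fields, and verdicts below are hypothetical:

```python
# Hypothetical sketch of threat-intel enrichment: each feed bolts more
# context onto an alert before it reaches the analyst.
def enrich(alert: dict, feeds: list[dict]) -> dict:
    for feed in feeds:
        match = feed["indicators"].get(alert["indicator"])
        if match:
            alert.setdefault("enrichment", []).append(
                {"feed": feed["name"], "verdict": match["verdict"], "notes": match["notes"]}
            )
    return alert

feeds = [
    {"name": "feed-a", "indicators": {"203.0.113.7": {"verdict": "malicious", "notes": "botnet C2"}}},
    {"name": "feed-b", "indicators": {"203.0.113.7": {"verdict": "suspicious", "notes": "scanning"}}},
    {"name": "feed-c", "indicators": {"203.0.113.7": {"verdict": "benign", "notes": "shared hosting"}}},
]

# Three feeds, three conflicting verdicts: more context, but the analyst
# still has to reconcile it all by hand.
print(enrich({"indicator": "203.0.113.7"}, feeds))
```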
Post-SOAR automation and AI: the current state of SOC automation.
You know what's fascinating about all these previous attempts at SOC automation? They all shared a fundamental flaw – they tried to structure something that's inherently unstructured. SIEMs tried to centralize all your logs but couldn't handle the scale of big data. They eventually split off into data aggregation and case management tools. SOAR platforms attempted to automate analysis and reasoning processes that really needed human judgment. It's like trying to create a rigid playbook for a game where the rules keep changing.
But now we're on the cusp of something truly revolutionary with the introduction of artificial intelligence into the SOC. For the first time in human history, we have systems that can reason over unstructured data and draw semantic meaning without explicit programming. Meaning, these systems can make connections between words and concepts in ways that feel almost human-like. It's a dramatic shift from traditional technology that required everything to be neatly structured and categorized.
Let me paint a picture of what this might look like. Imagine an AI system that notices an emerging threat and proactively suggests updates to your detection rules, even drafting changes and simply asking your analyst to click to approve and deploy. Or picture a junior analyst getting stuck during an investigation, and the AI steps in to suggest strategies that experienced threat hunters typically use in similar situations. Think about a high-priority alert or report coming in, and an AI-powered system investigates it at machine speed, serving up a comprehensive assessment with suggested actions ready for your review. Or, to return to our poor analyst facing down another missed anniversary dinner due to a late-breaking threat report on a Friday, instead of leading to yet another canceled plan, an AI-SOC platform could help analyze that report in seconds or minutes and help the analyst decide if action could wait until Monday.
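None of this requires exotic machinery to describe. Here's a minimal sketch of the "AI drafts, analyst approves" pattern, with the model call stubbed out and every name and field invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class ProposedChange:
    rule_name: str
    diff: str
    rationale: str
    approved: bool = False

def draft_detection_update(threat_summary: str) -> ProposedChange:
    # In a real system this would be generated by a model from the threat report;
    # here it is hard-coded purely to show the shape of the workflow.
    return ProposedChange(
        rule_name="suspicious-outbound-dns",
        diff="+ alert when > 50 unique subdomains queried per host per minute",
        rationale=f"New tunneling technique described in: {threat_summary}",
    )

def review(change: ProposedChange, analyst_decision: str) -> ProposedChange:
    """The human stays in the loop: nothing deploys without explicit approval."""
    change.approved = analyst_decision.lower() == "approve"
    return change

change = draft_detection_update("Friday-evening advisory on DNS tunneling")
change = review(change, analyst_decision="approve")
print("deploying" if change.approved else "holding", change.rule_name)
```

The point isn't the code – it's the division of labor: the machine does the reading, correlating, and drafting at machine speed, and the analyst keeps the final say.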
This isn't science fiction -- the building blocks are already here. Recent advances in generative AI and reinforcement learning have fundamentally changed how humans can interact with data. In the SOC context, we can now wrap technology around our analysts to radically accelerate their decision-making process. Remember that OODA Loop I mentioned earlier – Observe, Orient, Decide, Act? With AI-powered automation, this decision making process is about to get mind-bogglingly fast and chock full of context.
The key difference from previous automation attempts is that with AI, we're not just trying to replace human reasoning – we're amplifying it. Instead of "if-this-then-that" automation, we're moving toward true human-AI collaboration and expert reasoning systems. This is the golden era of SOC automation we've been waiting for, where technology finally helps analysts achieve better outcomes faster, regardless of their experience level.
The potential for security operations is tremendous. But there's also a risk of going too far – of using AI to make decisions on its own or to replace humans outright. The solution is a happy medium – a bionic coexistence that combines the capabilities of humans and technology. The successful marriage between humans and machines involves each side playing to its strengths.
This is where we need to have a thoughtful conversation about automation and AI in security operations. It's not about replacing humans -- it's about empowering them. Think of it like incorporating an autopilot into an aircraft's cockpit. We don't want end-to-end automation that takes the pilot out of the cockpit; in the SOC, we need to build better cockpits that give analysts the controls and context they need to make better decisions.
As we ride this wave of AI innovation, we need to be thoughtful about how we implement it in the diverse SOC environments I discussed earlier. No black box AI making decisions we can't understand. No automated remediation that exceeds an organization's risk tolerance.
These platforms need to adapt to the specific needs and constraints of different industries and organizations. The goal isn't to automate everything – it's to automate the right things in the right way, always keeping the human analyst in the decision-making loop.
Four parameters for AI-powered automation.
Overall, there are four parameters that I think will be required for an AI-powered automation platform to be truly transformative and broadly accepted by security teams.
First, human-AI collaboration must be at the center of the workflow: Any automation or AI we introduce should accelerate the OODA Loop. It should enhance the analyst's decision-making process, not become another burden or take away any human agency in security workflows. The AI should both teach and learn from analysts at all skill levels in a symbiotic relationship.
Second, it needs what I call a "Safe AI" architecture: Security teams must have absolute confidence that sensitive data stays within predefined boundaries and isn't used to train external AI models. Safe AI includes what I’ll call evidentiary AI -- meaning every step the AI takes is auditable and can be compiled into comprehensive reports about investigations and outcomes, to give the CISO peace of mind that due diligence was done. No black boxes allowed. (I'll sketch what such an audit trail might look like right after these four parameters.)
Third, a platform needs to be modular and integrate seamlessly with existing environments: We need to get security teams out of the "rip and replace" cycle that's become all too common in this era of tool sprawl. The solution should optimize existing ecosystems without requiring expensive log aggregation strategies.
Fourth, there should be no requirement to centralize all data in one location: The platform should work with data where it lives, leaving the data islands where they are while extending analytical workflows across all of them – no complex data pipelines required.
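As promised, here is what the evidentiary piece of that "Safe AI" parameter might look like in miniature – a hypothetical sketch in which every step the assistant takes is hashed and chained, so the final report is tamper-evident. The structure and field names are illustrative only, not any product's actual format:

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditTrail:
    """Records every AI step so an investigation can be compiled into a report."""

    def __init__(self) -> None:
        self.entries: list[dict] = []

    def record(self, step: str, detail: dict) -> None:
        prev_hash = self.entries[-1]["hash"] if self.entries else ""
        entry = {
            "time": datetime.now(timezone.utc).isoformat(),
            "step": step,
            "detail": detail,
            "prev": prev_hash,  # chaining makes after-the-fact edits detectable
        }
        entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append(entry)

    def report(self) -> str:
        return json.dumps(self.entries, indent=2)

trail = AuditTrail()
trail.record("retrieve", {"source": "edr", "query": "host=laptop-42 last 24h"})
trail.record("reason", {"summary": "process tree matches known loader behavior"})
trail.record("recommend", {"action": "isolate host", "requires_human_approval": True})
print(trail.report())
```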
What does it look like in practice?
When we incorporate these four parameters into our new AI automation platform, what does it look like in practice?
Let’s return to the cockpit analogy one last time (sorry, I can’t help myself). Just as a pilot wouldn't program their autopilot to "take off, engage in combat, avoid gunfire, and return home with half a tank of gas,” we shouldn't expect SOC automation to run end-to-end without human oversight. Instead, imagine a system where AI augments the analyst's capabilities, providing enhanced situational awareness across multiple "data islands" just as modern avionics give pilots a comprehensive view of their battlespace.
Analysts constantly need more context to make good decisions, but that context is scattered across data islands, buried in PDFs and wikis, and spread throughout the organization. No human can possibly reason over all of that at scale. That's where AI comes in -- to gather and present the context that humans need to make better decisions.
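Here's a minimal, hypothetical sketch of that pattern – fanning one question out to several data islands and assembling the answers for the analyst, without first hauling all the data into one place. The island names and functions are invented for illustration:

```python
import asyncio

# Each "island" exposes its own search; the platform fans the question out
# and assembles the answers. Real connectors would query EDR, cloud logs,
# wikis, and so on in place.
async def search_edr(question: str) -> list[str]:
    return [f"edr: process events matching '{question}'"]

async def search_cloud_logs(question: str) -> list[str]:
    return [f"cloud: sign-in anomalies matching '{question}'"]

async def search_wiki(question: str) -> list[str]:
    return [f"wiki: runbook sections mentioning '{question}'"]

async def gather_context(question: str) -> list[str]:
    results = await asyncio.gather(
        search_edr(question), search_cloud_logs(question), search_wiki(question)
    )
    # Flatten per-island results into one view for the analyst.
    return [item for island in results for item in island]

print(asyncio.run(gather_context("unusual OAuth consent grants")))
```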
Overall, our guiding principle should be simple: Every piece of technology we bring into the SOC must be wrapped around the human analyst, not the other way around. Because when you strip away all the tech and tools, it's still the human analyst who stands between attackers and their targets. The human needs to be in charge, but significantly upgraded. That's the future of automation in the SOC and the promise of AI.
Final thoughts.
Let me close with a personal observation that I hope drives home why this matters so much to me. Back when I was a special operations helicopter pilot, I used to say I would have literally paid to do my job. Imagine being in your twenties, flying fifty feet off the ground with night vision goggles, working alongside elite military teams. I never took much time to think about the risks or sacrifices involved, because I had the best equipment in the world and the unquestioned support of my chain of command. It was pure adrenaline and purpose rolled into one incredible package.
Now, fast forward. A lot. More decades than I want to admit. During my time as a senior leader in government and in the private sector, I had brilliant young security analysts – the same age I was as a pilot – coming into my office. They had what I thought was the coolest job on the planet in the 21st century. In fact, I think of these folks as the pilots of the digital age. And I saw in them the same enthusiasm and sense of mission I had as a pilot. But I also saw burnout – a word I don’t recall ever uttering when I was an Air Force pilot. The analysts in my office were motivated and passionate, yet overworked and overwhelmed – and many were looking for the exit.
It’s my belief that we don't actually have a security talent shortage. What we have is a burnout and retention problem. Analysts are leaving the field too early, before they can become the world-class threat hunters and industry leaders we desperately need. And throwing more shiny tools at the problem won't fix this fundamental issue. We've tried that with SIEMs, with SOAR, and with the failed point-solution, “rip and replace” era in security that I believe is now, finally – thankfully – coming to an end.
When we talk about this new, golden age of AI and automation in the SOC, making the analyst's job not just more effective – but genuinely exciting and fulfilling – should be our north star. This is where AI can truly transform the industry – not by replacing analysts, but by removing the soul-crushing parts of their job and equipping them for success from day one. No more drowning in false positives. No more manual correlation of endless data points. No more missing context that's buried somewhere in a disconnected data island. Instead, imagine analysts focusing on the intellectual challenge of outsmarting adversaries, supported by AI that handles the heavy lifting and amplifies their expertise.
The future of the SOC is about empowering digital pilots with the best technology imaginable, making their jobs as thrilling as flying a special ops mission. Because when we get this right, we won't just solve the talent problem – we'll create a profession that this country’s best and brightest are lining up to join.
We need security analysts to feel that they have the best tools, resources, and support imaginable, and that the industry is building with them top of mind. Security operations teams – no matter where they work or their role in the SOC – keep us safe. We owe them our support, the same way we support our military and first responders. In fact, I think we should all start thanking our cyber defenders for their service. Let’s start by bringing security automation to the next level, taming the cyber chaos, and making security jobs fun and rewarding.
References:
Bob Violino, 2022. 7 top challenges of security tool integration [Analysis]. CSO Online.
Michael Cobb, 2023. The history, evolution and current state of SIEM [Explainer]. TechTarget.
Robert Lemos, 2024. SOAR Is Dead, Long Live SOAR [Analysis]. Dark Reading.
Timbuk 3, 1986. The Future’s So Bright, I Gotta Wear Shades [Song]. Genius.
Timbuk3VEVO, 2009. Timbuk 3 - The Future’s So Bright [Music Video]. YouTube.