Vulnerability Management: An essential tactic for Zero Trust from the Rick-the-Toolman Series.
By Rick Howard
Mar 7, 2022

CSO Perspectives is a weekly column and podcast where Rick Howard discusses the ideas, strategies and technologies that senior cybersecurity executives wrestle with on a daily basis.

I know that the phrase "zero trust" leaves a sour taste in the mouths of most network defenders these days. The main cause, I think, is that the bulk of security vendors have by now incorporated the buzzword into their marketing material.

“The Galactic Zeroium paper clip. It slices. It dices. It collects those naughty runaway zeros scattered across your house into trusted paper piles, and all for the minimum price of just $19.95 in 27 weekly installments.”

If I saw that ad on late night TV, I’d buy that in a heartbeat. I don’t know what it does, but clearly, I need to get all of my zeroes into my paper piles right now.

And that seems to be the way for all potential trends in the tech security space. It starts with somebody having a good idea, like John Kindervag back in 2010 with his original zero trust white paper, “No More Chewy Centers: Introducing The Zero Trust Model Of Information Security.” Slowly but surely, everybody gets excited about how it will solve all of the world’s problems, up to and including world peace. At some point, though, the crowd turns on the idea because it dawns on them that deploying the great idea in practice is really hard, and the commercial offerings that claim to have solved it fall a bit short.

These shifts in expectations are captured beautifully by the famous Gartner Hype Cycle. According to the Challenging Coder website, Gartner’s Jackie Fenn created the concept in 1995. She noticed a repeated pattern in consumer expectations as new and innovative tech and security products emerged in the marketplace. Expectations start with a product announcement and rise through the “peak of inflated expectations” as consumers realize the potential of the new idea. From there, expectations diminish through the “trough of disillusionment” as these same people begin to realize that the new tech is not quite ready for prime time. Then expectations rise again through a much gentler “slope of enlightenment” and finally, once the product has matured, reach the “plateau of productivity.” Fenn published a book on the concept in 2008.

Even though network defenders have put the idea of zero trust squarely in the “trough of disillusionment” today, Gartner analysts see a change. According to the September 2021 Hype Cycle, products that promise zero trust features have just moved out of the trough and have begun their slow climb up the “slope of enlightenment.” 

But I would say that the capabilities these zero trust products offer generally land in the bucket of restricting access to resources based on need to know. And that’s a good thing. But I want to elevate this conversation a bit and consider zero trust as a first principle strategy, not as a feature of a security product.

As a zero trust strategy, we want to reduce the attack surface of everything in our digital space by limiting access to workloads (services and data) for people, devices, and applications on all of our data islands (mobile devices, data centers, SaaS applications, and cloud deployments). This is 180 degrees opposite of what we did from the 1990s up until just recently. Back then, we created an electronic perimeter around all of our digital assets. Once you got inside the perimeter, though, you had access to everything. Today, the perimeter has disappeared in the traditional sense, and zero trust solutions give us the ability to reduce access permissions wherever the data or service may reside.

Tactically, we directly pursue this zero trust strategy with robust identity and authorization tools, maybe coupled with an easy-to-use software-defined perimeter product. These are the things Gartner is talking about on its hype chart. But indirectly, one of the ways we reduce the attack surface and limit access to workloads is by closing the doors and windows to our digital house that we have inadvertently left open. These metaphorical portals manifest in the real world as software vulnerabilities and misconfigurations in commercial and open source software, as well as in the code we develop in-house. The indirect tactic we use to pursue this strategy is a collection of people, process, and technology lumped under the banner of vulnerability management.

History of vulnerability management (three phases: confusion, easy and hard).

In the early days, in the 1990s, vulnerability management was mostly about understanding the bugs and exploits that were discovered and eventually getting around to patching the issue. Exploits didn’t happen that often, and we didn’t have armies of nation-states, criminals, and hacktivists attacking us around the clock like we do today. We patched the issue when it was convenient.

Back then, most of us were running some version of Windows on the desktop and some flavor of Unix on the servers. When issues popped up, we were more concerned with how to prioritize them against everything else on the to-do list. Do I install the new printer in the lab today, or do I roll out the newest version of the vi text editor to fix that new vulnerability? We didn’t even have a common language around vulnerabilities and exploits to compare notes with peers and pundits. According to Tripwire, every software vendor back then had its own proprietary method of tracking vulnerabilities in its products. Security professionals had no way to know if vendor A’s vulnerability was the same as vendor B’s or if they were two separate issues. We were kind of on our own. That was phase one of vulnerability management: confusion.

That started to change in 1999 when MITRE’s David Mann and Steven Christey wrote the white paper “Towards a Common Enumeration of Vulnerabilities.” But hold onto your butts, there are more acronyms involved in this story than you can shake a stick at: NIST, CVE, CISA, NVD, CVSS, SCAP, and E-I-E-I-O (I made that last one up, but after reading that list of acronyms it feels like you should sing E-I-E-I-O to the tune of that classic “Old MacDonald Had a Farm” song).

Mann and Christey proposed creating a Common Vulnerabilities and Exposures (CVE) list that the entire community could use, and the idea quickly gained traction. The very first CVE list they published contained 321 vulnerabilities, chosen after careful deliberation and weeding out of duplicates. By 2002, the CVE list contained over 2,000 software vulnerabilities, and the National Institute of Standards and Technology (NIST) recommended that the U.S. government only use software that used CVE identifiers.

By 2005, the National Vulnerability Database (NVD) was up and running, designed to enrich the CVE list with risk and impact scoring using the Common Vulnerability Scoring System (CVSS) and to provide other references like patch information, affected products, and Security Content Automation Protocol (SCAP) mappings. A SCAP scanner compares a target computer or application's configuration and/or patch level against the SCAP content baseline. NIST maintains the NVD, and the Cybersecurity and Infrastructure Security Agency (CISA) sponsors it.
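To make that acronym soup concrete, here is a minimal sketch (assuming Python 3.10+ with the third-party requests library installed) that pulls a single CVE record from NIST's public NVD REST API and reads its CVSS base score. The JSON field names are assumptions based on the NVD 2.0 API format and are worth verifying against the current documentation.

```python
# Minimal sketch: look up one CVE in NIST's NVD and print its CVSS v3.1 base score.
# Assumes Python 3.10+ and the third-party "requests" library; field names follow the
# NVD REST API 2.0 JSON format and should be checked against the current docs.
import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def cvss_base_score(cve_id: str) -> float | None:
    """Return the CVSS v3.1 base score for a CVE, or None if it isn't found."""
    response = requests.get(NVD_URL, params={"cveId": cve_id}, timeout=30)
    response.raise_for_status()
    vulnerabilities = response.json().get("vulnerabilities", [])
    if not vulnerabilities:
        return None
    metrics = vulnerabilities[0]["cve"].get("metrics", {})
    v31 = metrics.get("cvssMetricV31", [])
    return v31[0]["cvssData"]["baseScore"] if v31 else None

if __name__ == "__main__":
    # Example lookup: the log4j vulnerability discussed later in this essay.
    print(cvss_base_score("CVE-2021-44228"))
```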

I know this sounds complicated, but this was just phase two, the relatively easy phase compared to phase three. It was easier because, for the most part, not many people were using their personal laptops and mobile devices for official work, and cloud deployments hadn’t transformed the industry yet. Vulnerability management was still relatively contained to devices residing behind the perimeter. That started to change sometime around 2014. That's not a precise date. Some organizations made the shift sooner and others later. Governments did it much later, and some are still not there yet. But the complexity of vulnerability management in phase three is exponential compared to phase two. For example, NIST tabulated 18,378 newly discovered vulnerabilities in 2021, a record for the fifth straight year. And when you consider that these vulnerabilities are scattered across multiple data islands, it’s no wonder that a young CISO looks like he’s 107 years old. It ages you.

As the complexity skyrocketed, as with most things in the security space, network defenders reached a point where they couldn’t manage organizational software vulnerabilities with a spreadsheet anymore.

Vulnerability Management is an intelligence task.

According to the Cybersecurity Canon Hall of Fame candidate book “Practical Vulnerability Management,” by Andrew Magnusson, vulnerability management is not simply patch management. Managing patches within an organization is a subset: a key and essential piece, but not the whole thing. There is another set of activities that must happen before we can even think about applying patches:

  • Continuously monitor all of the software assets running on the network in terms of version control, nested libraries for open source packages, current configuration (who and what can access the asset, and what the asset can access itself), the history of who and what have accessed the asset in the past, and exposure to newly discovered vulnerabilities and exploits. 
  • Using the zero trust strategy as a guide, regularly check and recheck that all of the software assets only have access to what they absolutely need to get the job done.
  • Prioritize the most material software assets (the software that would cripple the business if it stopped functioning for even a second or if customer data is exposed because of it).

When new vulnerabilities and exploits pop up: 

  • Determine if the organization is exposed.
  • Forecast the probability that some bad guy will leverage it.
  • Forecast the probability that, if it is leveraged, the impact will be material.
  • Determine if there is a reliable patch or other workaround that will mitigate it.  
  • Decide which actions to take to mitigate the risk (This could be many things or nothing depending on the risk forecast).

Once all of that is done, then you have to implement whatever you decided to do (the risk mitigation plan). Every single bullet listed above is a critical information requirement to aid in that decision for a newly discovered vulnerability or exploit. Over time, that collection of intelligence will enable you to achieve some success in a vulnerability management program. 
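To make that decision process concrete, here is a minimal triage sketch in Python that turns answers to the critical information requirements above into a rough priority. The thresholds and probability inputs are illustrative assumptions for the example, not a recommendation.

```python
# Minimal triage sketch: map answers to the critical information requirements above
# into a rough mitigation priority. The probabilities and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class VulnAssessment:
    exposed: bool          # is the organization running the affected software?
    p_exploited: float     # forecast probability some bad guy leverages it against us
    p_material: float      # forecast probability that, if leveraged, the impact is material
    patch_available: bool  # is there a reliable patch or workaround?

def triage(v: VulnAssessment) -> str:
    """Return a rough mitigation priority based on the CIR answers."""
    if not v.exposed:
        return "ignore: not exposed"
    risk = v.p_exploited * v.p_material
    if risk >= 0.5:
        return "act now: patch" if v.patch_available else "act now: compensating controls"
    if risk >= 0.1:
        return "schedule mitigation"
    return "push down the priority queue"

# Example: a ubiquitous, trivially exploitable bug that touches material systems.
print(triage(VulnAssessment(exposed=True, p_exploited=0.9, p_material=0.7, patch_available=True)))
```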

And I referred to intelligence collection on purpose. This is a perfect task for your intelligence team to own, regardless of whether you have a robust team, just two guys and a dog in the broom closet who do this part time, or everything in between. The task lends itself to the intelligence life cycle:

  • Create Critical Intelligence Requirements (CIRs - they are called Commander’s Intelligence Requirements in the military; these are all the bullets above).
  • Data Collection (Decide if the data you have on hand will answer all the CIRs. If not, go get the data you need).
  • Intelligence Production: Convert the collected raw data into useful information.
  • Build intelligence products designed to convey the essence of the transformed intelligence complete with recommended courses of action.
  • Disseminate the intelligence products to the decision maker and all interested parties. Get their feedback.
  • Go back to the top and do it all again (Or as I like to call it, the intelligence do-loop).

Vulnerability management is a DevSecOps task.

A short while ago, I got a question from one of our CSO Perspectives listeners, Raffi Jamgotchian. He was wondering about SMBs trying to follow the first principle strategy of DevSecOps when they barely have the resources to keep the printers working and the coffee brewing. Raffi asked this question (my paraphrase): “If there is no dev to speak of, how do you do DevSecOps?” It’s a great question. I hate to admit this, but for startups and very small businesses, this might be a bridge too far. But as you grow and start to edge toward a medium-sized business with a few more resources, automating big chunks of the vulnerability management program will save you in the long run. 

The automation requirements are, for the most part, stated in the CIRs above. Keep in mind, though, that DevSecOps is infrastructure as code, not utility scripts written by our brand new tier 1 SOC analyst, Kevin. I'm not saying Kevin can’t write the code. I’m just saying that whatever Kevin writes needs to be treated as part of the security infrastructure and not run as a cron job on an experimental Raspberry Pi sitting under Kevin’s desk in the SOC. The code needs to be as stable as the power coming into the data center, flexible enough to accommodate change and improvement, and resilient enough to survive when Kevin gets promoted to that cushy management job over at headquarters and abandons us in the SOC. Thanks, Kevin.
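What does that look like in practice? Here is a minimal sketch: the same check Kevin might have written as a one-off script instead lives in version control as a plain, tested function that CI can exercise on every change. The function name, the inventory shape, and the test are hypothetical illustrations, not a prescribed framework.

```python
# vuln_feed_check.py - a sketch of "infrastructure as code" thinking: the check is a
# plain, testable function kept in version control and run by CI, not an ad-hoc script
# on a box under someone's desk. All names and data shapes here are illustrative.

def assets_exposed(affected_products: set[str], asset_inventory: dict[str, str]) -> list[str]:
    """Return the names of inventoried assets that run a product named in a new advisory."""
    return [name for name, product in asset_inventory.items() if product in affected_products]

# A unit test lives next to the code so CI can prove the logic still works after every change,
# including the change where Kevin leaves and someone else takes over.
def test_assets_exposed():
    inventory = {"billing-api": "log4j", "hr-portal": "struts"}
    assert assets_exposed({"log4j"}, inventory) == ["billing-api"]
```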

This is a commitment to the DevSecOps philosophy. And to answer Raffi’s question, this is a DevSecOps project that the security team can own and manage.

Vulnerability management will add key evidence to your risk forecasting model. 

I often talk about risk forecasting in these essays. I’ve said over and over again that calculating probabilities is much bigger than many security professionals think it is. In the mandatory stats 101 course most of us took in college, we learned to count colored marbles falling out of urns and used crazy hard math to predict what the next color would be with some accuracy (well, the math was crazy hard for me). Calculating these kinds of problems with known quantities (the number of colored marbles hiding in urns) is definitely part of our probability understanding, but it’s just a small part. And network defenders have been befuddled trying to forecast cyber events with this small-subset view because there are way too many variables to contend with inside the cybersecurity space, too many different colored marbles falling out of our cybersecurity urn. And we’re not even sure how many marbles are in the urn. But there is a bigger and more useful way to think about this.

According to Dr. Ron Howard (no relation to me by the way and not related to the actor/director), who created decision analysis theory back in the 1960s, “A probability reflects a person’s knowledge (or equivalently ignorance) about some uncertain distinction … therefore, probability is nothing more than our degree of belief that a certain event or statement is true … Probability, however, does not come from data. It represents a person’s state of information about an uncertainty.” Framing probability that way expands our view of how to think about forecasting risk in our own cybersecurity problem domain and specifically how to apply new evidence to the forecast in terms of vulnerability management.

The CIRs mentioned above specifically ask questions about what we know and what we don’t know about the impact of a newly discovered vulnerability or exploit. From an infosec program viewpoint, the atomic first principle question we are all trying to answer is this: what is the probability of material impact to our organization due to a cyber event in the next three years? We do that by asking questions about our uncertainties regarding our environment and collecting new evidence as it manifests. 

If, in November 2021, we forecast a 20% chance that the organization will be materially impacted in the next three years due to a cyber event, and then in December, the log4j vulnerability popped up, how does that change our forecast? What did we know and what didn't we know at that moment? The CIRs above were the right questions to ask. 

Were we exposed? With the ubiquity of that software module, chances are that we were. 

What was the probability that some bad guy would leverage it on our systems? Since exploiting the code was and remains almost trivial, the chances were high that some bad guy would at least try.

By leveraging the log4j vulnerability, could the bad guy get access to, or cause damage to, anything that’s material to the business? We didn’t know for sure. It would take us a few days to determine that. There’s that uncertainty again. But, let’s assume that at least some material data was exposed. In that case, how does that affect our risk forecast?

In the early days of the log4j crisis, we hadn’t yet developed any mitigation strategies other than finding all running instances of log4j and patching them. And most of us didn’t know where it was all running. Just with a back-of-the-envelope calculation, all of that evidence had to raise our risk forecast to above 90% that we would get hit by a material event, not in three years but in the next 90 days. The one thing we had going for us was that all of our peers were in the same boat. Unless the bad guys were specifically targeting us, it might be a while before they found us. Then again, we might be first in the barrel. Damn! I hate uncertainty. But, as Dr. Howard might say, that is the way of the world in most organizations and specifically in the cybersecurity field. If this gives you the heebie-jeebies, maybe cybersecurity is not the field to go into. You might be better suited to sorting apples coming off an assembly line.
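Here is one way to sanity-check that back-of-the-envelope number. The three probability inputs are illustrative guesses about a typical environment during that first week, not measurements; the point is the shape of the arithmetic, not the exact values.

```python
# Back-of-the-envelope sketch of the log4j update. The inputs are illustrative guesses
# about our own environment, not measurements; the point is the shape of the math.
p_exposed = 0.95           # log4j was nearly everywhere, so assume we ran it somewhere
p_exploit_attempt = 0.98   # exploitation was trivial and mass scanning was constant
p_material = 0.97          # assume at least some material data or service sat behind it

p_material_event_90_days = p_exposed * p_exploit_attempt * p_material
print(f"{p_material_event_90_days:.0%}")  # roughly 90%, up from the 20% three-year baseline
```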

Contrast log4j with CVE-2021-46608, which NIST published in February 2022. The CVSS score for the Bentley MicroStation software categorizes the vulnerability as low. Even if we had the Bentley software in our environment, which most of us don’t, how would that change the risk forecast we made in November of last year? Going through the same CIR logic that we did for log4j: we are most likely not exposed; even if we were, the odds that some bad guy would leverage it are low; and if we were running it, the chances that the Bentley software is connected to something material in our business are extremely low. This vulnerability doesn’t change our risk forecast in the least. We should shove the mitigation plan for this vulnerability so far down the priority queue that we will likely never get around to fixing it, because we don't have to.

Zero trust (strategy) - vulnerability management (tactic).

In order to reduce the probability of material impact, zero trust is a key strategy to pursue (along with intrusion kill chain prevention, resilience, and risk forecasting). Tactically, there are many direct ways to improve the zero trust posture, and they mostly deal with identity and authorization. Indirectly, the tactic that most have not associated with zero trust is vulnerability management. But that’s where it sits in my mind. Vulnerability management is not some independent set of activities that exists by itself and that all network defenders need to do. I don’t consider it a first principle strategy. It’s not atomic enough. That said, it’s an important first principle tactic that supports zero trust.

Reading list.

11 MAY 2020:

CSOP S1E6: Cybersecurity First Principles

18 MAY 2020:

CSOP S1E7: Cybersecurity first principles: zero trust

08 JUN 2020:

CSOP S1E10: Cybersecurity first principles - DevSecOps

15 JUN 2020:

CSOP S1E11: Cybersecurity first principles - risk

22 JUN 2020:

CSOP S1E12: Cybersecurity first principles - intelligence operations

31 AUG 2020:

CSOP S2E7: Identity Management: a first principle idea.

07 SEP 2020:

CSOP S2E8: Identity Management: around the Hash Table.

  • Hash Table Guests:
  • Helen Patton - CISO - Ohio State University (2)
  • Suzie Smibert - CISO - Finning
  • Rick Doten - CISO - Carolina Complete Health (2)
  • Link: Podcast
  • Link: Transcript
  • No Essay

16 MAY 2021:

CWX: Zeroing in on zero trust.

  • Guests:
  • John Kindervag, Cybersecurity Strategy Group Fellow at ON2IT 
  • Tom Clavel, Global marketing director at ExtraHop (sponsor)
  • Link: Podcast
  • Link: Transcript
  • No Essay

17 MAY 2021:

CSOP S5E5: New CISO Responsibilities: Identity

  • Hash Table Guests:
  • Jerry Archer, Sallie Mae's CSO (4)
  • Greg Notch, the National Hockey League's CISO (2)
  • Link: Podcast
  • Link: Transcript
  • No Essay

References

"2018 - Vulnerability Management: You’re Doing It Wrong," by LASCON, YouTube, 21 January 2019.

"5 Phases of the Threat Intelligence Lifecycle," by Recorded Future, 3 January 2018.

"7 Ways AI Can Automate and Improve Vulnerability Management Operations," by SecureWorks, 2020.

"A History of the Vulnerability Management Lifecycle," by Rhett Glauser, Vulcan.io, 2019.

"Common Vulnerabilities and Exposures (CVE) (Noun)," Host: Rick Howard, Word Notes Podcast, The CyberWire, 31 August 2021.

"CVE - Towards a Common Enumeration of Vulnerabilities," Mitre.org, 2017.

"Exploring the Origins and Evolution of Vulnerability Management," by Brian Drake, Igicybersecurity.com, 2020.

"Foundations of Decision Analysis," by Ronald Howard and Ali Abbas, Published by Pearson Education, 2013.

"Framework for Improving Critical Infrastructure Cybersecurity (Version 1.1)," National Institute of Standards and Technology, 16 April 2018.

"Gartner Hype Cycle: Everything You Need to Know," by Vaibhav Pal, Challenging Coder, 26 August 2020.

"How to Measure Anything in Cybersecurity Risk," by Douglas W. Hubbard and Richard Seiersen, Published by Wiley, 25 July 2016.

"Information Security: Threat and Vulnerability Management Standard," UW Policies, 5 January 2021.

"Mastering the Hype Cycle: How to Choose the Right Innovation at the Right Time," by Jackie Fenn and Mark Raskino, Published by Harvard Business Review Press, 16 September 2008.

"Measuring and Managing Information Risk: A FAIR Approach," by Jack Freund and Jack Jones, Published by Butterworth-Heinemann, January 2014.

"Metrics and risk: All models are wrong, some are useful," by Rick Howard, CSO Perspectives, The CyberWire, 30 March 2020.

"NIST Special Publication 800-53 (Revision 5): Security and Privacy Controls for Information Systems and Organizations," NIST, September 2020.

"No More Chewy Centers: Introducing The Zero Trust Model Of Information Security," by John Kindervag, Forrester, 2010.

"Old MacDonald Had a Farm," Bounce Patrol - Kids Songs, YouTube, 21 August 2021.

"OpenVAS - Open Vulnerability Assessment Scanner," Openvas.org, 2022.

"Practical Vulnerability Management," by Andrew Magnusson, Published by No Starch Press, 2020.

"Site Reliability Engineering: How Google Runs Production Systems," by Betsy Beyer, Chris Jones, Jennifer Petoff, and Niall Richard Murphy, Published by O'Reilly Media, April 2016.

"Number of vulnerabilities reported in 2021 hits record high," by Duncan Riley, SiliconANGLE, 9 December 2021.

"Statistics 101 - Probability," by Murtaza Haider, Assistant Professor at Ryerson University, Cognitive Class, YouTube, 7 July 2017.

"Superforecasting: Even You Can Perform High-Precision Risk Assessments," by Rick Howard, David Caswell, and Richard Seiersen, Edited by Deirdre Beard and Benjamin Collar.

"Superforecasting: The Art and Science of Prediction," by Philip E. Tetlock and Dan Gardner, Published by Crown, 29 September 2015.

"Super Prognostication II: Risk Assessment Prognostication in the 21st Century," by Rick Howard and Dave Caswell, 2019 RSA Conference, 6 March 2019.

"The Evolution of Vulnerability Management," by Jack Daniel, Security BSides, 2017.

"The Foundations of Decision Analysis Revisited," by Ronald Howard, Chapter 3, 060520 V10.

"The History of Common Vulnerabilities and Exposures (CVE)," The State of Security, 17 September 2020.

"The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win," by Gene Kim, Kevin Behr, and George Spafford, Published by IT Revolution Press, January 2013.

"Top 10 Vulnerability Management Tools for 2021," by Toolbox, 9 October 2020.

"Vulnerability Management Explained," by Nick Cavalancia, Att.com, 2 July 2020.

"What Is Vulnerability Management?" by Cezarina Dinu, Heimdal Security Blog, 14 January 2022.

"What Is Vulnerability Management?" by Toolbox, 19 April 2021.

"Zero Trust and UES Lead Gartner’s 2021 Hype Cycle for Endpoint Security," by Louis Columbus, VentureBeat, 6 October 2021.