OpenSSL issues patch for critical flaw.
OpenSSL patched today.
Today, November 1st, OpenSSL is releasing a patch for a critical vulnerability in OpenSSL versions 3.0.0 and above. While the OpenSSL Project hasn’t released details about the flaw, Akamai notes that observers are taking it very seriously due to the rarity of a critical flaw in OpenSSL:
“This vulnerability has caused concern in the security community because it is unusual for the OpenSSL team to rate a vulnerability as critical. There has only been one in the past, in 2014 – Heartbleed. When exploited, Heartbleed led to a memory leak from the server to the client or the other way around.
“According to the OpenSSL team requirements, in order to be rated as critical, a vulnerability has to affect common configurations and is likely to be exploitable. An example for such vulnerabilities might include ‘significant disclosure of the contents of server memory (potentially revealing user details), vulnerabilities which can be easily exploited remotely to compromise server private keys or where remote code execution is considered likely in common situations.’”
Researchers at Nucleus point out that while the vulnerability may be severe, the threat may not be as widespread as some headlines suggest, since most organizations are still running OpenSSL 1.x.
Nucleus states, “According to many prominent voices in the space, not a lot of organizations are going to find themselves in OpenSSL 3.x+ (the versions of OpenSSL affected by this vulnerability), unless they have machines spun up with newer technologies, such as RHEL 9 and Ubuntu 22.04, which already have OpenSSL 3.0 bolted on. If that’s the case and you’re currently running OpenSSL 3.x in production, the critical rating of severity determined by the OpenSSL team strongly indicates the possibility that this could be a remote-enabled exploit of the OpenSSL software.”
Industry comments on the OpenSSL patch.
Alex Spivakovsky, VP of Research, Pentera, provided the following comments:
"The fact that OpenSSL is self-labeling the vulnerability as a ‘critical flaw’ means that companies would be wise to pay attention. With OpenSSL taking care of the patch, the most important thing security teams can do at this point is try to inventory their instances of OpenSSL and prioritize future remediations based on organizational impact. This will ensure that once the patch is issued they can systematically remediate their most critical instances.
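The inventory step Spivakovsky describes can be started with a simple script. The sketch below is an illustrative approach (not part of any quoted guidance): it asks the local `openssl` binary for its version and flags anything in the 3.0.0 – 3.0.6 range, which is the range the patch addresses.

```python
import re
import subprocess

def local_openssl_version() -> str:
    """Return the version string reported by the local openssl binary."""
    out = subprocess.run(["openssl", "version"],
                         capture_output=True, text=True, check=True).stdout
    # Typical output: "OpenSSL 3.0.2 15 Mar 2022"
    match = re.search(r"OpenSSL\s+(\d+\.\d+\.\d+)", out)
    return match.group(1) if match else "unknown"

def is_affected(version: str) -> bool:
    """True for OpenSSL 3.0.0 through 3.0.6, the range patched in 3.0.7."""
    try:
        parts = tuple(int(p) for p in version.split("."))
    except ValueError:
        return False  # unparseable version strings are not flagged
    return (3, 0, 0) <= parts <= (3, 0, 6)

print(is_affected("3.0.2"))  # True: in the affected range
print(is_affected("1.1.1"))  # False: the 1.x series is not affected
```

Note that a single host can carry several OpenSSL copies (statically linked, bundled in containers, vendored into applications), so checking the system binary is only a first pass.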
"I’m really impressed with OpenSSL’s handling of the process and not shying away from admitting to a flaw on this level. Software bugs and vulnerabilities happen, and it’s a natural byproduct of the software development process. OpenSSL’s proper handling of this disclosure will likely help many companies mitigate the potential impact of the flaw."
Tim Mackey, Principal Security Strategist, Synopsys Cybersecurity Research Center, stated:
“The critical flaw identified in OpenSSL is an example of why software composition analysis (SCA) is so fundamentally important for any AppSec and InfoSec team. The results of SCA will show exactly where a component is used and which version is in use. This allows response teams to quickly respond to any new vulnerability independent of its severity, version, and where the patch needs to come from.
“Specifically, in the case of OpenSSL, there are many thousands of forks (also known as branches) of the OpenSSL code, each of which may have its own set of compilation requirements.
“In preparation for the release of the patch on November 1st, response teams should perform an SCA analysis for all software they create, acquire or consume, independent of origin or function (this includes commercial and open source software). The origin points for that software are where those patches need to come from. It's important to note that not all origin points will respond as quickly as others. This is an example of why embargoed vulnerabilities are embargoed."
How to interpret OpenSSL's downgrading of the vulnerability from "Critical" to "High Severity."
Added, 1:45 PM, 11.1.22.
Victor Wieczorek, VP of App Sec, Threat & Attack Simulation at GuidePoint Security, notes that OpenSSL has changed its rating of the vulnerability from "critical" to "high severity," and thinks this gets it right, given the challenges of exploitation:
"The move from 'critical' vulnerability to 'high severity' is appropriate, given the analysis that the OpenSSL Project provided. Exploiting this vulnerability requires quite a bit of set up and a number of factors to fall into place before it could be leveraged. Organizations should perform analysis to see if they are impacted, although there are relatively limited affected systems, as the attack primarily impacts the client-side, not the server. Technologies like SCA (software composition analysis) tools can help organizations identify where these components are so they can create an inventory and then a plan for remediation based on risk."
Added, 8:45 PM, 11.1.22.
Yotam Perkal, Director of Vulnerability Research at Rezilion, wrote this morning to add some further perspective on the seriousness of the problem the OpenSSL issues present:
"Some proportion. However critical the new OpenSSL vulnerability will be, consider its scope. Currently, under 16,000 publicly accessible servers worldwide are running potentially vulnerable versions of OpenSSL (3.X), while ~238,000 servers are STILL vulnerable to Heartbleed, more than 8 years after it was first published! A breakdown of publicly accessible servers running OpenSSL 3.X by version (according to Shodan):
- "OpenSSL/3.0.0 - 215
- "OpenSSL/3.0.1 - 7,315
- "OpenSSL/3.0.2 - 1,512
- "OpenSSL/3.0.3 - 413
- "OpenSSL/3.0.4 - 66
- "OpenSSL/3.0.5 - 5,638
- "OpenSSL/3.0.6 - 39
"Needless to say, this doesn’t represent the full potential attack surface, yet it does make the case that the scope for the new OpenSSL vulnerability (to be published in a few hours) will be relatively small compared to Heartbleed, mainly due to the fact that OpenSSL 3.x isn’t yet very common in production environments."
Added, 9:15 PM, 11.1.22.
Brian Fox, Co-founder and CTO at Sonatype, sent over his key takeaways from the OpenSSL patch: "While memory overflow bugs can lead to worst case scenarios, the details of this particular vulnerability seem to indicate that the level of difficulty for an exploit is very high. The vulnerability requires a malformed certificate that is trusted or signed by a naming authority. That means that authorities should be able to quickly prevent certificates designed to target this vulnerability from being created, further limiting the scope."
He thinks it's difficult to assess the severity of the vulnerability the patch addresses, but that the uncertainty shouldn't be taken as grounds to delay, still less ignore, patching. "This is a potential memory corruption issue and these types of things can be very hard to predict when skilled attackers have infinite time to explore the side effects. Therefore I think it is appropriately scored to drive attention to patch."
The early announcement of the patch was, he thinks, a good thing. "Given how broad the conversation has become, I think it can only have helped to drive awareness which should lead to faster patching. In fact, in our recent State of the Software Supply Chain report, we have data that clearly indicates the more widely publicized a vulnerability is, the faster people respond." And rating it, initially, "critical" and announcing it early didn't, in his opinion, amount to over-hyping. "I think some people might be let down that it’s not as bad as they hoped. I think that is evidence of success, because the opposite is that everyone is scrambling and woefully unprepared. Recent studies have found that there are still several hundreds of thousands of servers still running versions of OpenSSL susceptible to Heartbleed which was disclosed 8 years ago."
And, finally, if you find that you were affected, once you've patched, get ready for the next vulnerability. "After you patch, prepare for the next vulnerability. Start building your organizational bill of materials for all your applications, both internally developed and those that you acquire and run, so that next time, when you don’t have a week to prepare, you can respond immediately."
Added, 9:30 PM, 11.1.22.
Neal Humphrey, AVP of Security Strategy at Deepwatch, was also gratified to see the vulnerability downgraded from Critical: “The news is out on the OpenSSL front, and thankfully things have been downgraded from Critical to High. While there is a remote code execution (RCE) aspect to the exploit, it is not at the level of the Log4j issues from last year. Log4j was an issue due to its spread and the access that it provided. The OpenSSL issues may be as widespread as Log4j, but they just aren’t as dangerous. That being said, users should still look to upgrade due to the distributed nature of OpenSSL and its ability to be modified, unlike Log4j.”
Jerry Caponera, General Manager of Risk Quantification for ThreatConnect, agrees that the prudent course of action is to address the vulnerability now--the potential consequences remain very significant:
"Despite the downgrade of the OpenSSL vulnerabilities from 'critical' to 'high severity,' the fact of the matter is that these points of exposure have in the past cost enterprises billions of dollars in data breaches. Given we don't yet know the ultimate impact, organizations need to evaluate their potential exposure to these vulnerabilities and infuse that into their overall cyber risk. With boards now calling on CISOs to translate cyber risk into dollars, many cybersecurity leads will be determining what OpenSSL instances they house and whether these are in need of patching. This incident is another reminder of why a software bill of materials (SBOM) and a mature cyber risk quantification strategy need to be in place."
Added, 10:00 PM, 11.1.22.
David Klein, Director and Cyber Evangelist at Cymulate, also sees the downgrade in severity as an encouraging sign:
“Looking at OpenSSL’s announcement today regarding the vulnerability CVE-2022-3602 found in OpenSSL versions 3.0.0 – 3.0.6, we can breathe a sigh of relief. The issue was initially rated CRITICAL before release. Considering OpenSSL’s severity categories, this would have meant the problem would have affected standard configurations and be widely exploitable. Fortunately, upon release, OpenSSL downgraded the severity to HIGH, which means it affects fewer, less common configurations and is less likely to be exploitable.
"Details give reasons for the downgrade. Tracked as two bugs, CVE-2022-3602 and CVE-2022-3786, both patched in OpenSSL version 3.0.7, details of the vulnerabilities point to buffer overruns that can be triggered by email servers running X.509 certificate verification. This buffer overflow could cause a crash, resulting in a denial of service, or potentially remote code execution. This means only those running OpenSSL versions 3.0.0 – 3.0.6 on email servers, email security gateway appliances/apps, and email clients doing X.509 certificate authentication are susceptible. Another mitigating factor is that many platforms implement stack overflow protections, which would mitigate the risk of remote code execution. With that said, upgrading is advisable.
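Klein's scoping logic reduces to a simple triage rule: a host is exposed only when it runs an affected 3.0.x build and performs X.509 certificate verification. A minimal sketch of that rule follows; the host records and field names are hypothetical, invented purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    openssl_version: tuple   # (major, minor, patch)
    verifies_certs: bool     # does this host perform X.509 certificate verification?

def needs_urgent_upgrade(h: Host) -> bool:
    """Apply the scoping described above: hosts on OpenSSL 3.0.0-3.0.6
    that verify X.509 certificates are the exposed ones; the fix is 3.0.7."""
    affected_build = (3, 0, 0) <= h.openssl_version <= (3, 0, 6)
    return affected_build and h.verifies_certs

fleet = [
    Host("mail-gw", (3, 0, 5), True),   # exposed: upgrade first
    Host("web-01", (3, 0, 5), False),   # affected build, lower immediate exposure
    Host("legacy", (1, 1, 1), True),    # 1.1.1 is outside this CVE's range
]
print([h.name for h in fleet if needs_urgent_upgrade(h)])  # ['mail-gw']
```

Even hosts that fall outside the urgent bucket but run an affected build (like the hypothetical `web-01`) should still be patched; the rule only orders the work, it does not excuse any 3.0.x instance from the upgrade.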
"What about all the hype and the initial CRITICAL rating? As a cybersecurity researcher, I’m happy it was a downgrade rather than the opposite. Furthermore, I applaud OpenSSL for its quick turnaround from announcement to fix.”
Do the OpenSSL vulnerabilities affect certificates? Not really, according to Sectigo's Chief Compliance Officer, Tim Callan. “The new OpenSSL vulnerability does not affect the issuance or use of certificates," he wrote. "No organization needs to revoke or reissue certificates based on this vulnerability. Because affected versions of OpenSSL do allow a buffer overflow attack, they should be patched immediately.”
Added, 6:45 AM, 11.3.22.
Mike Turner, VP WW of solution engineering at AppViewX, wrote to observe that, as is usually the case, public disclosure of a vulnerability renders criminal exploitation likelier and so increases the urgency of patching. “Despite the change in the assessment of the criticality of the vulnerabilities and the likelihood of an attacker being able to exploit them, OpenSSL’s rankings make sense. However, this doesn’t mean that organizations should not immediately take action and remediate these vulnerabilities. While they are no longer considered critical, they are now publicly announced, and that increases the risk of an outsider weaponizing them."
He believes nonetheless that the early heads-up OpenSSL gave about the patch helped security teams prepare for action.
"Although some question OpenSSL’s decision to pre-disclose the details of the vulnerabilities before the patch, the good news here is that the early disclosure gave cyber defenders an opportunity to be prepared for the worst. A “high” classified vulnerability should still be taken seriously and in some cases, this vulnerability could be of a critical nature depending upon how buffers were arranged on the stack and if remote code execution could still be possible. Applying the patch and monitoring for threats should be part of your immediate cyber hygiene plans."
"That said, from the initial announcement, some compared this instance to the Heartbleed bug. The reality is, the disclosure of the OpenSSL vulnerabilities as “high” was a big relief, along with the release of the patch. With the pervasive use of OpenSSL in all types of commercial and embedded software applications, we were expecting much worse if the CVEs were critical, as last week’s warnings were suggesting. The likelihood of exploitability is now much lower, but there is once again a spotlight on OpenSSL and other widely used open source components. Developers continue to use open source components like OpenSSL and Log4j because they don’t need to create code for functionality that is widely available. While this does accelerate the development process, it can put consumers/users at risk. As with what we learned with Log4j, documenting the open source code used in the software supply chain is needed more than ever to know where potential vulnerabilities may be hiding in the applications that we use and rely on.”
Added, 7:15 AM, 11.3.22.
Jon Geater, Chief Product and Technology Officer at RKVST, shared some thoughts on what the incident may have to teach us on software bills of materials:
"As organizations scramble to discover whether they are affected by the OpenSSL vulnerability, SBOMs have yet again been thrust into the spotlight as a panacea for software supply chain security. But it’s the automated, permissioned distribution of SBOMs that is essential to their effectiveness. Manual collection, searching, and verification of software lifecycle artifacts doesn't scale to meet this challenge effectively. All parties need a robust and reliable automated process, where the provenance and integrity of the information can be trusted and shared with exactly the right people and places, so that organizations can accurately assess and address risks in a timely manner. Going forward, SBOMs need to seamlessly integrate with the systems organizations use today so that the work of identifying exactly what is at risk can be automated.”