The Log4Shell vulnerability is considered to be one of the most significant software bugs in recent years, because of its severity, pervasiveness and long-lasting impact on organizations.
The Log4Shell vulnerability is present in several older versions of the Apache Software Foundation's Log4j, a logging framework that is almost ubiquitous in Java application environments. The flaw gives remote attackers a relatively easy way to execute arbitrary code on an affected system. Apache disclosed the vulnerability in December 2021.
More than a year after the initial revelation, Log4Shell remains a potent threat to government and private entities. A study that Tenable released in November 2022 showed that as of the end of October that year, 72% of organizations remained vulnerable to the threat. Despite efforts to address the issue, many organizations stayed exposed because of the challenges involved in identifying and remediating the flaw in legacy application environments.
Because the vulnerable component can often sit several layers deep in an application, enterprise security and development teams have had a hard time identifying vulnerable assets—or even knowing whether they are vulnerable. Even today, more than two years after the bug's disclosure, a startling 25% of all Log4j downloads from the Maven Central Java repository are still of vulnerable versions.
In a recent report, a team of senior software architects and engineers from eBay, Fidelity Investments, T-Mobile and Tasktop summarized their major takeaways and lessons learned from responding to novel vulnerabilities like Log4Shell.
Here are four lessons from the Log4Shell vulnerability for software teams.
1. Automated processes can make a huge difference
The extent and quality of the automated processes an organization has in place have a direct impact on its ability to detect and remediate novel vulnerabilities such as Log4Shell. The organizations that responded most effectively to the Log4Shell challenge were those with automated processes, such as Software Composition Analysis (SCA) tooling, for discovering vulnerable applications and identifying the teams responsible for those applications.
These organizations were also able to automatically inform the owning teams of what they needed to do to remediate the issue. The study found that the organizations with the best outcomes could automatically rebuild, test and redeploy applications. Importantly, they could also verify that remediated applications in production were, and remained, free of the vulnerability.
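In practice this discovery work is handled by SCA tools, but a bare-bones sketch shows what the check reduces to. The Java snippet below is a hypothetical illustration only, not a stand-in for real SCA tooling: it walks an assumed directory of deployed artifacts and flags log4j-core JARs below 2.17.1 by filename, ignoring the shaded and nested copies that real scanners would also catch.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import java.util.stream.Stream;

// Hypothetical, simplified discovery check: walks a directory of deployed
// artifacts and flags any log4j-core JAR older than 2.17.1. Real SCA tools
// also inspect shaded/nested JARs and dependency manifests.
public class Log4jJarScan {
    // Matches file names such as log4j-core-2.14.1.jar
    private static final Pattern LOG4J_CORE =
            Pattern.compile("log4j-core-(\\d+)\\.(\\d+)\\.(\\d+)\\.jar");

    public static void main(String[] args) throws IOException {
        // Assumed scan root; pass a different path as the first argument.
        Path root = Paths.get(args.length > 0 ? args[0] : "/opt/apps");
        try (Stream<Path> paths = Files.walk(root)) {
            paths.filter(Files::isRegularFile).forEach(Log4jJarScan::check);
        }
    }

    private static void check(Path jar) {
        Matcher m = LOG4J_CORE.matcher(jar.getFileName().toString());
        if (!m.matches()) {
            return;
        }
        int major = Integer.parseInt(m.group(1));
        int minor = Integer.parseInt(m.group(2));
        int patch = Integer.parseInt(m.group(3));
        // 2.17.1 is treated here as the minimum safe version.
        boolean vulnerable = major == 2 && (minor < 17 || (minor == 17 && patch < 1));
        if (vulnerable) {
            System.out.println("VULNERABLE: " + jar);
        }
    }
}
```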
Matt Rose, Field CISO at ReversingLabs, said that a security solution that analyzes the final deliverable package prior to deployment is the best way to deal with Log4Shell-type issues.
"The deployable package is what is exposed to the public so it would be the source of truth for risk."
—Matt Rose
Automating software supply chain scanning prior to execution of CD pipelines gives development organizations the ability to differentiate between releases, Rose said.
"[Automated scanning] provides you with a historical narrative of the production instance of your software or application and verifies vulnerabilities like Log4Shell indeed are remediated and stay remediated."
—Matt Rose
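To illustrate the kind of artifact-level analysis Rose describes, here is a minimal, hypothetical sketch that opens a single deployable JAR or WAR and reports the JndiLookup class or any bundled log4j-core JAR. Production supply chain scanners go far deeper (nested archives, shaded classes, hash matching); the entry names checked here are simply the ones tied to Log4Shell.

```java
import java.io.IOException;
import java.util.Enumeration;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

// Minimal sketch of an artifact-level check: opens a deployable JAR/WAR and
// reports entries associated with the Log4Shell flaw. This only shows the
// basic idea behind scanning the final deliverable package.
public class ArtifactCheck {
    public static void main(String[] args) throws IOException {
        // Path to the deployable package is passed as the first argument.
        try (JarFile artifact = new JarFile(args[0])) {
            Enumeration<JarEntry> entries = artifact.entries();
            while (entries.hasMoreElements()) {
                String name = entries.nextElement().getName();
                // The JndiLookup class is the component abused by Log4Shell.
                if (name.endsWith("core/lookup/JndiLookup.class")) {
                    System.out.println("Found JndiLookup: " + name);
                }
                // A bundled log4j-core JAR inside a WAR/fat JAR also warrants review.
                if (name.contains("log4j-core-") && name.endsWith(".jar")) {
                    System.out.println("Found bundled log4j-core: " + name);
                }
            }
        }
    }
}
```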
Organizations need the ability to update all production applications, including legacy systems, on demand.
Often, an organization that updated to the remediated version of Log4j had no reliable way of validating that the update did not break existing applications. This was a particular problem with legacy applications still in production. The researchers found that at many organizations these applications had missing or unclear build processes, minimal automated testing or verification, missing or unreliable deployment pipelines, and little or no monitoring. Much of this, they found, was the result of a conscious decision not to invest in robust pipelines for legacy applications on the grounds that those applications are no longer changing or evolving.
That is a mistake, because while the applications themselves might not be changing, their underlying dependencies almost always need upgrading over time. Organizations that have a continuous delivery pipeline for all their production systems, legacy or not, are in a better position to respond to novel vulnerabilities such as Log4Shell, the researchers found.
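Even a legacy application with a bare-bones pipeline can carry a lightweight verification gate. The sketch below is a hypothetical example, assuming log4j-core is on the application's classpath and treating 2.17.1 as the minimum acceptable version: it reads the library's manifest version, checks whether the JndiLookup class is still reachable, and exits non-zero so the pipeline stage fails instead of quietly redeploying the old dependency.

```java
// Hypothetical verification gate for a CI/CD stage: confirms that the
// log4j-core on the application classpath is at least 2.17.1 and that the
// JndiLookup class (the component abused by Log4Shell) is gone. Assumes
// log4j-core is a direct or transitive dependency of the application.
public class Log4jUpgradeGate {
    public static void main(String[] args) {
        String version;
        try {
            Class<?> core = Class.forName("org.apache.logging.log4j.core.LoggerContext");
            Package pkg = core.getPackage();
            // Implementation-Version comes from the log4j-core JAR manifest.
            version = (pkg == null) ? null : pkg.getImplementationVersion();
        } catch (ClassNotFoundException e) {
            System.out.println("log4j-core not on classpath; nothing to verify");
            return;
        }

        boolean jndiLookupPresent = true;
        try {
            Class.forName("org.apache.logging.log4j.core.lookup.JndiLookup");
        } catch (ClassNotFoundException e) {
            jndiLookupPresent = false; // class removed: the widely used interim mitigation
        }

        if ((version == null || olderThan(version, "2.17.1")) && jndiLookupPresent) {
            System.err.println("FAIL: log4j-core " + version + " still looks vulnerable");
            System.exit(1); // a non-zero exit fails the pipeline stage
        }
        System.out.println("OK: log4j-core " + version);
    }

    // Deliberately simplified comparison of dotted release versions; a real
    // gate should use a proper version-parsing library.
    private static boolean olderThan(String version, String minimum) {
        String[] a = version.split("\\.");
        String[] b = minimum.split("\\.");
        for (int i = 0; i < Math.min(a.length, b.length); i++) {
            int diff = Integer.parseInt(a[i]) - Integer.parseInt(b[i]);
            if (diff != 0) {
                return diff < 0;
            }
        }
        return a.length < b.length;
    }
}
```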
2. Updated, accurate inventory is a must for SCA
An active, dynamically generated inventory of software and endpoints is critical to assessing and mitigating the threat posed by novel vulnerabilities such as Log4Shell. Whether the inventory comes from a configuration management database (CMDB) or from an inventory of code, SCA requires an active, updated list that reflects reality, the researchers said. An inventory of vulnerable endpoints can help organizations prioritize remediation based on factors such as exposure, the criticality of the asset, and whether it touches customer data or other sensitive business data.
CI/CD pipelines must include the capability to inventory all the components that go into a software product. With a large and growing share of enterprise application codebases composed of third-party and open-source components, these inventories are crucial to identifying novel vulnerabilities such as Log4Shell. Organizations that don’t yet inventory the components used in their products should look into building a Software Bill of Materials (SBOM), the researchers said.
Mike Parkin, senior technical engineer at Vulcan Cyber, said Log4Shell, along with a number of other recent vulnerabilities in widely used libraries, has helped push more organizations toward using an SBOM to deal with the issue.
"An accurate SBOM makes it much easier to check which applications are affected when a library vulnerability comes to light, for example, so they are much easier to mitigate."
—Mike Parkin
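To make Parkin's point concrete, the sketch below reads a CycloneDX-style JSON SBOM and reports any log4j-core component it finds. It assumes the Jackson library is on the classpath and that the SBOM uses the usual top-level components array; as the next section notes, the check is only as useful as the SBOM is accurate.

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.io.File;
import java.io.IOException;

// Illustrative sketch: reads a CycloneDX-style JSON SBOM (path passed as the
// first argument) and prints every log4j-core component it declares, so the
// affected applications can be reviewed against the Log4Shell advisories.
public class SbomCheck {
    public static void main(String[] args) throws IOException {
        JsonNode bom = new ObjectMapper().readTree(new File(args[0]));
        for (JsonNode component : bom.path("components")) {
            String group = component.path("group").asText();
            String name = component.path("name").asText();
            String version = component.path("version").asText();
            if ("org.apache.logging.log4j".equals(group) && "log4j-core".equals(name)) {
                // Matching the version against advisory data (e.g. CVE-2021-44228,
                // CVE-2021-45046) is the job of SCA/vulnerability tooling.
                System.out.println("Uses log4j-core " + version + ": review against the advisories");
            }
        }
    }
}
```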
3. SBOMs are good — but only if they are actionable
Log4Shell showed organizations that they did not have a good, accurate handle on their application architecture and inventory, said Rose of ReversingLabs. Many organizations spent days trying to identify and remediate vulnerable infrastructure because their CMDBs and SBOMs were out of date and inaccurate. Far from helping, these inventories caused considerable confusion because of their inaccuracy.
"Log4Shell made organizations realize that they need better understanding of their SBOMs."
—Matt Rose
SBOMs have received a lot of attention over the past two years at least partly because of a May 2021 Executive Order from the Biden Administration that requires all federal civilian agencies to obtain an SBOM for any software they acquire from a commercial software provider or other third-party. Many see the requirement as critical to ensuring that federal agencies at least know what open-source and third-party components are present in their software.
But SBOMs alone are little more than an ingredients list for software. As many have contended, they are not effective if organizations cannot act on the information they contain, said Mark Lambert, vice president of products at ArmorCode.
"Events like the Log4Shell vulnerability highlighted the need and value of SBOM at the enterprise level. But for many this has not yet translated into how development teams will be able to leverage them without slowing down software delivery with manual tasks."
—Mark Lambert
4. Automating SBOMs is key to software team workflow
Like Security Operations Center (SOC) teams, DevSecOps teams already face alert fatigue, and SBOMs can end up as simply more data for development teams to manage as they deliver software, Lambert said.
Organizations are going to need ways to automate generating, publishing and ingesting SBOMs, he said.
"They will need ways to bring the remediation of the associated vulnerabilities into their current application security programs without having to adopt whole new workflows."
—Mark Lambert