With software supply chain attacks surging, dev and application security teams should shift gears from legacy vulnerabilities to open-source repos, DevOps tools, and software tampering.
Sophisticated threat actors are shifting to new ways of compromising systems, targeting the accounts of privileged employees or vulnerable, public-facing applications. Malicious actors have found that they can leverage vulnerable development pipelines to advance sophisticated attacks, tampering with code under development to introduce malicious backdoors or other unwanted features.
A new ReversingLabs report, NVD Analysis 2022: A Call to Action on Software Supply Chain Security, shows that the National Vulnerability Database (NVD) is still dominated by flaws in a handful of legacy platforms. Vulnerabilities in the NVD represent only a minority of threats to software supply chains, because the database does not capture the full scope of emerging attacks.
As adversaries intensify their focus on the software supply chain, development and security teams need to shift their focus beyond the risks posed by vulnerabilities found on legacy platforms to emerging risks found in open source repositories, CI/CD tools, and code tampering. Attacks on open-source repositories such as npm and PyPI have surged 289% combined since 2018, the report found.
The report paints a clear picture: It's time for organizations to reassess their application security regimen to take into account the new supply chain risk landscape. Here are six reasons why it's time to go beyond traditional vulnerabilities.
1. Trusting code within the supply chain has become problematic
Many tools designed to help secure software-development pipelines focus on rating projects, programmers, and open-source components and their maintainers. However, recent events, such as the "protestware" episode that altered the node-ipc open-source package for political reasons and the hijacking of the popular ua-parser-js project to distribute a cryptominer, underscore that seemingly secure projects can be compromised or otherwise pose security risks to organizations.
Tomislav Peričin, co-founder and chief software architect at ReversingLabs, noted how in the case of SolarWinds, the trusted source was pushing infected software. Catching those kinds of mistakes requires a focus on how code behaves, regardless of where it came from.
"As long as we keep ignoring the core of the problem — which is how do you trust code — we are not handling software supply chain security."
2. Vulnerabilities need context to be effectively analyzed
Vulnerabilities viewed in isolation might be labeled “low risk,” but those same vulnerabilities viewed in the context of an interconnected system might lead to an exploit that could be labeled “high risk,” explained Caroline Wong, chief strategy officer at Cobalt Labs, a penetration testing company.
"It’s critically important that technology practitioners consider security vulnerabilities in the context of how they are used and how they might be misused."
3. Zero Trust is needed to protect the software supply chain
Zero Trust — a model that assumes every connection and endpoint on the network is a threat — plays a role in trust decisions. "I could see that concept being extended to software supply chain concerns," observed Daniel Kennedy, research director for information security and networking at 451 Research.
"For example, just because an open source library is heavily used or well-maintained doesn’t necessarily mean you should trust it implicitly. Just because code is on a build server doesn't necessarily mean it's the same code from your source repository. Nothing should be implicitly trusted based on its source alone."
Identities, too, need to be reviewed with a measure of distrust. "You can do all the vulnerability scanning that you want, but if you have an immature identity practice with legacy orphan accounts everywhere and overexposure of privileges, you're going to be hacked," said Garret Grajek, CEO of YouAttest, an identity auditing company.
4. Greater care is needed to prevent software tampering
Another recent ReversingLabs report, Flying Blind: Firms Struggle to Detect Software Supply Chain Attacks, based on a survey conducted by Dimensional Research, found that software organizations need to be able to detect tampering at any and all stages of development, including post-build and post-deployment. The report noted that while scans for tampering were fairly common during the build process (53%) and after build but prior to deployment (43%), far fewer survey respondents said they scanned code post-deployment (34%) or scanned individual components prior to build (33%).
As recent supply chain compromises indicate, the report continued, such spotty checks leave a great deal of room for threat actors to operate and to exploit a publisher's privileged access to customer environments to push malicious executables or exfiltrate sensitive data.
5. The next zero-day vulnerability is just around the corner
One of the most interesting subplots of the Log4Shell attacks is that within days of the initial vulnerability being patched, another high-severity vulnerability was found in the same code. The only reason the second one was found was that the Log4j code was undergoing intense scrutiny, said Larry Maccherone, DevSecOps transformation evangelist at Contrast Security, a maker of self-protecting software solutions.
"What this tells me is that the open source libraries that make up 80% of modern applications are literally littered with yet undiscovered vulnerabilities."
Once you recognize that, though, Maccherone said, you need to fundamentally change the way you think about keeping your dependencies updated. "Only patching when an attack happens is a recipe for disaster and long exposure times."
He recommends creating a robust integration test suite for applications. "Having a robust test suite is considered good engineering practice, but it’s surprising how many applications don’t have one," he said. "You may think of this as a quality issue, but it’s clearly also a security issue. Teams that don’t have one require days to release a fixed application. Teams that do, minutes."
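Maccherone's point is that a trustworthy test suite is what lets a team bump a dependency and ship the fix in minutes. A minimal sketch of that idea is below; `parse_order` is a hypothetical stand-in for any application entry point that leans on a dependency and must be re-verified after an update.

```python
import json

def parse_order(payload: str) -> dict:
    """Hypothetical application logic that depends on a library
    (here, the stdlib json module stands in for a third-party
    dependency you might need to patch on short notice)."""
    order = json.loads(payload)
    if "id" not in order or "qty" not in order:
        raise ValueError("malformed order")
    return order

# Integration-style tests, written in pytest conventions: if these
# pass after a dependency bump, the patched build can ship quickly.
def test_round_trip():
    assert parse_order('{"id": 7, "qty": 2}') == {"id": 7, "qty": 2}

def test_rejects_malformed_input():
    try:
        parse_order('{"id": 7}')
    except ValueError:
        pass  # expected: incomplete orders must be refused
    else:
        raise AssertionError("malformed order was accepted")
```

The suite itself is the safety net: the broader and faster it is, the cheaper it becomes to patch proactively instead of waiting for an attack.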
6. SBOMs are a critical step toward ensuring supply chain integrity
Software bills of materials (SBOMs) are like ingredient lists for software. But as valuable as SBOMs can be, adoption of the practice by software teams is "meager," the ReversingLabs survey report found. Only 27% of the IT pros participating in the survey said their employers generate and review SBOMs before releasing software. Half the respondents said their companies didn't generate SBOMs at all, and nearly half of those (44%) admitted it was because they lacked the expertise and staffing to do so.
"If you have requirements for a gluten-free diet, you want to really understand exactly what is in the cheesecake you’re eating. Similarly, in order to properly perform security testing and manage the cumulative risk of using different components in a tech stack, it’s helpful to have an accurate, up to date SBOM."
The tools are there. Now on to implementation
Tim Mackey, principal security strategist with the Synopsys Cybersecurity Research Center, said that an SBOM is a key component of a governance plan that encompasses not only open source usage in a software supply chain, but also becomes an anchor point for other operational elements. "The most common operational element being discussed today is the usage of SBOM to communicate new vulnerabilities within components in the software supply chain for the given application."
But he noted that the software team needs to be able to operationalize an SBOM.
"Such knowledge is obviously valuable, but if your organization doesn’t have a process to consume it, then there is minimal benefit from having an SBOM and associated vulnerability information."