One of the most important lessons I’ve learned in security is that it’s always better to push security problems back to the source as much as possible. For example, a small number of experts (hopefully) make cryptography libraries, so it’s generally better if they put in checks to prevent things like invalid curve attacks rather than leaving that up to applications, so that we don’t get the same vulnerabilities cropping up again and again. It’s much more efficient to fix the problem at the source than to have everyone re-implement the same redundant checks everywhere.
Now consider how we currently manage security vulnerabilities in third-party software dependencies. Current accepted wisdom is to lock dependencies to a single specific version, often with a cryptographic hash to ensure you get exactly that version. This is great for reproducibility, and everyone loves reproducibility. However, when there’s a security vulnerability in that dependency, every single consumer of that library has to manually update to the next version, and then their consumers have to update, and so on. The fix is done at source, but the responsibility for updating cascades through the entire ecosystem. This is not efficient. Two years after log4shell, around 25% of vulnerable consumers had apparently still not updated.
To solve this problem we have created an industry of automated nagging software: SCA tools that alert you to all the “risk” you are carrying, and the ever-watchful Dependabot, which will automatically upgrade everything for you. Combine this with CVSS severity inflation (CVSS 4 is not helping in this regard) and the accelerating production of new CVEs, and it’s not surprising that many developers find the whole situation demoralising and stressful. It’s an almost constant churn of new must-fix CVEs to address, especially when only about 1% of CVEs will ever go on to be exploited (rising to about 4.25% for critical CVEs). This is not a sustainable or efficient situation.
What would better look like?
There’s clearly a problem, but what would a solution look like? I have some ideas, but this is a complex problem where it is easy to introduce unintended side-effects. So take these suggestions as just that: suggestions, intended to provoke discussion rather than a perfect, fully-baked solution. There are lots of competing factors to balance here, and I’m not going to claim that I’ve considered them all.
Also, many of the suggestions I make below are not currently actionable. They are ideas for what the future might look like, not something you can implement right now.
Ultimately, I think that locking to specific versions is a mistake. And by locking, I mean not just explicit lockfiles, but also things like Maven where dependency versions are (usually) uniquely determined by the POM. This feels like such heresy to utter in 2026, and I’m sure there will be lots of angry reactions to this post. But in my opinion, it would be much healthier in general if software builds always pulled in the latest patch version of a dependency (and transitive dependencies), and specified only a particular major version and minimum minor version. (Although even that can be problematic).
“But, but, but, …”, I hear you scream. What about supply chain attacks? What about deterministic builds and reproducibility? What about unintended breakages in patch versions? Locking to a particular version lets you be more controlled in applying updates: Dependabot automatically upgrades, yes, but it raises a PR and lets you run your test suite first. This is surely better than just automatically pulling in the latest thing every time. What if someone publishes a malicious version of the package? I don’t want to just pull that in straightaway!
These are all completely valid concerns, but I believe they can be addressed by changes to dependency resolution:
- Firstly, just as you should implement a time delay for Dependabot to give some leeway for supply chain attacks to be discovered, the same should happen here: dependency resolution should have a built-in time delay, so that new versions are not resolved until they are at least N days old. (I believe most repos already track version publication time). This can be controlled by setting a policy, so that e.g. you can have a canary CI pipeline that always builds with the latest to flag any incompatibilities early.
- It should be possible to shun versions that are known to cause test failures or other incompatibilities. Ideally such shunning information would also feed back to the central repository so that frequently shunned versions can be investigated. A sudden version update breaks your PR for reasons unrelated to your changes? Shun it! This changes the default from opting in to security updates to opting out of them.
- Building from source should always produce a detailed SBOM that lists exactly which versions of which libraries went into that build. It should then be possible to specify the SBOM when (re-)building to have it resolve exactly those versions, giving us back reproducibility. Essentially, this is producing the same information as a lockfile, but at build-time rather than commit-time. This allows retrospective rather than proactive reproducibility. (If you want to be a bit more deterministic around releases then it seems reasonable to me to switch to SBOM-locked builds at code-freeze).
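The first two ideas could be combined into a single resolution policy. Here is a minimal sketch, assuming the repository exposes publication timestamps (which most already track); `Release` and `resolve_with_policy` are hypothetical names, not a real API.

```python
# Sketch: version resolution with a built-in publication delay and a
# shun list. A release is only eligible once it is at least
# min_age_days old and has not been shunned. min_age_days=0 models the
# canary CI pipeline that always builds with the very latest version.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Release:
    version: tuple[int, int, int]   # (major, minor, patch)
    published: datetime             # timestamp from the package repository

def resolve_with_policy(releases, shunned, min_age_days=7, now=None):
    """Pick the newest release that is old enough and not shunned."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=min_age_days)
    eligible = [
        r for r in releases
        if r.published <= cutoff and r.version not in shunned
    ]
    return max(eligible, key=lambda r: r.version, default=None)

now = datetime(2026, 1, 15, tzinfo=timezone.utc)
releases = [
    Release((1, 4, 6), datetime(2025, 9, 1, tzinfo=timezone.utc)),   # shunned below
    Release((1, 4, 7), datetime(2025, 11, 1, tzinfo=timezone.utc)),
    Release((1, 4, 8), datetime(2026, 1, 14, tzinfo=timezone.utc)),  # too new
]
chosen = resolve_with_policy(releases, shunned={(1, 4, 6)}, min_age_days=7, now=now)
print(chosen.version)  # (1, 4, 7): newest release that is old enough and not shunned
```

Note that the delay window and the shun list do complementary jobs: the delay gives the ecosystem time to spot a malicious or broken release before anyone resolves it, while shunning handles the ones that slip through.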
(You could implement some of these things right now by having your build scripts run e.g. “uv lock --upgrade” or “mvn versions:use-latest-versions” before each build, but again this shifts the responsibility onto consumers.)
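The SBOM-as-retrospective-lockfile idea can also be sketched in a few lines. This is deliberately not a real SBOM format (in practice you would emit SPDX or CycloneDX); it just shows the record-on-build, replay-on-rebuild round trip.

```python
# Sketch: build-time SBOM as a retrospective lockfile. A normal build
# records exactly which versions it resolved; a rebuild feeds that
# record back in and resolves exactly those versions again. The format
# here is a stand-in, not SPDX or CycloneDX.
import json

def record_sbom(resolved: dict[str, str]) -> str:
    """Serialise the name -> version map that this build actually used."""
    return json.dumps({"components": resolved}, sort_keys=True)

def resolve_from_sbom(sbom: str) -> dict[str, str]:
    """Rebuild: resolve exactly the versions a previous build's SBOM lists."""
    return json.loads(sbom)["components"]

build_1 = {"libfoo": "1.5.2", "libbar": "0.9.14"}
sbom = record_sbom(build_1)
assert resolve_from_sbom(sbom) == build_1  # reproducible after the fact
```

The difference from a lockfile is purely one of timing: the same information is captured, but as an output of the build rather than an input to it, so reproducibility is available on demand instead of being the default.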
How would this be better? It means that the default shifts from pulling in fixed insecure versions to always pulling in newer, more secure versions. It’s based on an assumption that the overwhelming majority of software patches are good. It also shifts work away from downstream developers: for the most part, updates will happen automatically and without any manual intervention. And it happens for everyone, not just the projects mature enough to be running Dependabot. And it happens on every active release branch, not just on main.
A further advantage of this approach is that most low and medium severity issues (and probably a fair number of “high” ones too) could be fixed without a CVE being issued at all. The whole CVE process exists largely so that vendors can scaremonger and sell tooling, and security researchers can make a name for themselves. I frankly find it one of the most embarrassing and immature aspects of software security. Many smaller projects don’t have the time or inclination to issue CVEs, and just silently fix any security bugs in the next release. Frankly, that should be the norm. The only reason it isn’t is that we’ve got ourselves into a situation where CVEs have to be published because nobody updates without them. The default is to stick with the older insecure versions, so you have to scream loudly to overcome that inertia. Because updating is work and not updating is free. Switch the default and perhaps we can all start to calm down a bit. Maybe.