Sigstore - A new standard for signing, verifying and protecting software
sigstore.dev
This looks quite interesting, and is sponsored by the Linux Foundation and several other orgs. Code signing is definitely a mess in the Linux world.
One thing I'm less happy about is how these sorts of projects always tend to build up a whole parallel universe, dragging along a whole suite of dependencies and related projects (Cosign, Rekor, Fulcio, etc.).
I understand why we might want to fill gaps in existing open source tools, but it makes adopting these platforms a massive migration effort, where I need to go to several projects' documentation to learn how everything works. Naming-wise, I would also much prefer boring, descriptive names over the modern fancy project names.
I only started digging into this space last week, but I think cosign, rekor, and fulcio are not related projects but rather critical components of sigstore. Cosign is the CLI for signing and verifying artifacts, Rekor is the transparency log, and Fulcio is the certificate authority.
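For a concrete feel of how the pieces fit together, here is a minimal sketch of the key-based cosign flow (the image reference is a placeholder and cosign is assumed to be installed); in keyless mode the same sign/verify steps instead fetch a short-lived certificate from Fulcio and record the signature in Rekor's transparency log.

```python
import subprocess

# Placeholder image reference; substitute an image you control.
IMAGE = "ghcr.io/example/app:1.0.0"

def cosign(*args):
    """Run a cosign subcommand and fail loudly if it errors."""
    subprocess.run(["cosign", *args], check=True)

# 1. Generate a local key pair (writes cosign.key / cosign.pub;
#    prompts for a password to encrypt the private key).
cosign("generate-key-pair")

# 2. Sign the image; cosign pushes the signature to the registry
#    next to the image.
cosign("sign", "--key", "cosign.key", IMAGE)

# 3. Verify the signature against the public key.
cosign("verify", "--key", "cosign.pub", IMAGE)
```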
As an outsider it's not super appealing to look into a project and immediately be overwhelmed with 3 other projects I've never heard of. Maybe I'm just not the target demographic, or maybe the project is fragmented?
But why not just call them sigstore-cli, sigstore-log, and sigstore-ca?
Google [0] and GitHub [1] both released blog posts recently describing how to use Sigstore with GitHub Actions to sign build artifacts.
[0]: https://security.googleblog.com/2022/04/improving-software-s...
[1]: https://github.blog/2022-04-07-slsa-3-compliance-with-github...
Does this standard prevent unsigned portions, a la Dropbox/Chrome telemetry with Authenticode?
https://docs.microsoft.com/en-us/archive/blogs/ieinternals/c...
> the signature blocks themselves can contain data. This data isn’t validated by the hash verification process, and while it isn’t code per-se, an executable with such data could examine itself, find the data, and make use of it
Sigstore maintainer here. I'll try to answer questions!
Are there plans to integrate it with something like Crev[0] for tying trusted code/security reviews to the binary artefacts?
I suppose the people you trust to audit some code will likely not be the same people you trust to do build verification for you, but it might be nice to manage those trust relationships in a single UI/config.
We use the generic in-toto Attestation data model, which could capture crev-style reviews, but there are no other concrete plans that I'm aware of.
To be honest, crev is pretty elegant, but I find manual code review like this fairly ineffective at stopping attacks.
Perhaps we need better tools for helping manual code review. Detecting high-entropy strings would be a useful semi-automated check to find obfuscated code and accidentally committed secret keys (a rough sketch of such a check is at the end of this comment).
I think there should also be a culture of ensuring that a new patch release of some software passes the acceptance tests of the previous patch release (without changing or removing the tests).
A similar check for linting rules should also help (especially if those rules are designed to prevent Unicode homoglyph attacks), and a check for new uses of dangerous APIs like filesystem or network access would assist reviewers too.
Of course, there is almost unlimited potential for underhanded code if an attacker is skilled and patient enough to carefully introduce subtle bugs over time, but I think that a meaningful number of attacks could be avoided with these measures in place.
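As a rough illustration of the semi-automated entropy check mentioned above, here is a minimal sketch; the tokenization regex, length cutoff, and 4.5 bits/char threshold are arbitrary choices for illustration, not taken from any existing tool, and the patch file name is hypothetical.

```python
import math
import re

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character of s."""
    if not s:
        return 0.0
    n = len(s)
    return -sum((s.count(c) / n) * math.log2(s.count(c) / n) for c in set(s))

def flag_high_entropy_tokens(text: str, min_len: int = 20, threshold: float = 4.5):
    """Yield (token, entropy) for tokens that look like keys or obfuscated blobs."""
    pattern = r"[A-Za-z0-9+/=_\-]{%d,}" % min_len
    for token in re.findall(pattern, text):
        entropy = shannon_entropy(token)
        if entropy >= threshold:
            yield token, entropy

# Example: scan an incoming patch before reviewing it by hand.
with open("incoming.patch") as f:  # hypothetical file name
    for token, entropy in flag_high_entropy_tokens(f.read()):
        print(f"suspicious ({entropy:.2f} bits/char): {token[:40]}...")
```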
There is nothing limiting crev to manual code review, is there?
An ecosystem of useful bots would seem like a natural addition to it.
It'd be wonderful to have a quick what-it's-for and what-it's-not-for, eli5 style, on that home page. The current page is a bit light on details.
Is sigstore relevant only for signing Linux distributions, or do you see it being relevant for language-specific package managers, like rubygems/npm/pip/...?
Is there an npm story yet? How about Deno?
The RFC trying to introduce sigstore for RubyGems is an interesting look at this in practice: https://github.com/rubygems/rfcs/pull/37
Two hard facts are: 1) you need to get Microsoft on board, and 2) it doesn't mean much without developer ID verification and financial cost.
Short of those two, doesn't it just become a way for app stores to maintain walled gardens, or a means of replacing open source GPG package signing with a centralized web of trust? I guess the cosign part means some decentralization, like GPG? I am not bashing it; it can help with supply chain attacks, but without those two items I predict adoption woes and heavy use by malicious actors. Is Firefox signed by Mozilla legit, or is Firefox signed by Mozilla Corporation legit?
> 1) You need to get Microsoft onboard
Given the work they are (ironically) doing on open source supply chain security[0], it would be embarrassing if they didn't end up implementing something similar for apps in the Windows Store.
> 2) It doesn't mean much without developer ID verification and financial cost
Even without verifying an ID, tools will be able to accumulate trust in long-standing identities and flag when you are installing a package made by an identity that no one has ever heard of (which can be a telltale sign of a typosquatting attack[1], for example; a rough sketch of such a check follows below).
You're right, though, that in some reductionist sense, "all we're doing" is moving the trust problem from binaries to (source code to reviews/audits to) pseudonymous digital identities. Closing the gap between those identities and the legal system is a cultural/political question that needs to be thought about separately, but I do think that having a decentralised web-of-trust system would greatly increase the cost for attackers and make attacks significantly less frequent.
[0] https://news.ycombinator.com/item?id=27930594
[1] https://www.theregister.com/2017/08/02/typosquatting_npm/
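To make the "accumulate trust in long-standing identities" idea a bit more concrete, here is a purely hypothetical sketch: it assumes you keep a local JSON record of when each signing identity was first seen (in practice that could be built from Rekor's transparency log) and warns on identities that are unknown or very new. The file name, helper names, and 180-day cutoff are all invented for illustration.

```python
import json
from datetime import date, timedelta

# Hypothetical local history mapping signing identity -> ISO date first seen,
# e.g. {"maintainer@example.dev": "2019-06-01"}.
FIRST_SEEN_DB = "identity_first_seen.json"

def first_seen(identity: str):
    """Return the date this identity was first seen, or None if unknown."""
    with open(FIRST_SEEN_DB) as f:
        db = json.load(f)
    ts = db.get(identity)
    return date.fromisoformat(ts) if ts else None

def check_identity(identity: str, min_age_days: int = 180) -> None:
    """Warn if the signer is unknown or too new to have accumulated trust."""
    seen = first_seen(identity)
    if seen is None:
        print(f"WARNING: never seen '{identity}' before, possible typosquat")
    elif (date.today() - seen).days < min_age_days:
        print(f"WARNING: '{identity}' first appeared only {(date.today() - seen).days} days ago")
    else:
        print(f"ok: '{identity}' has a long-standing signing history")

check_identity("maintainer@example.dev")
```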
This is all great. Signing and verifying software is important. But it is woefully inadequate in a post-SolarWinds/NotPetya/Flame world. We need something that allows an organization to verify that code has not been maliciously tampered with. I can only think of a combination of sandboxing to detect detonation and C&C comms, and reverse engineering to compare updates with previous versions. The last is problematic because most licensing bars reverse engineering, but oh well.
I agree. There are projects such as https://github.com/ossf/package-analysis and https://github.com/step-security/harden-runner that do behavior analysis. Disclosure: I'm a maintainer of the second one.
NotPetya delivered itself via an official update, but then did nothing for a month. It was triggered using a response to a standard update check message. Seconds later it was compromising everything in sight.
My point being that sandboxing, etc. would not have helped you at all.
If there was a way to know the behavior of NotPetya and realize that it has code to do things that M.E.Doc (the tax preparation program that was backdoored) was not supposed to do, that could have been used as a way to reject its installation. We are far away from being able to do such analysis at scale, but my point is that it is ultimately the behavior of software that makes it malicious or not.
I don't think that is possible for any software that isn't completely trivial. It's related to the halting problem.
If you are relying on detecting behaviour, then you have to run it.
NotPetya did nothing abnormal until it was triggered by the response to a normal network call. The first opportunity to block it would be when it was triggered.
So you could not have blocked the install by this method.
You can detect likely malicious behavior and contain those systems, which would have helped.
I found this blog article to be a good introduction to sigstore and the related projects (such as Fulcio): https://www.giantswarm.io/blog/securing-the-software-supply-...
Sigstore and cosign are so simple to use. I set up all the containers I maintain to be signed (this is done within the GitHub Action).
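For context, the signing step in such a workflow boils down to one keyless cosign invocation per image. Here is a rough sketch of what that step runs, written as Python for illustration (assuming cosign 1.x, where keyless signing is gated behind COSIGN_EXPERIMENTAL, and using a placeholder image list):

```python
import os
import subprocess

# Placeholder image references; in a real workflow these would come from the
# preceding build-and-push step.
IMAGES = ["ghcr.io/example/app:sha-abc123"]

# Keyless signing: cosign obtains a short-lived certificate from Fulcio using
# the CI job's OIDC identity and records the signature in Rekor, so no
# long-lived signing key has to be stored in CI secrets.
env = dict(os.environ, COSIGN_EXPERIMENTAL="1")

for image in IMAGES:
    subprocess.run(["cosign", "sign", image], env=env, check=True)
```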
I know this might seem random and a bit of a big ask, but would you consider publishing some of your website's front-end bits on your GitHub, too? It's a really nice job and seems highly optimized, and I am curious about how it was made and delivered.
In other words, kudos?
Is that what you're looking for?
Yoo thanks! That is indeed what I was looking for.
(I am pretty bad at finding things after a long day's work of development)
I just recently investigated cosign for signing and verifying local container images. It seems very useful.