Docker Hub Two-Factor Authentication
I really wish places like Docker Hub and package managers (pip, npm, apt, etc.) would articulate the edges of their security boundaries better.
Like, can I expect that every Docker image I pull has been audited for at least obvious and intentional backdoors? What if I update a trusted image, and some time down the line the original poster's account is breached? Has the code had an in-depth audit that at least indicates it's /somewhat/ resilient to being poked with a sharp stick?
Two-factor at least improves security, but my security model must still rely on the Dockerfile maintainer not getting breached - I've never met any of the maintainers, and cannot independently verify their operational security at all.
I know there's at least some foil covering my hair right now, but the security of these kinds of services means anyone who's using them is effectively building a whole bunch of infrastructure right on top of a handful of houses of cards.
If anyone has any solutions that would solve this, I'd be interested in hearing about and maybe implementing them. I've long thought of some sort of centralized, community-driven, independent code-review/pentest notes to at least provide some level of assurance, but I don't think there'd be enough interest to hit critical mass, and the project would just die out.
If this is your threat model, fork the repo with the Dockerfile and build it yourself.
That doesn't solve anything. I use countless Docker images. My PII/other data of value is also stored on a bunch of hosts I have no oversight or control over, and I'm sure at least some of those rely on random fragments of code sourced from repos.
It'll only be a matter of time before there's a huge breach as a result of tainted software ending up in a popular docker build, github repo, or packaged into a mainstream Linux distro repo.
I have no idea if distros inspect package source, last time I googled it, I couldn't find any indication either way.
> I use countless docker images
Which is why you need to build your own if this is the threat model.
> My PII/other data of value is also stored on a bunch of hosts who I have no oversight or control over,
Correct, this is why companies perform security assessments of vendors before granting access to sensitive data, and have contracts in place that help hold the vendors accountable.
> I have no idea if distros inspect package source, last time I googled it, I couldn't find any indication either way.
It would depend on the distro and maintainer of course, but I'd expect they do to some degree as their personal and professional reputations depend on it.
> Which is why you need to build your own if this is the threat model.
I am an individual, I cannot review every container I have control over, and version pinning might lead to unpatched publicly disclosed vulns. I could code review a few, but there's no way I could cover everything I run on my own. Add in $random_distro_packages and $random_git_clones and the ratio of review to functional use would be 10:1.
> Correct, this is why companies perform security assessments of vendors before granting access to sensitive data, and have contracts in place that help hold the vendors accountable.
I am an individual; I have virtually no control over which companies hold my data. Security in most medium to large organisations leaves a lot to be desired. I highly doubt project teams in large companies that don't care about security evaluate Docker containers for security. I have no control over it, and laws covering professional negligence in most/all countries are toothless.
> It would depend on the distro and maintainer of course, but I'd expect they do to some degree as their personal and professional reputations depend on it.
That's awfully optimistic of you. Their reputation depends more on them delivering quality, functional releases in a timely manner. Larger distributions probably have some level of security audit but I don't know what level that is, and haven't seen any public details which indicate it, either.
It is expected that you can't review everything you use, but you can still control the amount of trust you put in random blobs downloaded from the internet. If you're using dozens of docker images, perhaps you're trusting too much? You're trusting the security of the base the image is built on, the quality of the image, the builder, and the security of whoever pushes the images. That's a lot to trust.
Personally, I trust my distribution maintainers, because I know that they build packages from sources that could be audited if I wanted, and the build process is such that injecting malware into it is nontrivial (builds are done without internet access).
Backdooring upstream projects is possible, but any individual project in wide use is likely to have at least some sort of review, so it's not all that likely (compared to hijacking a docker hub image) that a backdoor would make it all the way into a distribution before it's noticed.
Well, it seems you answered your own question.
There is a reason security-conscious software houses vendor (and vet some of) their dependencies, despite it being a pain in the ass.
Some individuals have good opsec; most don't. And no repo for any language I know of does security audits on all its contents. They might do so for targeted libs like crypto or similar, or run some automated software that might catch some edge cases, but I wouldn't put too much trust in it in general.
Generally speaking, your safety lies in using popular libs, on the theory that if something bad happens there's a higher chance of somebody noticing.
But the situation is not good from a security perspective.
Of course, in the proprietary world, the situation is in my experience even worse.
That said, Dockerfiles are usually simple, and I have no difficulty inspecting the ones I care about. I do, however, always clone their repos so I can simply diff the changes; keeping up with updates is then not that big of a deal.
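The clone-and-diff workflow above can be sketched like this. It's a self-contained demo in which a throwaway local repo stands in for the upstream project; in practice you'd `git clone` the real repo and diff against the tag or commit you last audited:

```shell
# Stand-in for an upstream repo that ships a Dockerfile.
set -eu
workdir=$(mktemp -d)
cd "$workdir"
git init -q upstream
cd upstream
printf 'FROM debian:10\nRUN apt-get update\n' > Dockerfile
git add Dockerfile
git -c user.email=you@example.com -c user.name=you commit -q -m 'v1'
git tag v1   # the version you reviewed and built

# Upstream later publishes an update.
printf 'FROM debian:10\nRUN apt-get update && apt-get install -y curl\n' > Dockerfile
git -c user.email=you@example.com -c user.name=you commit -qam 'v2'

# Before rebuilding, review exactly what changed since the version you audited.
git diff v1 HEAD -- Dockerfile
```

The diff is usually a handful of lines, which is what makes this review habit sustainable even across many images.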
Of course you still have to trust upstream so ...
Not really, it's a bandaid that solves the immediate problem for me but not even for my data. Even then it's not so much about 'my data' but more about herd immunity.
It'll only be a matter of time until a major resource is poisoned, with severe outcomes for many organisations across the globe. Until that happens, status quo, I guess.
The current process of reviewing everything you use isn't maintainable, but outsourcing the reviewing is equally bad. My original post was _intending_ to ask for suggestions that solve the issue on a more widespread approach, but I guess either nobody understood me or nobody is interested.
I'm not saying that what you're suggesting is theoretically impossible, I'm suggesting maybe there's a better way of going about it.
Open Source is not like a normal product. It's hobby software put together by volunteers for free. You get what you pay for. If you ever use open source software that I published for free, I am promising you right now, it is not secure, it has not been audited, you should not trust it.
That said, a lot of open source software actually follows pretty good security practice in general, and is eyeballed by some smart people, so you generally get a little better quality than you'd get from a bunch of devs locked up in a cathedral who don't have to fear people pointing out their embarrassingly insecure bugs. But none of that stopped kernel.org from getting compromised, which has happened. So don't count on auditing; the code isn't terrible, but that doesn't mean much either.
The best practice I've found to deal with that is to run old-yet-supported-and-patched software and hope it's old enough that if there's something wrong with it, hopefully somebody has caught it. You don't really get that benefit if there's one single source for all the source/build artifacts, but it can work for artifacts that have many mirrors and have good signing practices.
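The "many mirrors, good signing practices" point can be made concrete. A hedged sketch, with placeholder filenames; the gpg step is shown as a comment because it requires the maintainer's public key, which you'd need to obtain out of band:

```shell
# Stand-in for a release tarball downloaded from one mirror.
printf 'release contents\n' > foo-1.2.tar.gz

# Compare this digest against the checksums published on several independent
# mirrors; a single poisoned mirror then can't silently swap the artifact.
sha256sum foo-1.2.tar.gz

# If the project publishes detached signatures, also verify one:
#   gpg --verify foo-1.2.tar.gz.sig foo-1.2.tar.gz
```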
On your end, you need to check for and update known-vulnerable versions of software. You have to do that for any software, but especially OSS, where there is no sales or support associate that's going to e-mail you to tell you to update your applications. You can easily be running OSS software which is known to be vulnerable.
Now I'm waiting for FIDO U2F key support. Also waiting on Cloudflare to add it as well!
Same here. The only 2FA method I trust is FIDO U2F. If only sites would let me federate my authentication so I have one place to manage my tokens instead of having to manage them on each site.
I think WebAuthn is going to be the thing that finally pushes this into the mainstream. The browser integration is fantastic.
Can folks just build their own internal repo from scratch? Like, use a Dockerfile to build a base image?
Yes. You can say "FROM scratch" at the top of your Dockerfile to build an image from nothing (it's actually a no-op). You can also set up a Docker registry using open source software that isn't distributed as a Docker image. There are several variants, but the one I've had my eye on is Nexus OSS[1]. It can manage lots of things, not just Docker.
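A minimal sketch of both suggestions, with hypothetical names throughout (`myapp`, `registry.internal`); the build/push steps are comments because they need a running Docker daemon and a statically linked binary:

```shell
# Write a Dockerfile whose image contains nothing but your own binary.
# FROM scratch starts from a completely empty base - no distro, no shell.
cat > Dockerfile <<'EOF'
FROM scratch
COPY myapp /myapp
ENTRYPOINT ["/myapp"]
EOF

# Then build it and push to your own internal registry instead of Docker Hub:
#   docker build -t registry.internal:5000/myapp:1.0 .
#   docker push registry.internal:5000/myapp:1.0
```

Because the image contains only what you copied in, there's no upstream base layer left to trust.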
I really like the mockups in this post. It's so rare that you see discussion of authentication UI/UX.
Speaking of, are there any UI/UX design resources (books, tutorials, blog posts, etc.) for modern authentication flows (2FA, OAuth, etc.)?
I've been searching for a while and can't seem to find anything substantial. All the design discussion I can find is around protocols/handshakes/security, but nothing about UI/UX best practices and possibilities.
We did mockups of the whole UX for WebAuthn before implementation, to get the concept right:
https://meta.discourse.org/t/webauthn-support/126454?u=falco
I abandoned Docker Hub when they stopped allowing you to manually set up automated builds and instead required that you link your GitHub account, which gave them too many privileges. There really isn't a need for Docker Hub since there are many alternatives, including GitLab.
TOTP is vulnerable to phishing.
U2F is not.
This says nothing about designing the solution. Very misleading title.
Agreed, it's basically a feature announcement combined with a little bit of user-side how to.
Agreed, disappointing. Especially from Docker, I was expecting better.
Ok we've taken out the bit about design above.