I don't like Docker or Podman
blog.liw.fi
1. I got hurt by Docker.
2. I don't want to learn Docker.
3. I got hurt by Docker more.
4. I don't trust DockerHub.
5. Podman is just like Docker.
6. I prefer VMs because I understand them, even though I know they are slower.
7. Don't try to explain Docker to me.
A rant, nothing to see here, move along.

I fully agree, and I don’t know why this is at the top. I find these “this thing sucks and I won’t bother explaining why” posts so utterly useless.
I actually use nspawn for a few things.
Why would I want to transition to Docker or Podman?
Edit: "I’ve used systemd-nspawn fairly extensively to run things in containers. It’s a much simpler container system than Docker, and I do not find it objectionable."
If your VM is slower, why are you emulating?!
There is an 8...
8. Docker doesn't provide complete separation
And a 9...
9. Just use PaaS
In the author’s defence, I think you didn’t understand:
> If your VM is slower, why are you emulating?!
Which I think refers to this part of the article:
> They’re slower to set up, and start up a little slower too
And the part about the documentation is missing.
I think the community would be better off if you had a less condescending attitude and let people themselves decide if the submission is interesting or not.
If you don't like the submission then downvote and carry on.
So here's the thing: Docker is the best way we have to document how to set up a project/application in a way that can be repeated on arbitrary computers. The alternative was "have a README where you list all of the things you need to do/install in order to get this project running".
That failed. Miserably.
Developers always assumed things like "well naturally, if you're playing in the XYZ space, you've already got meson installed. What, do you expect me to teach you basic arithmetic in this README too?" Developers across the board, across programming subcultures, showed themselves unable to get past this sort of thing.
So now we have Docker. You may not like it, but this is what peak install guide looks like. An unambiguous file that describes the exact shell steps required to get the piece of software running, starting from a base distro. The developer can't omit any steps or the container won't work on their machine.
It sucks that this Hegelian situation calls for such a draconian solution, but that's where we're at. Developers as a whole can't be trusted to handle this on their own. If you don't have a better solution to this problem, I'm not sure there's much point in complaining.
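To make that concrete, here's a minimal sketch of the idea; the meson toolchain and "myapp" are purely illustrative, not anything from the article:

    FROM debian:bookworm-slim
    # Every dependency is spelled out; nothing is assumed preinstalled.
    RUN apt-get update && apt-get install -y --no-install-recommends \
            meson ninja-build gcc libc6-dev \
        && rm -rf /var/lib/apt/lists/*
    COPY . /src
    WORKDIR /src
    # The exact build steps, runnable on any machine with Docker.
    RUN meson setup build && ninja -C build
    CMD ["./build/myapp"]

If any step is missing, the build fails on the developer's own machine too, which is the whole enforcement mechanism.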
I think for the development story, we had vagrant in the 2010s which IMO provided a much better experience for developers to set up reproducible dev environments.
Docker excels at bundling up all the dependencies of a piece of software for deployment.
Devcontainers definitely work these days, but I miss vagrant.
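For those who never used it, the workflow was roughly this (a sketch; the box name is just an example):

    # The recipe lives in a Vagrantfile checked into the repo,
    # not in a pre-built machine image.
    vagrant init debian/bookworm64   # writes a Vagrantfile for a base box
    vagrant up                       # boots the VM, runs any provisioners
    vagrant ssh                      # drop into the reproducible dev environment
    vagrant destroy                  # throw it away when done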
I disagree completely. Vagrant worked for your org or your setup, but in my experience people hardly ever delivered the recipe or the setup steps.
Yes, sometimes the Vagrantfile had a few lines of provisioning, but most people shipped a pre-built image with stuff already installed. It could have been done properly, but it wasn't being done.
Speaking as someone with similar views to the OP: my “better solution” is to write an idempotent shell script targeting a specific Debian release/ISO that handles system setup end-to-end.
It is for nearly all intents and purposes functionally equivalent to docker, and it’s pretty trivial to port to Dockerfile in minutes. I use docker plenty for work and am fully aware of its benefits. Like the OP, I just dislike Docker’s iptables fuckery and CLI design as a matter of personal preference.
Of course, context is king, and I only do this for things I’m designing and running myself - but the larger point I’m trying to make is that you can do the whole “unambiguous file that describes the exact shell steps required to get the piece of software running, starting from a base distro”-thing without Docker in the picture.
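A minimal sketch of what I mean, assuming Debian bookworm and an illustrative nginx setup (all names are placeholders):

    #!/bin/sh
    set -eu
    # Refuse to run against the wrong release.
    . /etc/os-release
    [ "${VERSION_CODENAME:-}" = "bookworm" ] || { echo "expects Debian bookworm" >&2; exit 1; }

    # apt-get install is effectively a no-op when the package is already present.
    apt-get update
    apt-get install -y --no-install-recommends nginx

    # Only rewrite (and reload) when the config actually changed.
    if ! cmp -s myapp.conf /etc/nginx/conf.d/myapp.conf 2>/dev/null; then
        install -m 0644 myapp.conf /etc/nginx/conf.d/myapp.conf
        systemctl reload nginx
    fi

Run it twice and the second run changes nothing; that's the idempotence that makes it comparable to a Dockerfile.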
Fully Agree.
Dockerfiles were an excellent way of sysadmins getting developers to write down their build steps.
The fact that they're not deterministic is mitigated by the fact that we can just copy tarballs around (all a Docker image is, after all, is a pile of tarballs in a tarball).
In theory, Nix should be slightly better, but it has too many rough edges for now.
I think there is a point to the author's remark on user-friendliness.
It should be possible to improve the containerization experience by providing a better UI, and maybe even a different syntax for Dockerfiles.
All of the author's complaints are correct, of course, but the question is simply whether the juice is worth the squeeze. For the most part, I think, it is.
Exactly
The most evil pattern is when application developers force users to install Docker to use their applications.
Why force? A Docker image is a bunch of files wrapped in a compressed archive. If there's a container file, there's a manual for which dependencies are used and how to set it all up. Silly argument. People use containers because they're tired of managing a dozen configurations.
As a tangent… do you mean "developers of open source software available for free download"?
Forcing a software development dependency manager (e.g. pip, npm, etc) is even worse.
Both are bad, compared to Linux distro packaging.
It may be easier to run apt-get install or yum install as a user, but having done both, I'd say that creating new OS-level distro packages is a lot harder work than setting up Dockerfiles, and the result is likely to run only on the specific system for which it was built; it will need to be rebuilt, and maybe modified, every time you upgrade the OS. Docker images will run pretty much everywhere and tend to stay stable for a long time, modulo security upgrades.

Distro packaging is great for managing things that come with a distro, where all the spec files are already written. That's likely not your webapp. Plus it's not a layer cake like Docker. The layer cake is a huge advantage that lets you leverage the expertise required for the different layers of your app, or reuse publicly available base images.
Pretty sure, based on this blog post, that if the app required you to run a VM, that would be much more evil.
Why? You can treat Dockerfile as documentation for the most part
Unnecessary complexity makes debugging and understanding the system much harder.
This is particularly common with CLI tools written in some languages. I was looking at Antora the other day (not intending to single this project out, it's just the one that came to mind). I found two ways to run it:
1. By installing Node: https://docs.antora.org/antora/latest/install-and-run-quicks...
2. By running it in a Docker container: https://docs.antora.org/antora/latest/antora-container/.
The amount of complexity here is shocking. This is a tool that could just as well be a single binary, with the only dynamic linkage being to libc and maybe OpenSSL.
This also means that if something goes wrong, black-box debugging tools like system call tracers are much harder to use. I rely on system call tracers all the time, and it really sucks when they stop working.
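For reference, roughly what the two documented paths look like (from memory of the linked docs; the package and image names are assumptions and may have drifted):

    # 1. Via Node: install the CLI and the site generator globally.
    npm i -g @antora/cli @antora/site-generator
    antora antora-playbook.yml

    # 2. Via a container: mount the playbook repo into the image.
    docker run --rm -v "$PWD:/antora" antora/antora antora-playbook.yml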
It's only "complexity" if you aren't super comfortable with Docker; it's easy to do everything you describe with Docker. For me it's actually easier to debug in a self-contained system, because even a binary can have issues with dynamic linking and the like. So for me the complexity is reversed. I don't want to pollute my actual machine with stuff when a Docker container is just as easy to use, and I don't want my distro's OpenSSL to be slightly incompatible with something the package is using. A Dockerfile removes all of that.
Well, distributing CLI tools as Docker images came about in part due to environments like Node, which made it harder to ship a single statically linked binary.
Imagine, like, your source control tool being shipped as a Docker container.
I agree, yes: CLI tools should not come in Docker packages. I'd also blame Python for that; it's harder to package.
Don't get me wrong, I absolutely would love for everything to just be statically linked and packaged in a single binary (an approach that usually works great on Windows). And you are right that "overusing" Docker is kind of a trend, but I think it's due to a problem (packaging apps on Linux) rather than being a problem by itself.
Because despite most devs' opinion on the matter, we don't live in a Linux-only world.
Your Linux VM instance is Linux, and I don't think it's an unreasonable request to run a VM on your desktop machine, using the virtualization software provided by the OS.
You have access to Docker on both Windows and macOS.
Under virtualization (or emulation, if amd64 on arm64). May as well spin up that VM.
I'm not sure what your point is. Virtualized or not, you can run Docker on any mainstream operating system, on any mainstream hardware, and get near-native performance.
Outside of development, running containers on macOS/Windows doesn't make sense. And on M-series, macOS is using emulation via Rosetta, not virtualization.
Only if there is no arm variant of the image you want to run.
Even though I despise containers, this is not a good take for open source. They developed an app, they got to decide how it is distributed.
Hell, even if you're a paying customer: if a product has only Docker as an installation method and the seller is not interested in providing .debs and .rpms, go find another solution.
I like your post. I still prefer zones and jails over anything in the Linux ecosystem; building and administering an on-prem k8s cluster has only reinforced this opinion.
Posts where the author proclaims they don't even want to try and understand why something is the way it is just lose all credibility to me.
The author self-reportedly has been using Linux for decades. After having taken the time to understand why something is the way it is for a hundred things or more, you eventually lose interest in doing so when the thing in question is a shitty experience, because knowing why doesn’t change its shittiness (pardon my French).
> I prefer to use virtual machines. They’re slower to set up, and start up a little slower too [...]
LXD containers solved a lot of the problems inherent to virtual machines for me, though I don't like their reliance on global configs (something like Docker Compose for LXD containers would be ideal).
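For the curious, LXD's system containers do feel closer to lightweight VMs; a quick sketch (the container name and image are illustrative):

    lxc launch ubuntu:22.04 dev      # a full distro userland with systemd inside
    lxc exec dev -- bash             # shell in as if it were a VM
    # Expose a service without Docker-style port-mapping syntax:
    lxc config device add dev web proxy listen=tcp:0.0.0.0:8080 connect=tcp:127.0.0.1:80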
> Docker is very popular software to build Linux container images and running software in them. I don’t like it.
> Podman is a re-implementation of the concept, command line interface, and file formats that is very close to identical to Docker. I don’t like that either.
> I’ve used *systemd-nspawn* fairly extensively to run things in containers. It’s a much simpler container system than Docker, and I do not find it objectionable. I built a CI engine on top of it. But I don’t use it either, any more.
This person is actually insane, but huge respect for doing things differently!
I tried using systemd-nspawn as an alternative to Docker (not because I don't like Docker, but because trying alternatives is cool) a few years ago and I failed miserably. The docs were hard to grasp and at the time the concepts of namespaces and cgroups were a bit obscure. I guess there are more docs and blog articles making use of systemd-nspawn nowadays, I'll have to look into it.
This showed me how great and easy and well designed Docker was as an abstraction layer though. I know purists don't like it, but it made reliable deployments standard and easier than the alternative of not using Docker.
> The command line interface is really badly designed. It’s ugly, hard to learn, difficult to remember, illogical, inconsistent, and just makes no sense to me at all.
I wonder if the author could elaborate here.
I wish they would elaborate a bit more on each point. I'm quite happy with Docker, so I'm especially interested when someone has a negative opinion of it. But here I feel like there's no meat to the article.
I kinda disagree with most of the points (there's a lot I don't love about Docker, but I don't see it being worse than other tools), but I 100% agree on the networking.
The networking is badly described and unintuitive. The number of people who were surprised by the firewall rules messing up their existing firewall setup is very high. It also just grabs a subnet, and you have to dig into why it chose that one and not another; I'm not sure how it handles conflicts. It's a bit of "it works until it doesn't".
I didn't have many "wtf just happened?" moments with docker, but 100% of them were network-related and half of them were hard to troubleshoot.
That's my impression, too.
While I agree that developers should support and have documentation for hosting without Docker, I think the arguments in the article are very poor.
It's completely fine to dislike a technology - hell if I don't! - but here it seems like they are arguing against Docker just for the sake of it.
Not the author, but:
`cmd containers` does nothing; `cmd images` does `cmd image ls`.
`cmd rm` does `cmd container rm`, but `cmd ls` does nothing: it's of course `cmd ps`.
There's `cmd rmi`, but no `cmd lsi`.
And there's no easy way to extract the resultant hash from a `cmd build`; you can use a tag, I suppose.
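For what it's worth, the last point does have workarounds these days (a sketch; `myimg` is just an example tag):

    # --iidfile writes the image ID (sha256:...) to a file:
    docker build --iidfile /tmp/iid -t myimg .
    cat /tmp/iid

    # -q/--quiet suppresses build output and prints only the image ID:
    id=$(docker build -q .)
    echo "$id"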
Thank you for the examples.
Maybe I'm in the minority, but I've never used those commands... I'd rather write `docker container ls` than an obscure short-hand alias.
Funny enough, ls is short for list :)
Very true, but it's a well-known command so I wouldn't qualify it as obscure! At this point everyone expects `ls` to display a list of something.
Eye of the beholder. There are a ton of Windows folks who use Docker and wouldn't have been exposed to ls before; if they used a command line at all, they were using dir in DOS.
Granted, the Linux subsystem stuff has expanded its scope, but there are a few folks I know who have never touched Linux and are using Docker to run stuff.
This stood out to me, too. Git's CLI was very confusing for me to learn, but Docker's metaphors made sense: an image, like a file system image; a container, something with walls; exec to execute commands; rm to delete. Some of the networking stuff took a little while to learn (exposing ports to other containers vs. outside of Docker), but I think that's necessary complexity.
Podman being mostly compatible with Docker was a wise choice. If you run rootless, there's no way to break the firewall/network the way Docker can.
With podman in mind, one ought to try buildah and skopeo. Again, buildah can run Dockerfiles, but you are not constrained to the weird Dockerfile syntax.
I agree with the general point: there are a bunch of things I don't like much about Docker. (Podman inherits the same issues, since it deliberately copies the interface.) Definitely agree on the licensing thing. It's quite a trap if you have some copyleft surprise, and they could do more, like requiring an SPDX identifier on each repository/image.
I've used systemd-nspawn before. I didn't find it notably simpler and did find lots of weird edge cases where things didn't work (most recently something between ~249 and ~253 giving 'permission denied' errors on mounting /proc into new sub-namespaces within it, boy was that not a fun or easy time to try to work out). Maybe that makes their final point a fair one, that VMs avoid a lot of this without so many awkward subtleties.
> I prefer to use virtual machines... They also behave more like a real Linux system running on bare metal hardware than containers do.
People don't seem to be noting this here yet, but if this is why you "prefer" a VM to a container, you don't really understand what containers are used for.
People use containers for different things, some of them for the same things they used VMs in the past. This has nothing to do with understanding, just your or the author's preference.
If you are looking for init scripts / systemd / kernel modules / etc. in a container, that's not how you should be using a container.
You know, I don’t like it either. I don’t like the network config or the way the commands work. But I do love the overlayfs, and the relative ease of documenting the installation.
I love OrbStack, however. Every container gets its own IP and hostname, so there's no need to map ports.
It feels like the Docker people didn’t really understand the complete network stack.
I've kind of abused Docker/OrbStack to create easy ad-hoc chrooted containers. They let me just try stuff out, and they get shut down when I'm no longer in them. Check https://github.com/jrz/container-shell
Docker is very "less is more" and it shows.
Containers shine when you are deep into enterprise territory and testing one application means booting five more, plus four Postgres and three Oracle DBs and a JMS node, with plumbing and so on. You don't want to figure that out with VMs, and you're likely to deploy to application servers or something Kube-like anyway.
So I fully agree with TFA. It's a nuisance, but certain niche situations that most webstuff devs never encounter are exceptions.
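That multi-service plumbing is exactly what Compose-style files were made for; a hedged sketch, with all service names and images as placeholders:

    # compose.yaml -- illustrative only
    services:
      app:
        build: .
        depends_on: [db, queue]
      db:
        image: postgres:16
        environment:
          POSTGRES_PASSWORD: example
      queue:
        image: rabbitmq:3

One `docker compose up` then boots the whole constellation for a test run.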
> I prefer to use virtual machines. They’re slower to set up, and start up a little slower too, but they’re convenient for me, and I understand them well. They also behave more like a real Linux system running on bare metal hardware than containers do. There are fewer limitations that get in my way.
> This blog post is not a request for you try to explain Docker, Podman, or containers to me, or for you to tell me how I can learn more about them. I am not interested.
Then I will simply tell you that you don't understand virtual machines well either, even though you said you do. I was going to explain Podman to you, but I won't. I might not understand virtual machines well either, FWIW, but I haven't claimed that I do.
For anyone else reading this: Podman has a nice, clean design that, unlike Docker, is free from a required daemon or anything like Docker Hub. It can be tricky to use, because it makes you choose between rootless and rootful, as well as between non-remote and remote operation. Once you get going, though, it is quite likable, and it's quite impressive how powerful rootless containers are. I recommend trying them on Fedora or Rocky Linux with SELinux, and reading some articles. Here are a few:
- Podman rootless tutorial https://github.com/containers/podman/blob/main/docs/tutorial...
- With a socket activated container, you can have a container listen on a socket while having a --network of none https://www.redhat.com/en/blog/socket-activation-podman
- Using Buildah to build images in a rootless OpenShift container https://github.com/containers/buildah/blob/main/docs/tutoria...
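To give a minimal taste of the rootless mode mentioned above (ports above 1024 need no root, and the CLI mirrors Docker's):

    podman run -d --rm --name web -p 8080:80 docker.io/library/nginx:alpine
    podman ps                        # same subcommands as docker
    curl -s http://localhost:8080 | head -n 3
    podman stop web                  # --rm removes the container on stop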
Sysadmin, the old way.
If encapsulating an entire operating system into a single file (ISO, VDI, VMDK, VHD, HDD) is more comfortable, then a potential compromise might be SIF: https://github.com/apptainer/sif
You get the performance of containers without the complexity of microservices.
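A short sketch of what that looks like with Apptainer (the image name is just an example):

    # Convert an OCI image into a single SIF file...
    apptainer build alpine.sif docker://alpine:3
    # ...which you can then run, copy, or checksum like any other file.
    apptainer exec alpine.sif cat /etc/os-release
    sha256sum alpine.sif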
I always wonder what kind of setups people have where docker destroys their network config. I have used Docker on so many systems over so many years and several distros and not once have I encountered that. Same with people who say that systemd made their system implode and wayland makes their baby cry. What are these people doing?
Docker didn't support nftables for years (idk if they even support it now). I moved my personal machine to Podman because of it!
Also port forwarding in Docker (and Podman!) still bypasses ufw/other firewalls, which is really annoying and surprising (though it doesn't in rootless).
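Concretely, the surprise looks something like this (illustrative; `<host-ip>` is a placeholder):

    # ufw says the port is blocked...
    ufw deny 8080/tcp
    # ...but publishing a port makes dockerd insert DNAT rules in the
    # nat table's DOCKER chain, which is consulted before ufw's rules:
    docker run -d -p 8080:80 nginx:alpine
    iptables -t nat -L DOCKER -n
    # From another machine, the port is reachable despite ufw:
    curl -s http://<host-ip>:8080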
A big part of the reason for Docker's existence is that it's more lightweight than running a VM. That's on something like the first page of a Docker tutorial.
It's OK to struggle to learn Docker; I'll admit it took me a while to understand the benefits.
Also, there's no need for the font to be bold; we can read a normal font.
I had to use Docker at a job or two, I think around 2018. I hated it.
One class of issue: it made interacting with the file system slower, sometimes by orders of magnitude. Stuff like watching files, or statting a large number of files, didn't have the same performance characteristics. So you have a situation where you (probably) already have too many components, too complicated or poorly understood to install them all on a developer's machine, but they work in this exact machine snapshot, and now you have to figure out what process dared to stat a few thousand files.
Docker was also just always… there. In the menu bar. Doing stuff. Running system-wide. Updating itself, constantly. Like it’s Steam or Battle.net (which for some reason downloads updates to Warcraft III, an old game, multiple times a day on my kids’ PC, and sometimes breaks and you can’t play the game; this is the level of enshittification we are at).
The command-line experience… similar to git (that is, poor). There’s an underlying conceptual model that’s sort of half abstracted away by the tools and hard to find a good explanation of.
Developer tools like this have a tax: You spend at least half a day a week Googling for issues with them, forever. Same with NPM. All it takes is five such tools in your stack and every weekday morning is gone. And that’s disregarding the fact that you were probably in the middle of actually trying to get something done.
> It made interacting with the file system slower, sometimes by orders of magnitude. Stuff like watching files, or statting a large number files, didn’t have the same performance characteristics.
Just as an FYI, this is only an issue when running on Windows or Mac. They set up a VM behind the scenes to run Docker in, and the VM doesn't have direct access to the filesystem. On Linux, it's just namespaces, and it's native performance.
> Docker was also just always… there. In the menu bar. Doing stuff. Running system-wide. Updating itself, constantly.
lol wtf? This tells me everything I need to know. If that’s what “docker” is to you, sounds like a major skill issue.
I find that although Docker is heavier, it integrates well with everything and is far more practical to use, so I stick with it. Trying to switch gave me a lot of headaches and burned a lot of time.
Podman very much reminds me of Subversion.
Subversion was intended to be a better version of CVS,
which it certainly delivered, but no one really stopped to think whether that was such a good thing in the first place.
It’s like r/linux all over again. For some reason people really think others want to hear about their failed journey. It’s weird.
Don't use any containers or even VMs then.
Just dedicate one physical machine for one application. Problem solved ;)
Don't like Docker either. Why? As an absolutely unnecessary entity, it fails Occam's Razor. It's overengineered. It's clumsy and slow. It uses a lot of resources and leaves a lot of garbage in the filesystem. It's not secure. It's overhyped. docker-compose is an abomination. The same goes for Kubernetes.
OP's next post title: 10 techniques I used to make it to HN front page
"The best way to get a good answer is to post a confidently wrong one" at work here...
Who cares? Not every software must be likeable for everyone.
Cool story but at least try to give some argumentation when you say stuff like:
>The design of the language in Dockerfile is ad hoc in a bad way. It’s difficult to understand, for me, and easy to make mistakes.
Because that reads like a skill issue to me
Not writing all software in machine code is also a skill issue.
The question is not if it's a skill issue or not, but if the complexity of the solution matches the complexity of the problem domain. And also: if there are easier and clearer ways to reach the same goals.
Dockerfiles provide a small set of operations to create a reproducible software artefact. I have been doing this for a while, I’ve seen POM files, Groovy Pipelines, endless shell scripts, Makefiles, and lots more. From all those, Dockerfiles do not seem like an absurdly complex solution to the problem to me.
Bingo. Which is why the k8s fad was inappropriate for almost everyone out there. Your app could have run under cron once an hour on an i586, but you had to spin up k8s because it was "cool".
I really hate "skill issue" as a response
agree, not everyone has / had a mentor
Dockerfile is one of the worst parts of docker — it's a primitive shell-like DSL that didn't have to exist and that feels like it was designed by a person with a couple of hours of experience in writing shell scripts.
Instead of providing a set of separate tools to be glued together with a proper shell or a full programming language, they designed this nonsense that can't even do 1/10th of what busybox is able to do, and have been in the business of adding the missing pieces (like `COPY --chmod`) for the past 10+ years.
It has taken them about a decade to add HEREDOC support, for example. Most Dockerfiles still use

    RUN foo && \
        bar && \
        baz

instead of

    RUN <<END
    set -eu
    foo
    bar
    baz
    END

I avoid Dockerfiles and prefer using buildah for building containers. Since they're all using the same specification, it doesn't matter what runtime is then used to run them: it can be Docker, Podman, k8s, whatever.

Here's the official example of building a lighttpd container:
https://github.com/containers/buildah/blob/92015b7f4301d7eb8...
You can eschew bash and call these commands however you want — from a python script, or Go, or even assembly.
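A condensed sketch of that style (not the linked example verbatim; the package and names are assumptions):

    #!/bin/sh
    set -eu
    # Build imperatively: plain shell instead of Dockerfile syntax.
    ctr=$(buildah from docker.io/library/alpine:3)
    buildah run "$ctr" -- apk add --no-cache lighttpd
    buildah config --port 80 \
        --cmd "lighttpd -D -f /etc/lighttpd/lighttpd.conf" "$ctr"
    buildah commit "$ctr" my-lighttpd   # produces a normal OCI image
    buildah rm "$ctr"                   # clean up the working container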
Fair, but there are some specific mechanisms that are particularly poorly documented and designed; exposing GPUs and CIFS/NFS mounts are a couple of examples.
> skill issue
liw is Lars Wirzenius, so I assume we can rule that out.
Trivial things should not require skill.
Honestly, the title should be "I don't like Docker because I don't know how to do certain things, so I prefer to do things the way I know better".
... "because I don't know how to do certain things and don't want to spend any time at all to learn how" ...