My personal reasons to not run my Nginx reverse-proxy inside Docker
ewaldbenes.com
I do use Nginx in Docker for (personal hobby) dockerized applications... but I didn't fully understand some of your reasons:
* AFAIK "docker exec NGINX_DOCKER nginx -s reload" works to 'hot reload' configurations
* You're right that an "in-place NGINX binary upgrade" won't work. The bright side of this problem is that the NGINX container binary is immutable and can easily be "rebooted" if any corruption occurs (instead of having to reinstall everything). For hobby websites (including the condo mail server), the downtime is acceptable (not much traffic, not really 24/7, fast restart, only a few NGINX releases per year...)
For me, having an isolated (dockerized) NGINX is easier to manage (like a dockerized mail server) because it limits the number of processes "on bare Linux with files everywhere" and makes it easier to back up/replace/upgrade (just start a new container with a new version). YMMV
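For reference, a minimal sketch of that hot-reload flow; the container name nginx-proxy is a placeholder for whatever your container is called:

    # Validate the new config inside the container first...
    docker exec nginx-proxy nginx -t
    # ...then tell the master process to reload workers gracefully.
    docker exec nginx-proxy nginx -s reload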
* Docker containers can be modified at runtime; it is just more involved to do so. Hot-reloading inside Docker works best if you mount a host directory (see the sketch after this list).
It drives the container philosophy ad absurdum. When I serve paying customers I refrain from doing everything that is merely possible and try to stick to what appears to be the simplest thing.
* I haven't encountered a corrupted Nginx binary so far, and I think it is very unlikely to happen. I consider my Nginx binary "almost immutable" even without Docker. Since I am the only one working on my VPS, I also know who to blame if that's not the case :D
I see Docker as an amazing fit for isolating business applications. They tend to have many dependencies (often less stable than evergreen libraries like libc) and get continuously updated and deployed.
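To illustrate the bind-mount point above, a sketch with illustrative paths and names (nginx:1.27 stands in for whatever version you pin):

    # Keep the config on the host so edits survive container replacement.
    docker run -d --name nginx-proxy \
      -p 80:80 -p 443:443 \
      -v /srv/nginx/conf:/etc/nginx:ro \
      nginx:1.27
    # After editing /srv/nginx/conf on the host, reload without a restart:
    docker exec nginx-proxy nginx -s reload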
This is a good analysis.
However, from a quick skim of the article, all of the cited issues with Docker as such are in fact solved by services such as ECS [0].
Thus, the question of whether to manage nginx directly may be more of a business decision (am I dealing with a cloud provider?) than a technical one (are these requirements delivered by a service?).
Just run it in LXC. Then the container is not immutable, but the process will still be isolated with its dependencies. And infrastructure as code, with Ansible etc., is all possible (rough sketch below).
Also, I can't really follow the talk about Docker itself being a dependency. I guess you would have many other containers running as well; if nginx were the only thing inside Docker, that wouldn't make much sense...
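For the curious, a rough sketch of that LXC route, assuming LXD's lxc client (container and device names are illustrative):

    # Create a system container and install nginx inside it.
    lxc launch ubuntu:22.04 nginx-proxy
    lxc exec nginx-proxy -- apt-get update
    lxc exec nginx-proxy -- apt-get install -y nginx
    # Forward host port 80 into the container via an LXD proxy device.
    lxc config device add nginx-proxy http proxy \
      listen=tcp:0.0.0.0:80 connect=tcp:127.0.0.1:80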
The section about "Docker as an additional dependency" didn't convey the message I intended.
It is about:
Everything that something depends on makes it a little more likely to introduce failures, which I want to minimize where reasonably possible.
I corrected the section.
Oof, in his reasons to use Docker (which he seems to use as a synonym for containerization), he misses probably the most important one:
6. Provides an additional layer of security between the application and its host
As for his availability concerns, I don't see why running or not running Docker would affect this problem. You can certainly build highly available nginx reverse proxies in Docker, and you can do in-place upgrades with no downtime. This has been solved for a very long time.
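One way to sketch such a no-downtime swap (my assumption, not necessarily what the commenter had in mind): use host networking plus a "listen 443 ssl reuseport;" directive so the old and new containers can share the port, then drain the old one:

    # SO_REUSEPORT lets both masters bind :443 while the swap happens.
    docker run -d --name nginx-new --network host \
      -v /srv/nginx/conf:/etc/nginx:ro nginx:1.27
    # Once nginx-new accepts traffic, shut the old container down gracefully.
    docker kill --signal=QUIT nginx-old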
Docker (and containers in general) adds namespacing, but not security, for applications. If anything, docker can add MORE attack vectors.
No one should use containers primarily for security ever.
Correct. Docker adds minimal isolation, basically what Linux provides through namespaces and cgroups, which is quite porous and lacks hard resource limits. It's definitely not even close to the security and resource isolation guarantees provided by virtualization. If one wanted manageability and isolation, they could run k8s with Kata Containers, which uses virtualization instead of runc. If it were me running one little app on a pair of HA boxes on the world wild web, I would probably use FreeBSD and jails or runj.
> If one wanted manageability and isolation, then they could run k8s and Kata Containers which uses virtualization instead of containerd.
There is a world that exists outside your own use cases (and GKE). EKS provides isolation by default, just as one example I can pull from my own career. You can even get warnings out of the box if any process gets access to the host, and the typical convention is to lock these hosts down entirely. There is no need to access the machine at all, and everything is isolated within it.
Generally the type of infra advice AWS provides is fairly sound depending on what you want to pay, but I assure you what you just said is not true everywhere.
I'm no expert in server-security hardening like SELinux.
As far as my understanding goes, containers per se are not a security mechanism. Rootless containers are about as good as rootless processes with chroot (see the sketch below).
So this comment resonates with my understanding.
My feeling is that containers give you more opportunities to introduce security holes if you aren't diligent.
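As one concrete data point on the rootless side, nginx can run fully unprivileged in a container; a sketch using the nginxinc/nginx-unprivileged image, which listens on 8080 instead of 80 (the tag is illustrative):

    # Run as UID/GID 101 (the image's nginx user); no root inside the container.
    docker run -d --name nginx-proxy \
      --user 101:101 \
      -p 8080:8080 \
      nginxinc/nginx-unprivileged:1.27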
Sorry but what you said makes absolutely no sense. The security implication I am talking about is that in a typical container the application cannot escalate privilege out of the container and touch the host, with the exception of obvious things like shared file systems, etc. This is a known benefit of containerization and not at all controversial.
The thing about your comment is that it seems that you believe containers are something you use for added security, and that is very much not the case.
But I don't disagree about rootless containers being more secure than rooted ones, just as any process not running as root has fewer privileges than one running as root.
Bold of you to assume that most devs are running containers with a non-root user.
Devs maybe, but the author is talking about production environments which are definitely moving to rootless.
'Production' is an interesting appeal to authority, if I may risk using terms improperly.
My 'toy' controller/replica are production, but due to planning they carry no weight. Two people care, in the 'oh, that's mildly inconvenient' sense, when they break. Everything moves on.
To lean into hyperbole a bit: for every shop that's rootless, there are nine that are rootful.
I don't say any of this to be defeatist, more... realistic. There's:
1) what people say they do
2) what people actually do
These can be, but don't have to be, the same. It's surely trending positively, but I'm also not interested in arguing about window dressing while the house is on fire.
What does SELinux or AppArmor look like in this hypothetical, disabled per usual? Plenty of work to do. No pats on the back yet.
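For anyone auditing their own hosts, a quick way to check what's actually enabled (commands vary by distro):

    # SELinux status (RHEL family); prints Enforcing/Permissive/Disabled.
    getenforce
    # AppArmor status (Debian/Ubuntu); part of the apparmor-utils package.
    sudo aa-status
    # Which security options the Docker daemon reports for itself.
    docker info --format '{{.SecurityOptions}}'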
I say all of this a bit heavily - not OP. Closely related to some heated talks I've had this week. It's easy to bike-shed endlessly.
Rootless has been the standard convention for a long time. If you're in any industry that gets regularly audited, this is one of the first things they look for.