The demise of Docker and the rise of Kubernetes
The real world is still very much using Docker. In fact, a few companies I've interviewed with this year aren't even using any kind of container setup and they're doing just fine and making money.
This kind of article just adds to the Jonesing-for-shiny-things mentality that really doesn't do the engineering world any favours.
Pretty much
I worked for an e-commerce company, a leader in its market, that used no containers for the past 3 years.
CI produced agnostic packages, which were deployed with Ansible to any environment and auto-scaled using Golden AMIs, EBS Snapshots and AWS ASGs. Almost the same concept, but way less complex.
We contemplated moving to ECS/EKS to stay "current" and to make test/development environments easier to create, but in the end, with a small team, it would add an unnecessary burden and "re-invent" the wheel on something that was working fine for the time being.
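For anyone wondering what that bake-and-roll flow can look like, here's a rough sketch with boto3; the instance ID, AMI, launch template and ASG names are all made up for illustration:

```python
import boto3

ec2 = boto3.client("ec2")
asg = boto3.client("autoscaling")

# 1. Bake a "golden" AMI from a builder instance that CI/Ansible already
#    configured (hypothetical instance id).
image = ec2.create_image(
    InstanceId="i-0123456789abcdef0",
    Name="myapp-golden-2019-10-22",
    Description="App and dependencies baked in, ready for the ASG",
)
ec2.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])

# 2. Publish a new launch template version pointing at the fresh AMI
#    (hypothetical template name).
ec2.create_launch_template_version(
    LaunchTemplateName="myapp-lt",
    SourceVersion="$Latest",
    LaunchTemplateData={"ImageId": image["ImageId"]},
)

# 3. Tell the Auto Scaling group to use the latest version; instances launched
#    from now on (scale-out or replacement) come up from the new AMI.
asg.update_auto_scaling_group(
    AutoScalingGroupName="myapp-asg",
    LaunchTemplate={"LaunchTemplateName": "myapp-lt", "Version": "$Latest"},
)
```

No containers anywhere, yet you get the same "immutable artifact, stamped out on demand" property.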
I can't help but feel that the drive towards docker-style containerisation was fueled by nothing more than the fad for developing on OS X. If you've got an environment where the devs are working on the same OS as production, it becomes much easier to adopt OS packages (whatever flavour that is) as the deployment mechanism, and as a side benefit it becomes easier to set up the dev environment in the first place.
I've done this with Debian, it worked great.
I work on exactly the same OS as production (Ubuntu 18.04) and still need (not want) containers to do my job. One project uses ROS along with different detectors with incompatible dependencies. One takes forever to build from source, and it's way easier to ship around images by pulling from a repo than transferring binaries and libs.
And even if I didn't have those constraints, I'd still use containers in production because it just makes testing and deployment so much less of a headache.
I think it is still a good idea to use the same environment on your own machine.
This doesn't mean it's wrong to use containers to isolate various projects from each other and your main setup.
Edit: I highly recommend using lxd on Ubuntu. It works very well if you have one persistent container per project or large component in a project.
> CI produced agnostic packages, which were deployed with Ansible to any environment and auto-scaled using Golden AMIs, EBS Snapshots and AWS ASGs. Almost the same concept, but way less complex.
I like this, I wish more orgs would work like that.
Kubernetes feels mostly like a very opaque layer to abstract the underlying cloud vendor... Yet the cloud vendors are managing to get their lock-in back into kubernetes.
Serious question: while I totally agree with most of this, why bother building e.g. RPMs or DEB packages when you can just stamp out a docker container? I've had a lot of success using docker as _just_ a packaging mechanism and nothing else. Is there some fundamental bit I'm missing for why that's a bad idea?
At my last company we implemented a couple of different services as just an AWS provided AMI with a small init script that would install docker if necessary, pull our container, and run it with the ports mounted properly. That worked great in ASGs and totally obviated the need to think about machine provisioning beyond 3 or 4 simple steps that everyone understood. I feel like Docker gets a bad rap _because_ of Kubernetes, when in fact Docker on its own solves a real problem in a pretty elegant way.
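The "pull our container, and run it with the ports mounted properly" step is small enough to show; a minimal sketch with the Python docker SDK, where the registry/image name and ports are invented:

```python
import docker

client = docker.from_env()

# Pull the app image from the private registry (hypothetical name) and run it
# detached, publishing container port 8080 on host port 80 and restarting the
# container automatically if it crashes or the host reboots.
client.images.pull("registry.example.com/myapp", tag="latest")
client.containers.run(
    "registry.example.com/myapp:latest",
    detach=True,
    ports={"8080/tcp": 80},
    restart_policy={"Name": "always"},
)
```

That, plus an AMI with docker preinstalled, really is the whole "orchestration" layer for a lot of ASG-based setups.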
Docker isn't just a packager--it adds a whole nother layer of runtime to your app.
Access to network/disk/other IO, logging, init systems, configuration, etc. are all different under Docker. There are more pieces of software that can crash (docker engine, etc.). Not to mention potential performance issues and/or other subtle differences which may occasionally occur.
This isn't to say Docker has no value--it does, and we use it in production on a number of applications. But it is certainly more than just a package.
The strength of having a reusable medium (docker or otherwise) comes when there are integrations on top to manage deployment, failover, DNS, load balancing, certificates, etc... none of which is handled by docker. You must have noticed, since you had to implement some of these.
Kubernetes is getting adoption because it's helping with that. That's where the hard work and the value is.
Docker by itself is not notable, it's a fancy zip/rpm file.
I would be interested in hearing you expand on this if you don't mind :-)
Not even docker, mostly AWS or dedicated servers or even VPS...
No small site needs kubernetes/docker orchestration and that's fine.
Docker isn't just for orchestration. I've used it in a very small company and even use it for personal projects just because it provides me with an easy-to-set-up and portable development environment.
For my small personal projects, xampp is sufficient.
Docker can also be useful to create reproducible deployments, especially for languages like python when your dependencies may change between `pip install`s
Having said that, I agree 100% that for most small/medium sized companies kubernetes is overkill which creates more problems than it solves.
The problem with AWS etc is that you need external servers (i.e. no good if it needs to run on the local intranet). Also the cost of adding even a new VPS is much higher than spinning up another docker instance.
I'm pretty sure a lot of companies use docker simply to ease deployment of internal apps.
My previous employer is a major gaming/media company and they don't run Kubernetes and largely (almost completely) no containers either. Focus is on uptime and profitability and it has really paid off!
Care to explain what that means?
> If you’re (only) a docker expert, you’re in troubles right now. There are no more jobs looking for docker expertise and you’re dangerously close to unemployable.
This is such a silly and unrealistic argument - no one is a docker-only expert. And docker is a desirable skill, just not on its own. It's like saying that git-only experts are unemployable.
As a "docker expert" (I wrote a book on it [1], was one of the first people to get a Docker certification, and have gave multiple talks on it): please send help!
Except not really... The original article's writer doesn't know what they're talking about. If anything, knowing Docker has increased my employability in the new K8s-centric world.
1 - https://www.amazon.com/Deployment-Docker-continuous-integrat...
"docker-only expert" makes me LOL :)
Just speaking for myself, I have done a lot of work with docker and container orchestration but I have not worked on k8s. I see recent job ads are more for k8s experience.
I'd love to see a comparison between job ad terms and popularity on stack overflow & co. From the, admittedly little, experience I have with job postings, I've felt that a lot of those that are actually looking to hire (and aren't just using job ads as cheap PR/marketing) tend to be a mix of "it'd be nice if the person knew about X", "let's make this look interesting, so people actually bother to apply" and "hey, I've read an article about Y recently, that's a thing, right? put it in there".
Author here. I appreciate that it can seem silly but it is a very real thing I see happening.
I tried to illustrate with some concrete examples, like being asked about pods and ingress in interviews, which are kubernetes specific. Experience with docker doesn't help much here; gotta keep up with Kubernetes. You could de facto fail the interview if the interviewer realizes you've worked with docker alone and not kubernetes.
Companies are super biased toward kubernetes. They're really looking for the unicorn with years of kubernetes experience in production.
> If you’re (only) a docker expert, you’re in troubles right now. There are no more jobs looking for docker expertise and you’re dangerously close to unemployable.
> Kubernetes has succeeded where docker failed. Management buy-in.
This must be one of the silliest articles I've read in a long time. Computer science and engineering does not revolve around the latest devops flavour du jour. It will be something else in three years time anyway.
The real innovation around Docker was taking existing building blocks which were not straightforward to use on their own (linux cgroups, overlayfs) and bringing them under a cohesive package that's accessible to any developer.
I would say docker's real innovation was the introduction of reproducibility to system software at the OS-level. Or, it was a vote of no-confidence in the old way of patching/upgrading/deploying/building software. Or, static linking won.
The Linux features like cgroups/overlayfs etc that were used to deliver reproducibility at an acceptable performance cost are more of an implementation detail than the actual innovation, imo. I think one of the co-founders of docker might agree [1].
[1]: https://twitter.com/solomonstre/status/1111004913222324225
> Computer science and engineering does not revolve around the latest devops flavour du jour.
Job postings most definitely do.
This is a silly article really. If a company looks for a specific skill set like Docker or Kubernetes, I think the company is hiring people who would become irrelevant after 5 years because you never know what newfangled orchestration tools would be the "right way to do it" then. Fortunately, companies do not hire in a silly way like this. Sure they write "Kubernetes" as one of the many things in their job descriptions but all companies I have interviewed with would be equally okay to hire someone who demonstrates strong Docker skills or strong sysadmin skills or even strong OS skills.
These skills are transferable from one implementation to another. It makes no sense in this age to put all our eggs in one Kubernetes basket to be hireable. Expertise in the underlying computer science and engineering is far more important.
And yet I have still to deploy my first Kubernetes setup, while I've used Docker multiple times ...
Is there some kind of "Kubernetes-light" out there? Something in between running services like Nginx and Postgres on bare Linux machines and having this (I think complex) Kubernetes setup? It's important to say that I don't need any scaling capabilities (apart from maybe some load balancing in case of a machine failure).
Docker Swarm mode. Honestly it's great, I'm surprised I don't see it spoken about here more often. It's maybe 30 minutes to get started, much less of a brick-wall learning curve than Kubernetes. It does basics well and you can still do complex things like cluster storage etc. I'm using it to deploy a couple docker-compose stacks across a tiny two-region Droplet cluster and it's been excellent.
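For a flavour of it beyond `docker stack deploy`, here's a hedged sketch of creating a replicated swarm service with the Python docker SDK; the image, service name and replica count are just placeholders:

```python
import docker
from docker.types import EndpointSpec, ServiceMode

client = docker.from_env()

# On the first manager node you'd run client.swarm.init(advertise_addr=...)
# once; after that, creating a service is enough for swarm to schedule the
# replicas across the nodes and publish the port on every node (routing mesh).
client.services.create(
    image="nginx:1.17",
    name="web",
    mode=ServiceMode("replicated", replicas=3),
    endpoint_spec=EndpointSpec(ports={80: 80}),
)
```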
Anecdote: we had a much better experience in terms of reliability with k3s than Docker Swarm. The latter was slightly more complicated to admin and was much more prone to random lockups or needing all containers scaled down to nothing then back up.
K3s (run on the same hardware) had a similar API to full k8s, which we had previous experience with, and handled container lifecycles much more robustly.
I would give Nomad a try. After reading pages on pages of Kubernetes setup and trying the many half-working setup tools du jour, setting up Nomad was a breeze and keeping it running over many months took almost no effort at all. URL: https://www.nomadproject.io
Nomad is great, and easier to grasp than Kubernetes, and with Consul you also have DNS-based service discovery for your Nomad programs.
I agree with this 100%. Nomad can orchestrate anything: Docker containers, executables, jar files, etc.
Nomad+Consul is the definition of easy.
Has been posted here a few times.
Thanks, that looks interesting. I hope I don't get toooo many different suggestions. ;)
The easiest way is to just start experimenting with GKE on Google Cloud and see if it has value to you. Don't bother trying to deploy and manage it before you've kicked the tires.
Out of the box it's quite good, depending on what you are doing. Once you have cert-manager issuing you free certs, linkerd managing a service mesh, and Stackdriver giving you an entire ops stack, it's a bit hard to go back.
Thanks for the Linkerd shoutout! For those who aren't familiar with the project, Linkerd will give you per-service metrics (success rates, RPS, latency distributions), mutual TLS, load balancing, and a bunch of other cool stuff, on almost any Kubernetes app, right out of the box. No config necessary.
Is there a way to do this without spending a fortune on Google Cloud costs?
How big is the vendor-lockin though? Let's say I want to use both Hetzner Cloud and GCP instances - I am sure it's possible, but the question is how much of a hassle it is ...
That depends on your app portfolio (naturally), how impacted it is by k8s version changes, and how much cloud specialization it requires.
You're probably better off on k8s than just about anything you didn't write yourself with respect to vendor lock-in. The true lock-in, IME, comes when you use cloud features that aren't portable. Humble apps are fine, but ambitious apps are bound to those ambitious capabilities.
Seen next to comparable parallel installations, it's a pretty minimal amount of per-environment specialization, and you can generally tweak the environments to support transparent parallel deployments. It's the differences in how load balancers work and which resources are immediately available that create work, since the abstraction layer may not fully abstract away the environments.
Most of the issues I've faced (Ingress cough) have been solved by reading through the relevant pages in the Kubernetes documentation. A lot of the docs/tutorials provided by GCP link back there.
If you want to use both then you should investigate rancher - that's the problem they're trying to solve.
https://kubernetes.io/docs/setup/learning-environment/miniku...
Note: I’ve not yet tried it, but when we switch this was one of the things I was going to try first.
Good point, there is still a barrier between how people deploy (kubernetes) and how people develop (docker-compose). And it's not just different syntax, developers typically never have to deal with concepts like ingress and persistent volume claims. Things like converting compose to k8s manifests don't really help developers to understand how kubernetes works in production. As a result, Kubernetes tends to help to reinforce the need for ops people, for better or worse.
learning kubernetes here too. helm seemed to be the answer but...
i spent hours trying to make helm work on my mac and in the end gave up. the original error was with comparing floats in the latest version. that error seems to pop up every now and then in their issues list. now, there is no way to install an earlier version. not with brew anyways. installing from binaries or sources is nightmarish. and when you do, it might not like your minikube setup. and yaddi yaddi yadda...
it feels odd and strange to me that the trend nowadays is to accept these complicated systems and be amazed by how complicated they are...
AWS ECS (it's what we use)
This type of propaganda is an insult to human intelligence. What, is the author hoping that the readers underwent a lobotomy?
Or maybe it's so obviously wrong, it must be pure clickbait.
Docker is still pretty embedded in a lot of workflows, thanks in part to its use by default in many Kubernetes distributions, and the popularity of Docker Hub - not to mention various tutorials and scripts which refer to docker tooling.
But yep, I'd agree with the general premise here - with the emergence of tools like cri-o[0], podman and buildah (which let you build and ship container images without the need to run a background daemon like docker at all, avoiding the associated operational/security/system overheads) - docker may need to evolve or it'll quickly become less favourable.
Project Atomic[1] runs a good PPA with many of these packages for anyone interested and using Ubuntu.
Thank you for pointing out these emerging tools. They seem to be the next steps beyond Docker, built on the lessons learned.
Project Atomic's website is down at the moment - checking their GitHub, the site hasn't been updated in a while? https://github.com/projectatomic/atomic-site
Links for future reference, for myself and others:
Podman - https://podman.io/
Buildah - https://buildah.io/
Open Container Initiative - https://www.opencontainers.org/
From what little I've looked at podman, the "no daemon, rootless" story isn't quite as straightforward as it might appear.
For starters, you couldn't expose ports as a standard user running podman last time I used it. Also, every container got its own conmon process, so there's still an overhead; it's just done differently.
Fair point re: conmon, and yep, even in the strictest sense of the word, it is a daemon now that I read up on how it executes the container.
I guess it's better to say that only a monitoring daemon is required with this setup (rather than all of the additional daemon services that docker provides).
Re: rootless podman, it looks like there's a good resource to track progress here: https://github.com/containers/libpod/blob/v1.6.2/rootless.md - that must be a common ask, could be interesting to track.
(I'm definitely guilty of being overoptimistic about these tools, but do hope they improve because the principles behind them seem very sound)
A badly written, rambling article that seems to provide no insight and declares Docker dead because k8s is the flavour of the month. I felt stupider the longer I read on.
Docker has not built support for cgroups v2 - which has been available in Linux for 3 years. https://github.com/opencontainers/runc/issues/654
In order to force this issue, Fedora has made cgroups v2 the default, and mandatory, in the upcoming Fedora 31, causing docker to fail to run. https://github.com/docker/for-linux/issues/665
Podman (and other docker equivalents) have supported cgroups v2 for years.
I suspect that k8s will move away from docker to recommending one of the alternatives pretty soon.
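If you want to check which side of this divide a given host is on: a pure cgroup v2 host mounts the unified hierarchy at /sys/fs/cgroup and exposes a cgroup.controllers file there. A quick sketch (it deliberately ignores the hybrid setups some distros use):

```python
from pathlib import Path

def cgroup_version() -> int:
    # On a pure cgroup v2 host (e.g. Fedora 31 defaults) the unified hierarchy
    # is mounted at /sys/fs/cgroup and exposes cgroup.controllers at its root.
    if Path("/sys/fs/cgroup/cgroup.controllers").exists():
        return 2
    return 1

print("cgroup v%d detected" % cgroup_version())
```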
Out of curiosity, have you seen anyone using CRI-O/podman in production in place of Docker?
https://github.com/cri-o/cri-o/blob/master/awesome.md
There are sections on using cri-o on AWS EKS.
GKE already provides containerd for new deployments.
I'm not using any of them; still doing old-style on-premises deployments, VM based.
I bet in about 5 years time we will be reading a similar article about Kubernetes.
Already happening: just use serverless!
I guess you mean CGI.
xinetd
For 99.9% of the uses, k8s is a cannon, while the problem you're trying to solve is a fly.
Use the right tool for the job, please. Trying to force something, just because it's the buzzword of the day, will only waste money and bring suffering.
Kubernetes is a thick, complex wrapper around deploying Docker applications.
My opinion: Kubernetes is a simple wrapper around Docker containers, that has a tremendous number of gotchas.
The overall concept is pretty simple: You create a deployment that spins up pods which are your containers. You create ingress and services to direct traffic to the pods. You configure it all with environment variables through ConfigMaps and Secrets.
However, there are still so many one-line commands you need to add to YAML or weird networking issues, or set of commands you have to type in each time, or permissions that are hard to configure and manage... And creating a cluster is a pain, unless you use something like kops. Great tool but it too takes a few hours to figure out even the basics.
I think in time, Kubernetes will get worked out. I love the core of Kubernetes. It took so, so long to figure out the rest.
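To make the "deployment spins up pods" part concrete, here's a minimal sketch with the official kubernetes Python client; the app name and image are placeholders, and Services, Ingress, ConfigMaps and Secrets are layered on top of the same object model:

```python
from kubernetes import client, config

config.load_kube_config()  # uses your local kubeconfig, e.g. from minikube or kops

container = client.V1Container(
    name="web",
    image="nginx:1.17",
    ports=[client.V1ContainerPort(container_port=80)],
)

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

# The Deployment controller then creates and maintains two pods running the container.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

The YAML you'd normally write is just a serialization of these same objects, which is why the concepts carry over between kubectl, Helm charts and client libraries.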
Virding's law...
That's my impression too. Kubernetes and Docker work very well together, so Docker knowledge is not out of date. It's just that Kubernetes is now relevant too. And more in demand because fewer people are fluent in it yet.
I'm not particularly fond of Docker but I don't see it going away any time soon. I acknowledge there have been alternatives around which were better in some aspects than Docker, yet I haven't worked at a single company that didn't use Docker with Kubernetes or other orchestrators (e.g. AWS ECS). Not saying they don't exist, but it's very rare in my experience (both as an FTE and as a consultant).
Also, I have never met a dedicated "docker expert" as the article calls it. I mean, is there any company out there hiring people who only know Docker? Does that make any sense?
Docker may get replaced by alternatives as they start getting more traction over time, but I don't think this will happen all of a sudden - Docker is still relevant, for better or for worse.
It would be worrying if something as basic as Docker required an expert's worth of arcane procedures, workarounds, tricks, ancillary tools and so on.
In my experience Docker is almost as trouble-free as it should be, with straightforward tools to make mistakes and undo them; it requires good engineers who know what they want, not wizards who know how to get it.
They should have accepted the buyout offer when they had the chance, but were arrogant and insisted that their valuation should be greater than VMware's at the time. Of course this never really made much sense if you understand the behind-the-scenes tech Docker uses.
This is a great cautionary tale for founders and an awesome example of hubris at play.
Docker's biggest problem was that they provided tremendous value with their opensource product, leaving few to have any justifiable reason to pay them money.
They courted Riot Games for years, until finally Riot flat out told them they would never see a penny from them. There are many things that can be learned from a business perspective here...
The best reason to use Kubernetes (and in many cases the only reason) is to boost your employability.
It gets harder and harder to find a stack that doesn't rely on it.
At my company, we chose to use ECS/Fargate when possible. It integrates nicely with SSM Parameter Store for config and secrets, and has a simple service discovery feature based on DNS.
A few services run on EC2 + ASG, using AMIs built with Ansible and Packer.
Are we missing something by not using Kubernetes? Is the experience so amazing, compared to ECS? I don't care about vendor lock-in.
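On the SSM Parameter Store point above: reading config/secrets from it is a one-liner with boto3 (the parameter name here is made up), and ECS task definitions can also inject the same parameters as environment variables via their `secrets` field:

```python
import boto3

ssm = boto3.client("ssm")

# Fetch a SecureString parameter (e.g. a DB password) that the task role is
# allowed to read; WithDecryption handles the KMS decryption.
param = ssm.get_parameter(Name="/myapp/prod/db_password", WithDecryption=True)
db_password = param["Parameter"]["Value"]
```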
Author here. It's all about employability indeed and it's sad many other commenters don't seem to grasp that.
DevOps/SRE jobs are full-on discriminating for kubernetes experience, not docker, and preferably on their exact stack (AWS ECS, EKS, GKE, etc.)... it can get real tough as a job seeker if you're not on it.
This article just sort of meanders without going anywhere or providing any insight. Certainly no new insight. Docker the company didn’t manage to solve the right problems in time. K8s hype is through the roof.
But the irony is that the Docker infrastructure is a critical dependency for the vast majority of K8s users. And if it falls apart, a lot of stuff is going to break. I hope someone has some contingency plans for Docker Hub going away.
I tried to cover that near the conclusion. It's likely that someone would offer docker a reasonable amount because of the docker hub and the registered users. Don't think they would sell though.
AWS, Google and Azure should already have mirrors in place for their own offerings, and should be ready to substitute for docker hub.
This article makes the classic assumption that deployment of web services is the entire world.
We use docker as part of our CI, because that's what Gitlab uses for our CI system. It works very well. Of course we could use podman locally (and I do on some machines), but Gitlab will still be using docker for us.
On a side note, most of these systems use YAML, and as a dyslexic I really have a hard time spotting indenting issues and issues where I should have been using lists. Getting an IDE with support for it doesn't make it better.
This is like saying "The demise of Wheels and the rise of Automobiles"
I think that Kubernetes can run with CRI-O as the container engine instead of docker, with a nice performance increase since docker is more than just a simple runtime. IIRC it even has slots ...
Well my firm, a F200 organization, is going with Docker. We looked at Pivotal Cloud Foundry and Red Hat OpenShift and chose Docker. Why? For one it's cheaper but the killer was with Docker EE 3.0 we get Kubernetes and Swarm. We have vendors who are now deploying software to us using Docker and some are using Swarm and some are using Kubernetes. With Docker we get the best of both worlds. So it may be a bit too early to go ringing the 'Docker is Dead' bell.
OpenShift and Docker are apples and oranges.
I'm actually talking about Docker EE
Coming from someone who only uses docker for development purposes, Kubernetes feels like an overly complicated solution for a problem that 1% of development teams need.
Docker is too simple and limited to make a living off it
thehftguy sometimes has very good articles, but it's obvious he doesn't know/understand much about the container landscape. He would be correct if he said, "The demise of Docker swarm and rise of Kubernetes"
For those that don't know, Kubernetes is a container orchestrator. That matters when you have lots of containers (hundreds or thousands) and lots of servers to run them on. Instead of wiring them up and deploying them manually, kubernetes makes it easy: it decides which server to run them on and wires them together, and if a server goes down, it restarts the downed containers on new servers, provided you have the capacity.
Imagine that docker is a computer program, kubernetes is the operating system.
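One small way to see that "operating system" view from the outside is to ask the API server where everything got scheduled; a sketch with the kubernetes Python client:

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# Every pod records which node the scheduler placed it on. If a node dies,
# the controllers recreate its pods and they reappear on other nodes.
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    print(f"{pod.metadata.namespace}/{pod.metadata.name} -> {pod.spec.node_name}")
```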
Author here. Thank you.
The point is really about docker, not docker swarm. Kubernetes is integrating the whole ecosystem vertically and it's being leveraged to push out docker. There are lots of actors at play incentivized and actively working against docker (not just docker swarm).
I guess it's more of a business and marketing lesson if anything.
Docker swarm never really took off.
random networking issues
> If you’ve run a “apt-get install mysql” in the past decade, high chances it setup MariaDB instead, getting aliased and substituted transparently.
Is that true on Ubuntu/Debian? I couldn't find a source for this.
True on Debian. I don't remember many articles or much noise about it; it sort of just happened.
One article here; Debian 8 (jessie) is from 2015. https://mariadb.com/kb/en/library/moving-from-mysql-to-maria...
kube will die before docker. kube is not fun. VMware should buy docker. We should write a much better container orchestration platform with layer 7 in mind and multi regional availability.
What are you using to build your images for kubernetes?
Nixery ;-)
Thanks - I haven't seen this!
a docker image
However, at work I am explicitly disallowed from running Docker locally; instead I'm expected to build my images directly in my dev OpenShift environment - yes, from a Dockerfile, but no docker build for us ...
Exactly :D
Don't most people use both together?