Containers Don't Solve Everything

blog.deref.io

98 points by kendru 5 years ago · 98 comments

superkuh 5 years ago

It depends on the context. I don't know about corporate persons with profit incentives, but if we're talking human persons, then containers don't solve anything. They're just a symptom of the disease that is future shock. The underlying libraries we depend on just change too fast now, and no devs care about forwards compatibility, so we end up with every OS/distro having libs that stop working in about a year (or more like 3 months with Rust/JS/etc.).

The solution has to come either in the form of static compilation or, even less feasibly, getting devs to actually care whether their software runs on platforms more than a year old. Containers just make everything worse in all cases beyond the contrived "it just worked and I never need to change anything".

  • colechristensen 5 years ago

    Containers halfway solved some big existing problems that most people don't seem to see very well.

    Packaging is hard, and both debian-based and rpm-based systems (and really most others I've seen) are pretty awful (except the BSDs, which I've had a lovely time with).

    They're slow, they're stateful, writing them involves eldritch magic and a lot of boilerplate, and they're just frequently broken. Unless you're installing an entire OS from scratch, you're probably going to have a hard time getting your system into the same state as somebody else's. And while running that from-scratch OS install in an as-code way is definitely possible, it can take an hour.

    Containers came along and provided a host of things traditional packaging systems didn't, and they took over by storm; with them came a whole lot of probably unnecessary complexity from people wanting to add things. Adding things without ending up with a huge mass of complexity is hard and takes a lot of context knowledge.

    So we ended up solving a host of problems with containers and creating a whole new set along the way.

    • 5e92cb50239222b 5 years ago

      (Almost) nobody is using Arch Linux on servers, but I find its package system to be very good (not surprising since it was mostly copied from BSD ports).

      A few random examples (not the best you could find, just something I've used recently):

      - re-packaging pre-built binaries:

      https://aur.archlinux.org/cgit/aur.git/tree/PKGBUILD?h=visua...

      https://aur.archlinux.org/cgit/aur.git/tree/PKGBUILD?h=nomad...

      - building C from source

      https://aur.archlinux.org/cgit/aur.git/tree/PKGBUILD?h=tinc-...

      - building Go from source

      https://aur.archlinux.org/cgit/aur.git/tree/PKGBUILD?h=yay

      - patching and building a kernel

      https://aur.archlinux.org/cgit/aur.git/tree/PKGBUILD?h=linux...
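      For readers who haven't seen one: a PKGBUILD is just a bash file that makepkg sources. This sketch is hypothetical (package name, URL, and build steps are made up), but it shows why repackaging is so low-effort:

      ```shell
      # PKGBUILD (hypothetical): makepkg sources this file and runs
      # build() and package() for you in a staging directory.
      pkgname=hello-tool
      pkgver=1.2.3
      pkgrel=1
      pkgdesc="Example CLI tool (illustrative only)"
      arch=('x86_64')
      url="https://example.com/hello-tool"
      license=('MIT')
      source=("$url/hello-tool-$pkgver.tar.gz")
      sha256sums=('SKIP')  # real packages pin a checksum here

      build() {
        cd "hello-tool-$pkgver"
        make
      }

      package() {
        cd "hello-tool-$pkgver"
        # Install into the staging dir ($pkgdir), never into the live system.
        make DESTDIR="$pkgdir" install
      }
      ```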

      • messe 5 years ago

        Alpine (apk) and Void (xbps) have similarly nice packaging systems.

      • Topgamer7 5 years ago

        I was having problems building Wine, so I used the Arch PKGBUILD and just didn't do the install phase. That made compiling pretty simple, and all the outputs are nicely defined in the local AUR repo.

      • curt15 5 years ago

        Does Arch support installing multiple versions of libraries?

        • tomjakubowski 5 years ago

          Yes, but only if they're packaged separately. PKGBUILD is easy, so it takes very little effort to repackage older library versions under a new name (and patch dependents to use the new name) if you need them.

        • KingMachiavelli 5 years ago

          Not really. There are a few applications that can be installed side by side because the install path is different for each version; Java supports this, for example. But libraries like glibc are tied to one version, so any time those libraries change it triggers a rebuild of many packages.

    • the_duke 5 years ago

      > Containers halfway solved some big existing problems that most people don't seem to see very well

      A big reason for that is that, in the past, far fewer developers were confronted with this problem domain.

      In larger companies packaging and deployment was often the responsibility of ops, with some input from and interaction with development. That of course also meant much longer lead times, arguments about upgrading versions of libraries or other executable dependencies, divergence of production and development/test environments, and the associated unfamiliarity with the production environment for developers and hence often more difficult debugging.

      Ever since Docker (+ Kubernetes and various cloud specific container solutions) became so popular, a lot of devs now at least partially deal with this on a regular basis.

      Which is mostly a good thing, due to the negatives above.

      • stingraycharles 5 years ago

        > Ever since Docker (+ Kubernetes and various cloud specific container solutions) became so popular, a lot of devs now at least partially deal with this on a regular basis.

        But that’s in line with the whole premise of DevOps, right? That the strict separation between dev and ops is a bad thing, and it’s good that devs get involved with ops and vice versa.

        I don’t think this has to do with containers per se, but they do help a lot with that goal.

        • throwaway894345 5 years ago

          Agreed. The core concept is that we should automate away as much of the ops workload as possible so (1) devs don’t need to learn the whole ops skill set and (2) no one is doing things that computers could do automatically. Containers and orchestration technologies are a form of automating away a lot of ops work (if you need to package an application you don’t need to solve for SSH, package management, log exfiltration, monitoring, or any of a dozen other things).

    • kendruOP 5 years ago

      I think those are very good points. In my opinion, the hoisting of packaging concerns out of the language and OS levels was inevitable, and containers are an okay way to do that.

    • nikau 5 years ago

      Your experience seems to be the exact opposite of mine - debian packages are typically very elegant.

      Most corporate use of Docker I've encountered is a mess of stupid patterns like "RUN command A && command B && command C && ..." to reduce layers or some such nonsense, which makes debugging build failures tedious.
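      A sketch of the pattern in question, plus a marginally more debuggable way to write the same single layer (base image and packages are illustrative):

      ```dockerfile
      # The criticized pattern: one opaque RUN to keep the layer count down.
      FROM debian:bullseye-slim
      RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*

      # Same single layer, but `set -eux` echoes each command and stops on
      # the first failure, so the build log points at the broken step.
      FROM debian:bullseye-slim
      RUN set -eux; \
          apt-get update; \
          apt-get install -y curl; \
          rm -rf /var/lib/apt/lists/*
      ```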

  • titzer 5 years ago

    > They're just the symptom of the disease that is future shock.

    Yes, absolutely, and I hope you mean that in the capital-F "Future Shock", Alvin Toffler sense, because there is a lot he wrote that hasn't even been carried over and digested. Software is an endlessly disorienting sea of change, getting faster and thus worse as time progresses, and it's frankly madness at this point.

    It seems absolutely no one is committed to providing a stable platform for any purpose whatsoever. Even Java, where I spent many years having the absolute necessity of backwards compatibility with old (perhaps even dumb) classfile versions ingrained in me, has been making breaking changes as part of its ramp-up to semi-annual major version releases. Node Long Term Support "typically guarantees that critical bugs will be fixed for a total of 30 months."[1] Pfft. It's a joke. You can't get your damn API design straight by version 12? I'll do my damnedest to avoid you forever, then. It's so unserious and frankly irresponsible to break so much stuff so often.

    But change only begets more change. We're all on an endless treadmill, constantly adapting to the change for no reason. And people have to adapt to our changes, and so it goes.

    [1] https://nodejs.org/en/about/releases/

    • kaba0 5 years ago

      It's disingenuous to write off Java as not backwards compatible. The only change they made was closing the doors to the internals of the JVM, because those are implementation-dependent anyway (meaning programs relying on them wouldn't have worked on anything but OpenJDK) and are likely to change. You can still absolutely run a class file compiled for Java 1.0 on JDK 16.

    • koeng 5 years ago

      How about Golang in this case? AFAIK there haven't been any breaking changes yet.

  • zbuf 5 years ago

    This is well put.

    Containers side-stepped the deficiencies of Linux distributions, which had become so based on 'singleton' concepts; one init system, one version of that library etc.

    A shame, because there's an inherent hierarchy (everything from the filesystem to UIDs to $LD_LIBRARY_PATH) that could really allow things to co-exist without the kludge of overlay filesystems. It just was never practical to e.g. install an RPM into a subdirectory and use it from there.

    Containers aren't a very _good_ solution; they're just the best we've got, and they're still propped up by an ever-growing Linux kernel API as the only stable API in the stack...

    • MaxGabriel 5 years ago

      This sounds like a lot of stuff that Nix solves, the multiple versions of libraries coexisting part at least

      • AnIdiotOnTheNet 5 years ago

        Nix strikes me as the Linux community looking at an overly complicated problem of their own making and deciding that the solution is to add even more complexity.

        Don't get me wrong, from what I hear Nix actually does deliver on the promise for the most part, it's just that you have to learn a new language to use it effectively and of course it has its own quirks.

        • jzoch 5 years ago

          What solution wouldn't require its own specification + quirks? Whether it's a Dockerfile or a Nix package, I don't see the difference, besides that people tend to be familiar with only one of the many options.

          I'm not comparing whether Dockerfiles or buildpacks or Nix packages are more ergonomic than one another, but I do think your comment is... misguided. From what I have heard, Nix is pretty wonderful to use and simplifies the problem; it just requires that you learn about Nix a bit, which I think is a fair trade-off for the benefits it supposedly provides.

          • cormacrelf 5 years ago

            Anyone can build their first Dockerfile and deploy it for Node.js or similar in like 5 minutes. The tricks that make images smaller later use the same concepts and syntax. There is a reason it took over the world so quickly.
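            For what it's worth, that first Node.js Dockerfile really is about this small (file names assume a stock npm project with a server.js entry point):

            ```dockerfile
            FROM node:16
            WORKDIR /app
            COPY package*.json ./
            RUN npm ci            # install exactly what the lockfile pins
            COPY . .
            EXPOSE 3000
            CMD ["node", "server.js"]
            ```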

            Nix ... I have so far spent about 10 hours learning it to manage my machine. I have forgotten about 98% of it and abandoned the project. You feel like you're sitting in the middle of a spider web, and you can sense the whole system at once. Literally none of your prior knowledge of how to use a computer will help you. None of your existing build tool CLI can be used. Every package manager needs a nix-ifier, like node2nix. Everything you see in a nix file will have to be googled, searched in the documentation, searched in GitHub repos for some kind of example. Nix has rebuilt the world from scratch.

            If you're trying to make the next big thing, try to make it leverage people's existing knowledge. One truly excellent example is `compile_commands.json`. It does a very similar thing to Docker, where it extracts information from your existing build process, without actually changing the build process. The problem statement was that people wanted LSP (and predecessors) implementations to have access to a list of input files to a C/C++ compiler, but they didn't want to abandon Make and CMake etc. So they basically made a structured log of all the CC invocations, and a wrapper around CC that would parse the arguments and write to the log in JSON format. These days you get it for free with CMake[0]. You can use it with nearly every C/C++ build system on earth with a single CC=... argument to make.

            [0]: https://cmake.org/cmake/help/latest/variable/CMAKE_EXPORT_CO...
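            For context, compile_commands.json is nothing more than a list of records, one per compiler invocation. A file produced by `cmake -DCMAKE_EXPORT_COMPILE_COMMANDS=ON` looks roughly like this (paths are illustrative):

            ```json
            [
              {
                "directory": "/home/user/project/build",
                "command": "/usr/bin/cc -Iinclude -O2 -o main.o -c /home/user/project/src/main.c",
                "file": "/home/user/project/src/main.c"
              }
            ]
            ```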

            • tadfisher 5 years ago

              Your compile_commands.json example is essentially what tools like node2nix produce. For saner build systems, e.g. cmake and meson, simply including the build tool package in `nativeBuildInputs` along with dependent packages in `buildInputs` suffices to build 90% of C/C++ packages with Nix, with the remaining 10% being Qt (I jest).

              The real killer app for Nix is in testing now, and that's the "flakes" feature. Lots of this stuff will get way easier to use when you can throw "github:owner/repo" in an `inputs` set and get a working Nix builder for your project without needing to read through nixpkgs. I hope you give it a try again sometime, as it has changed my perspective on how software should be built, distributed, trusted, and deployed.

              • cormacrelf 5 years ago

                Nix is still trying to be a thing that replaces other build tools. The generated nix file is a recipe for doing what “npm publish” does when it builds a tgz. Every single language has a different way of doing this, and will require its own *2nix to be maintained alongside changes to eg package.json, Cargo.toml, Package.swift. The 2nix bit is the hard part; every package manager has to be re-implemented.

                Docker does not have this problem at all. Every build tool on earth works with it, with zero configuration.

                Compile_commands.json may have similar output to node2nix etc, but it infects nothing, replaces nothing. It works with all the different makefile alternatives with no additional effort. The closest similarity is with Docker: Docker builders intercept at the file system layer, covering every build system ever; compile_commands at the standard GCC-compatible shell arguments layer, covering ~all C build systems and compilers. Nix does not intercept anywhere, it asks you to use a new tool to do everything from scratch in a Nix-compatible way, covering no build systems.

                That’s not to say it isn’t great when you have already built a Nixified package manager replacement for your specific language ecosystem. But it’s not going to take over the world like Docker and compile_commands did. Imploring people to give it a shot is the only way, unfortunately. I will remain open to it, especially if someone can figure out a force-multiplier for these 2nix implementations.

                • tadfisher 5 years ago

                  > Nix is still trying to be a thing that replaces other build tools.

                  Not true! Nix wraps other build tools, and provides hermetic and reproducible environments to those tools. If the tools exposed a way to get the URL and SHA256 hash of every dependency it downloads from the Internet, then the "infection" doesn't need to happen, as you would simply supply those hashes to Nix, which in turn will happily allow them to be downloaded in the sandbox by the tool. That tools like node2nix exist speaks to the walled garden created by these tools and ecosystems, because they do not (easily) expose dependencies to their environment, and/or they do not (easily) accept dependencies from their environment.

                  This would absolutely be a problem with Docker as well, if you added the same requirements that Nix enforces in its sandbox, because otherwise you are allowing Docker to fetch dependencies by URL without specifying their contents.

                  • cormacrelf 5 years ago

                    > If the tools exposed a way to get the URL and SHA256 hash of every dependency it downloads from the Internet, then the "infection" doesn't need to happen

                    Yes. Good start. If you can make it so that exposing this information to Nix is easy enough that e.g. the NPM team does not need a PhD in the Nix language to write it to a file, then Nix will be a much more solid proposition. That data alone isn't enough, but that plus a DAG of what NPM will do to the downloaded tgzs is much closer. It's also enough for Cargo, and many other languages. The Nix language is cool to write by hand but, back to my original example, compile_commands.json could be written by a monkey. It needs to be that easy. It needs to be as easy as printing GraphViz DOT to stderr. Then, and probably only then, will Nix support start getting upstreamed.

                    The Nix language is probably Nix's biggest liability at the moment; they sought to make a single language, with a rapidly changing API, both for configuring your computer (by hand) and for making compilers reproducible. Compiler output! In an essentially esoteric configuration/programming language, which takes a lot of effort to port to a new ecosystem! No. Use JSON. Ideally you will never have to actually write Nix, the same way humans have never had to write compile_commands.json by hand, and the way nobody has ever had to construct a Docker image by hand out of individual tar files.
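                    Pinning a URL plus a hash is, to be fair, exactly how Nix's fixed-output fetchers already work; a sketch (the hash is a placeholder):

                    ```nix
                    # Nix allows network access for this derivation only because
                    # the output hash is declared up front.
                    fetchurl {
                      url = "https://registry.npmjs.org/left-pad/-/left-pad-1.3.0.tgz";
                      sha256 = "0000000000000000000000000000000000000000000000000000";
                    }
                    ```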

                  • yencabulator 5 years ago

                    Meanwhile, NixOS forces me to configure systemd units by writing Nix, not by writing systemd unit files.

                • cormacrelf 5 years ago

                  Re force multipliers, I think you could do much worse than an HTTP proxy! Many package managers support folks behind corporate firewalls, and you can then support anything that fetches code from anywhere else. Give it a specific Node.js version and a proxy, run npm install, and log all the network requests into a nix file. The biggest issue is probably TLS.

            • kaba0 5 years ago

              You can't really leverage what is fundamentally broken. To live up to Nix's claims, it has to heavily patch software. But this is a self-fulfilling thing: once it caters to enough people, other tools will natively support Nix.

          • csande17 5 years ago

            You're right that Docker is similar to Nix, at least in the sense they both seem to be trying to work around problems with the Linux packaging and library ecosystem by piling more of their own complexity on top. I suspect the comment you replied to wants to see the actual underlying problem solved.

            To use an example from another community, no amount of performance improvements to NPM will ever make it a good idea to depend on hundreds of one-liner "is number odd" or "left pad" packages. Papering over the problem with yet more technology only ossifies it, making it harder to solve for real.

      • zbuf 5 years ago

        Yes, no doubt there have been some solutions within Linux distributions that think more outside the box (though I'm not familiar with Nix). There are many home-grown solutions in production environments within organisations, too.

        As you suggest, these are probably 'pieces' of the puzzle, by no means 100% identical to how containers are used today. But I think we'd have ended up in a different place.

      • mook 5 years ago

        My understanding though is that nix tries to solve this globally (it manages your whole system, or your whole home directory, as opposed to docker, which has clearly demarcated separation between different images), and it doesn't reuse existing packaging (in particular the language, as in "apt install" etc.)

        There's definitely advantages that way, but there's also drawbacks.

        • tadfisher 5 years ago

          There's NixOS, which you are describing, and then there's `nix` the package manager and build system. The latter is installable on any Linux or macOS system as a normal binary, and can be used standalone as a build tool. My company uses this to standardize (and cache!) development environments across a diverse set of hardware; the environments do double-duty as hermetic CI builders as well.

  • bob1029 5 years ago

    > getting devs to actually care if their software runs on platforms more than a year old.

    This is why we don't play games with siloing responsibilities on the tech stack. Every single developer on the team is responsible for making the entire product work on whatever machine it is intended to work on. No one gets to play "not my job", so they are encouraged to select robust solutions lest they be paged to resolve their own mess in the future.

    Maybe those solutions are containers in some cases, but not for our shop right now. Our product ships as a single .NET binary that can run on any x86 machine supported by the runtime.

    • moonchrome 5 years ago

      So you don't support M1 Macs ?

      • bob1029 5 years ago

        No - the part of the product implicitly discussed above does not. We don't really have any intentions of running our services on piles of macbooks at the moment.

        That said, we do have an iOS client which is intended to run on such classes of devices. I loathe the fucker so much (dev experience is garbage) but our customers like it a lot so... here we are. 99% of the complexity lives on the server, so the app is not a daily struggle. We also have a UWP client, but it has its own set of "difficulties" that I won't get into at the moment.

        At some point I want to try to build a pure HTML5/canvas solution that can be served from a cheap-ass linux box and consumed by any device with a reasonable web browser implementation.

      • tracker1 5 years ago

        Looks like .NET 6 (formerly .NET Core), due for full release in a couple of months, supports M1 Macs[1] as a build target just fine. So does MAUI[2].

        1. https://github.com/dotnet/runtime/issues/43313

        2. https://docs.microsoft.com/en-us/dotnet/maui/get-started/ins...

      • tehjoker 5 years ago

        M1s support x86 via emulation.

  • raesene9 5 years ago

    Containers solve the problem of clashing library versions needed by different applications running on a single host (and I know there are other ways to solve this).

    This is really not a new problem :) I remember dealing with shared library versioning issues from not long after I started in IT in the '90s, and it's been a problem ever since.

    Solving that problem seems like a win to me.

    • javier10e6 5 years ago

      Containers add a substantial level of indirection for us developers. Now we have to grow a seventh arm to juggle them into our workflow. For production? Hands down the right solution. For development, I wish, only wish, we could live without them.

      • tracker1 5 years ago

        It's made the entire process MUCH easier for me... `docker-compose up -d deps` and I have all my background services running local... `... api` and the api is also running and I can concentrate on the UI.. `... ui` and it's all running. I can then run tests against the whole thing.

        Also, setup all the containers to include unit test results in the runtime container... this gets extracted/merged in CI/CD. Beyond this, I can stand-up the entire application and run through full integration and UI test suites in the CI/CD pipeline. Same commands locally... it all is much smoother than prior experiences.

        I will NEVER run a database install on my developer desktop again. Database deploys on the main application I work with, and unit tests, all finish in about 5 seconds or less (not including the initial download). I'm also able to run DB admin apps right alongside the DB.

        Persist volumes, run/test upgrades and from-scratch installs: it all goes really smoothly overall. I wouldn't ever want to go back to mile-long, step-by-step dependency instructions for getting a development environment running. WSL2 + Docker Desktop are pretty damned great.
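        The `up -d deps` trick maps onto a compose file where `deps` is just an aggregation point; a hypothetical sketch (images and service names are illustrative):

        ```yaml
        # docker-compose.yml sketch: `docker-compose up -d deps` starts
        # `deps` plus everything it depends_on.
        version: "3.8"
        services:
          db:
            image: postgres:13
            environment:
              POSTGRES_PASSWORD: dev
            volumes:
              - pgdata:/var/lib/postgresql/data   # persists between runs
          cache:
            image: redis:6
          deps:
            image: busybox
            command: "true"
            depends_on: [db, cache]
          api:
            build: ./api
            depends_on: [db, cache]
            ports: ["8080:8080"]
        volumes:
          pgdata:
        ```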

    • nikau 5 years ago

      It's solved until your company installs a CVE-scanning tool such as Twistlock and it turns out you have 100 different Docker images out in the wild that you need to update, rebuild, and redeploy.

  • throwaway894345 5 years ago

    Containers aren’t the final destination, but they’ve enabled polyglot orchestration i.e., an app developer can target Kubernetes without needing to manage the minutia of operating a bunch of Linux hosts. It seems like almost every company that isn’t using containers for SaaS software development ends up badly reinventing Kubernetes and sinking a ton of time and money into maintaining it, and as a “human person”, I’m glad that I can focus my efforts on higher-level problems. When a technology inevitably matures to replace containers, I’ll look into it, but for now containers are the best way to build and manage heterogeneous distributed systems.

    • tracker1 5 years ago

      For that matter, push for SRE roles that manage the K8s orchestration environment; as a developer, you can focus on a local docker-compose and spend more time on testing (unit and integration). The developer is responsible for the Dockerfile and for the build and test portions of the CI/CD process.

      Considering that the options across Kubernetes, Helm, Istio, etc. can get complex, the developer can focus on the boundary requirements: expected environment variables and peer systems/services.

      • throwaway894345 5 years ago

        I think that’s precisely the opposite of the SRE/DevOps model. The developers shouldn’t be managing their own clusters (istio, etc) but they should be able to define and maintain their own applications (not just the container code but also supporting infra).

        • tracker1 5 years ago

          Assuming that I can now take time away from other things to gain the knowledge to do so, sure. The same goes for taking development time away from all the other developers.

  • encryptluks2 5 years ago

    I don't think dependency management is the only benefit of containers. I personally like the isolation they provide and generally prefer running services in containers, even if they are using the same dependencies as my OS. I run Linux too, so I don't have to worry about any virtualization-framework overhead.

    • zozbot234 5 years ago

      Namespacing and isolation also unlocks additional features, such as VM-style checkpointing and migration (via the CRIU featureset, which AIUI is now part of the mainline kernel). Moreover, the 'container' workflow provides a common interface that the various sorts of orchestration/deployment/management platforms can then rely on.

    • rualca 5 years ago

      > I personally like the isolation they provide and generally prefer running services in containers, even if they are using the same dependencies as my OS.

      I would also not downplay the importance of Docker's support for software-defined networks and its ability to arbitrarily configure networking at the container level.

      I firmly believe that networking doesn't pop up so often in discussions of Docker because Docker solves that problem so fantastically well that a complex problem simply ceases to exist and drops out of everyone's mental model.

      • dahfizz 5 years ago

        Having to define complex networking completely internal to a server is a problem that docker created, not one they solved.

      • superkuh 5 years ago

        Have they fixed ipv6 support yet?

        • encryptluks2 5 years ago

          Yes they have, but there is also a plethora of other container daemons and tools now aside from Docker with IPv6 support.
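          For reference, turning IPv6 on for the default bridge is now a small config change in /etc/docker/daemon.json (the ULA prefix here is just an example):

          ```json
          {
            "ipv6": true,
            "fixed-cidr-v6": "fd00::/80"
          }
          ```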

  • tyingq 5 years ago

    Java running in a container is somewhat amusing because of this: you have several solutions to the problem of agnostic packaging (java/jar/ear/war/etc.) running inside another whole solution for agnostic packaging.

    • kaba0 5 years ago

      That's just Java EE (now Jakarta) and Spring (when deployed to an application server). And indeed, these were containers before they were cool: DB connections, pooling, connections to messaging services, etc., plus an environment where classes can easily get dependency injection, transactions, concurrency, and many other things.

    • 5e92cb50239222b 5 years ago

      I don't think there's another way to ship custom certificate authorities without using containers? It's something you absolutely have to do around here if you want to interact with government APIs of any kind.

      I relatively rarely work with Java and am probably mistaken.

      • tyingq 5 years ago

        I'm not saying containers aren't needed. Just that we keep trying to solve packaging and end up with more layers that have to duplicate large swaths of functionality. So we get java->containers->container orchestration, for example. The containers overlap some built-in java functionality, and so does the orchestration piece.

  • kaba0 5 years ago

    Nix (and Guix) does solve this issue in a novel way, and for some reason it doesn't get the recognition it rightfully deserves.

    The problem doesn’t start with virtualization, that is indeed a side-track.

  • gumby 5 years ago

    Containers just moved the compatibility barrier up the abstraction stack. That's not terrible (fewer and fewer people understand how their computer actually works), but all those same problems still remain; now they just apply to remote APIs instead.

  • jayd16 5 years ago

    But I don't want to have to care about any of that stuff, and containers let us not care about it. That's a huge solve.

encryptluks2 5 years ago

This looks more like an advertisement than a useful blog post.

Also:

> Consider also that Docker relies on Linux kernel-specific features to implement containers, so users of macOS, Windows, FreeBSD, and other operating systems still need a virtualization layer.

First, FreeBSD has its own native form of containers and Windows has its own native implementation. Docker != containers.

I really don't see how Docker (or containers as we mostly know them) relying on kernel features from an open-source operating system in order to run Linux OS images is something to even complain about, and there is nothing preventing Apple from implementing their own form of containers on macOS.

  • kendruOP 5 years ago

    I am familiar with FreeBSD jails (and IMO, they are actually superior to Linux containers in most respects). My point is not so much that other systems don't have the tech to make containers work - or that OS vendors are not capable of adding containers to their kernels - but that having container technology is not the same as having a smooth devex for containerized applications.

    • encryptluks2 5 years ago

      The fact is, Linux containers are probably hotter than anything else. Almost every enterprise is using them to some extent, and Kubernetes has become the platform of choice.

      Is vanilla Kubernetes easy for new developers? No, but there is an entire ecosystem offering tools and platforms to make development with containers as seamless as possible. Microsoft saw this, so they really had no choice but to adopt the container terminology and partner with Docker to stay relevant.

      My guess is that without containers, Microsoft would never have built WSL. If you want a smooth developer experience with containers, that is what solutions like GitLab offer. Even Microsoft's GitHub is essentially built around running various actions inside containers.

      I personally welcome the change. I can spin up a local Kubernetes cluster and test an entire cluster of applications locally if I want, or integrate it into Skaffold or whatever else and test live in the cloud. It really is a lot better than what we had before. I think the solutions though really come down to documentation and resources to help train new employees and acclimate them.

tracker1 5 years ago

I think the next step(s) will be something closer to what the combination of Cloudflare Workers + KV + Durable Objects gives you... I think there also needs to be some implementation of PubSub added to the mix as well as a more robust database store. Fastly has similar growing options, and there are more being advanced/developed.

In the end, there are only a few missing pieces to offer a more robust solution. I do think that making it all WebAssembly will be the way to go, assuming the WASI model(s) get more fleshed out (sockets, fetch, etc.). The multi-user web Doom on Cloudflare[1] is absolutely impressive, to say the least.

I kind of wonder if Cloudflare could take what FaunaDB, CockroachDB or similar offers and push this more broadly... At least a step beyond k/v which could be database queries/indexes against multiple fields.

I've been thinking about how I could use the existing Cloudflare system for something like a forum or for live chat targeting/queries... I think that Durable Objects might be able to handle this, but it could get very ugly.

1. https://blog.cloudflare.com/doom-multiplayer-workers/
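The Workers + KV model described above can be sketched as a small edge handler. This is a non-authoritative sketch: the `POSTS` binding name and stored keys are hypothetical, and the trimmed-down `KVNamespace` interface stands in for the real one from `@cloudflare/workers-types`.

```typescript
// Minimal KV-backed Worker (sketch). In a real Worker, the KVNamespace type
// comes from @cloudflare/workers-types and the binding is declared in
// wrangler.toml; here it is pared down to the one method used.
interface KVNamespace {
  get(key: string): Promise<string | null>;
}

const worker = {
  async fetch(request: Request, env: { POSTS: KVNamespace }): Promise<Response> {
    // Use the URL path as the KV key: a read-through cache for rendered posts.
    const key = new URL(request.url).pathname;
    const body = await env.POSTS.get(key);
    if (body !== null) {
      return new Response(body, { headers: { "content-type": "text/html" } });
    }
    return new Response("not found", { status: 404 });
  },
};

export default worker;
```

Richer queries than get-by-key (indexes over multiple fields, as the comment wishes for) are exactly what this model lacks without bolting on a Durable Object or an external database.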

cfors 5 years ago

Yes, containers don't solve the mess of third-party SaaS that every company is built around.

But that's why anytime you integrate with one of these tools you should be aware that there is a cost for maintaining that integration.

asim 5 years ago

I spent 6+ years fighting this exact battle. It's hard. It's resource intensive. And timing is everything. It requires either one company to front all the development cost and bring it to the world after validating it or it needs an ecosystem to emerge through a shared pain and understanding. We're not there yet.

My efforts => https://micro.mu

Oh and prior efforts https://github.com/asim/go-micro

kendruOP 5 years ago

Author here. I have been developing Docker applications for years now, and while the experience is better than it used to be, it's still not great. I work for Deref, which is working on developer tooling that is more amenable to modern development workflows. We'd love to hear what pains you have with the current state of development environments.

  • johnchristopher 5 years ago

    I am not a 10x engineer or a linux wizard.

    I wish someone would rewrite docker-compose as a single Go or Rust binary so that I don't have to deal with the Python 3 crypto package being out of date or something when simply configuring docker/docker-compose for another user (usually me on a different machine or new account).

  • ryanmarsh 5 years ago

    How do "modern development workflows" differ?

    • kendruOP 5 years ago

      When I started in software development, I was mostly working on monoliths. All dependencies were vendored or were expected to be dynamically linked on the system where they were deployed.

      Next, I started working with Docker and languages with better package management. Dependencies were fetched in CI and were either statically linked or packaged in a container with the application I was working on. Still, these were mostly monoliths or applications with simple API boundaries and a known set of clients.

      In the past few years, almost everything I have written has involved cloud services, and when I deploy an application, I do not know what other services may soon depend on the one I just wrote. This type of workflow that depends on runtime systems that I do not own - and where my app may become a runtime dependency of someone else - is what I am referring to as a "modern development workflow".

      • ryanmarsh 5 years ago

        Thank you. That’s a worthwhile distinction to make. I often hear the word “modern” bandied about by developers trying to advocate for something novel. If you haven’t already it might be rewarding to fully articulate this in long form, including the consequences and implications. I wish I was a better writer or I’d write this myself.

    • mikepurvis 5 years ago

      Not sure about the GP, but for me, it's a pretty big difference jumping from tools like docker to ones like buildah, microk8s— I feel like k8s really makes good on many of the original promises of the "container revolution" in terms of automatic provisioning/scaling/image lifecycle management, declaratively defining relationships between containers, adding a much needed upper layer of abstraction in terms of the pod, etc.

      I know docker has made it part of the way there over the years with Compose and so on, but it's all felt pretty ad-hoc, whereas k8s feels like a cohesive system designed against a clear vision (which makes sense, since it was designed as borg 2.0)— no one else working in this space had the benefit of having already built a giant system for it and used it at scale for years beforehand.

nimbius 5 years ago

the one thing containers addressed was rising costs from greedy VPS providers, serving as an agile framework to quickly evacuate from a toxic provider (cost, politics, performance, etc...)

providers in turn responded by shilling their 'in house' containerization products and things like Lambda for lock-in.

mikewarot 5 years ago

Virtual machines gained popularity as a kludge to get around the remarkably horrible state of operating systems. The inability to reliably save and restore the state of a computer grew so costly that it became worthwhile to pay the performance penalty of a layer of emulation/virtualization to route around it.

Containers were the next logical step, as each virtual machine vendor tried to lock in their users. Containers allowed routing around it.

Both of these steps could be eliminated if a well behaved operating system similar to those in mainframes could be deployed, so that each application sat in its own runtime, had its own resources, and no other default access.

There's a market opportunity here, it just needs to be found.

Zababa 5 years ago

Since the author mentioned it, is the 12-factor app still a best practice? Was it ever? I've seen the website a few times and all of it makes sense to me, but I haven't seen much discussion about it.

KingMachiavelli 5 years ago

Containers don't solve anything more than virtual machines. Containers are 'better' than virtual machines because they have less overhead and are 100% open source.

Containers and VMs let you divide and solve problems in isolation in a convenient manner. You still have the same problems inside each container.

First, Docker & k8s made using containers easy. Minimal distros like Alpine reduce containers to a set of one or more executables. You could implement the same thing with a system of systemd services & namespaces.
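The systemd-plus-namespaces alternative can be sketched as a unit file. A hedged sketch only: the unit name and paths are hypothetical, though the sandboxing directives are standard systemd options.

```ini
# /etc/systemd/system/myapp.service (hypothetical unit)
[Unit]
Description=myapp with container-like isolation

[Service]
ExecStart=/opt/myapp/bin/server
# Namespace/sandboxing options that approximate what a container runtime does:
DynamicUser=yes          ; run as a transient, unprivileged user
PrivateTmp=yes           ; private /tmp mount namespace
PrivateDevices=yes       ; no access to physical devices
ProtectSystem=strict     ; read-only view of the OS
ProtectHome=yes
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX

[Install]
WantedBy=multi-user.target
```

What this does not give you is the image format and registry workflow, which is a large part of why Docker won.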

But now that everything was a container, you need a way to manage what & where containers are running and how they communicate with each other.

It looks like 90% of the stuff the different container tools and gadgets try to solve are issues they themselves created. You can no longer install a LAMP stack via 'apt install mysql apache php7.4'; instead you need a tool that sets up 3 containers with the necessary network & filesystem connections. It's certainly better, because it is all declaratively defined, but it is still the same problem.
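The declarative replacement for that one apt command looks roughly like the Compose file below. A sketch under stated assumptions: image tags, the password, and the ./src mount are placeholders, and wiring httpd to php-fpm still needs its own config inside the images.

```yaml
# docker-compose.yml (sketch): the LAMP stack as three wired-up containers
services:
  db:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder, not for production
    volumes:
      - db-data:/var/lib/mysql       # state survives container replacement
  php:
    image: php:7.4-fpm
    volumes:
      - ./src:/var/www/html          # application code, mounted in
  web:
    image: httpd:2.4
    ports:
      - "8080:80"                    # only the web tier is exposed
    depends_on: [db, php]
volumes:
  db-data:
```

Same stack, same problem, as the comment says; the gain is that the topology is now written down instead of living in a sysadmin's head.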

This is why I mostly stayed away from containers until recently. The complexity of containers really only helps if you need to replicate a certain server/application. You will still need to template all of your configuration files even if you use Docker, etc.

What is changing everything IMO is NixOS because it solves the same issues without jumping all the way to Docker or k8s. Dependencies are isolated like containers but the system itself whether it is a host/standalone or a container can be defined in the same manner. This means that going from n=1 to n>1 is super easy and migrating from a multi-application server (i.e a pet server) to a containerized environment (i.e to a 'cattle' server/container) is straightforward. It's still more complex and a bit rough compared to Docker & k8s but using the same configuration system everywhere makes it worthwhile.
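The n=1 to n>1 move described above can be sketched in NixOS configuration. This is a hedged sketch: `web.nix` is a hypothetical module (say, one that enables nginx), while `containers.*` is the built-in NixOS declarative-container option set.

```nix
# configuration.nix (sketch): the same hypothetical module runs on the
# host today and inside a declarative container tomorrow.
{ config, pkgs, ... }: {
  # Pet server: just import the module directly on the host.
  imports = [ ./web.nix ];

  # Cattle: the identical module, isolated in a systemd-nspawn container.
  containers.web = {
    autoStart = true;
    config = import ./web.nix;
  };
}
```

The point of the comment holds either way: the configuration language is the same in both places, so promoting a service to a container is a few lines, not a rewrite.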

dekhn 5 years ago

the one problem containers solved for me better than anything I ever used previously on Unix/Linux is hierarchical resource tracking. I work with many codebases that fork from their main binary and do their work in subprocesses. If your resource manager isn't scraping /proc to invert the process tree, it needs a way to assign resources to process trees such that the entire tree's sum cannot exceed the resource limit.
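For contrast, here is a sketch of the /proc-scraping approach the comment alludes to (Linux-only, racy by design, and exactly what cgroups make unnecessary by accounting for the whole tree in the kernel):

```python
# Sketch: sum memory for an entire process tree by scanning /proc and
# inverting the parent -> child relation. Processes can exit mid-scan,
# so this is inherently approximate.
import os

def process_tree(root_pid):
    """Return root_pid plus all descendant PIDs, from a /proc scan."""
    children = {}
    for entry in os.listdir("/proc"):
        if not entry.isdigit():
            continue
        try:
            with open(f"/proc/{entry}/stat") as f:
                # /proc/<pid>/stat: "pid (comm) state ppid ...".
                # rsplit on ")" because comm itself may contain ")".
                ppid = int(f.read().rsplit(")", 1)[1].split()[1])
        except (OSError, ValueError, IndexError):
            continue  # process exited while we were scanning
        children.setdefault(ppid, []).append(int(entry))
    tree, stack = [], [root_pid]
    while stack:
        pid = stack.pop()
        tree.append(pid)
        stack.extend(children.get(pid, []))
    return tree

def tree_rss_kib(root_pid):
    """Total resident memory of the tree in KiB (VmRSS per process)."""
    total = 0
    for pid in process_tree(root_pid):
        try:
            with open(f"/proc/{pid}/status") as f:
                for line in f:
                    if line.startswith("VmRSS:"):
                        total += int(line.split()[1])
        except OSError:
            pass  # process gone, or a kernel thread with no VmRSS
    return total
```

A cgroup, by contrast, charges every fork to the same accounting bucket automatically, so the kernel enforces the limit on the sum rather than userspace polling it.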

forgotmypw17 5 years ago

My container is POSIX :)

  • ebiester 5 years ago

    What is your strategy that works for every packaging system and every version of every library you depend on?

    Nothing says love like realizing that you are segfaulting due to a library version you didn't test against subtly changing its behavior.

    • forgotmypw17 5 years ago

      I use almost no libraries and stick to languages which have prioritized stability for 20+ years.

      This amounts to using Perl, bash, and POSIX.

      On the client side, of course, it is HTML and JS, which I use a very limited subset of to improve compatibility.

jmartens 5 years ago

Follow up article: Containers don't solve anything.

  • 1_player 5 years ago

    6 months later, the containerless movement takes root.

    2 years later, the seminal article: Perhaps containers were not such a bad idea.

OfflineSergio 5 years ago

What solves everything?

  • ISV_Damocles 5 years ago

    The heat death of the universe?

  • swagasaurus-rex 5 years ago

    Containerization built in to the OS, with strict privacy controls on what containers can access inside of other containers.

    Every application runs in its own container, unless it is granted granular permissions to do otherwise.

    The code and assets for a program belong in their own quarantined section, not spread out over the filesystem or littered around /etc/ and /var/.

    Built in networking for these containers.

  • the-dude 5 years ago

    Silver bullets

    • jtolmar 5 years ago

      Only if you have physical access to the server.

    • ozim 5 years ago

      Well you are onto something.

      Even if it is a joke, people want silver bullets. Those kill the hairy problems, which we can call werewolves.

      Downside is, hairy problems, just like werewolves, come from people. So in the end these are people problems, not container-tech or other stack problems. There are no werewolves without people :)
