Containers are tents

increment.com

192 points by vmarsy 5 years ago · 108 comments

tptacek 5 years ago

That's a valid way to look at it, but there are other ways. Containers are also a simple, practical way to bundle applications and their dependencies in a relatively standardized way, so they can be run on different compute fabrics.

That sense of the term isn't loaded with any specific notion of how attack surfaces should work. I think modern "Docker"'s security properties are underrated†. But you still can't run multitenant workloads from arbitrary untrusted tenants on shared-kernel isolation. It turns out to be pretty damned useful to be able to ship application components as containers, but have them run as VMs.

https://fly.io/blog/sandboxing-and-workload-isolation/

  • Saint_Genet 5 years ago

    Your first paragraph pretty much sums up what Docker is: it's a convenient way to design and build a system, but it is not a security mechanism.

    If you’re building a system that’s handling classified information, there is probably not an accreditation authority in the world that would let you use containers or even hypervisors as a way to separate different information classes.

    • foobar33333 5 years ago

      Docker _should_ be secure, any part that isn't secure is a bug which can be reported. That's disconnected from the reality of whether Docker actually is secure, but in theory it is meant to be.

      Other implementations like Podman get even better security by not running as root.
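
      For instance (a minimal sketch; the image and port are arbitrary), a rootless Podman invocation looks just like Docker, except that root inside the container maps to an unprivileged UID on the host:

        # run as an ordinary user; no root daemon is involved
        podman run --rm -p 8080:80 docker.io/library/nginx
        # show how container UIDs map onto unprivileged host UIDs
        podman unshare cat /proc/self/uid_map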

      • tptacek 5 years ago

        The fundamental flaw of the Docker container security model is the shared kernel, which is a gigantic attack surface in which vulnerabilities are present, somewhat routinely, in functionality that can't be masked off with system call filters.

        The win of virtualization is that the machinery required to hypervise a kernel is much, much smaller than the kernel itself; to use the 70s terminology, it's a minimized trusted computing base.

      • Saint_Genet 5 years ago

        Absolutely it should be as secure as possible, but the fundamental concept of what a container is means it cannot be used for some high security concepts. One of the cornerstones of classified information security is physical separation, and containers just can’t provide that.

  • djmetzle 5 years ago

    > I think modern "Docker"'s security properties are underrated†

    100% agree.

    The docker/CRI-du-jour (by default) strips off many "dangerous" system capabilities. By default a process on Linux can hold the full set of kernel capabilities (a few dozen), and most container runtimes strip that down to around a dozen. Those numbers are not exact.

    Stripping down the system level capabilities of your workload is assuredly a security improvement over running that workload "bare metal" on the system.

    Ref: https://www.redhat.com/en/blog/secure-your-containers-one-we...
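
    As a rough illustration (the flags are standard Docker; the myapp image is hypothetical), you can go further than the defaults by dropping everything and adding back only what the workload needs:

      # start from zero capabilities, then grant the single one the app needs
      docker run --rm --cap-drop=ALL --cap-add=NET_BIND_SERVICE myapp

      # compare the effective capability mask with and without the defaults
      docker run --rm alpine grep CapEff /proc/1/status
      docker run --rm --cap-drop=ALL alpine grep CapEff /proc/1/status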

  • globular-toast 5 years ago

    Technically what you're describing is an image. Might sound pedantic but interchanging container and image does often cause confusion in my experience.

    • 1vuio0pswjnm7 5 years ago

      I was making these all the time on NetBSD for dd'ing to USB sticks long before "Docker". FFS images containing only a bootloader and alternate kernels with embedded userlands. One was the "update" kernel and the other was the running kernel. I could pull out the stick after booting; I could also mount the stick and edit the update kernel. The userland was generally a single statically compiled binary, like busybox, but better. I thought about the possibilities of distributing software by bootable USB stick, but not the possibility that people might run the images in VMs.

  • nijave 5 years ago

    >you still can't run multitenant workloads from arbitrary untrusted tenants on shared-kernel isolation

    All those bargain basement OpenVZ "VPS" providers beg to differ :)

    There's also gVisor

  • RcouF1uZ4gsC 5 years ago

    > Containers are also a simple, practical way to bundle applications and their dependencies in a relatively standardized way, so they can be run on different compute fabrics.

    What I find interesting is that many uses of containers are just reinventing statically linked binaries in a more complicated form.

    • nickelpro 5 years ago

      Those are two different things. If I need GCC 11 and its associated standard libraries and toolchain to build my app, and I need to build that app on CI services that ship an older version of GCC, I need some way to bundle my dependencies for arbitrary compute fabrics.

      The answer to this is a container. Containers don't "reinvent" static linking, they solve problems that go beyond static linking.
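
      A minimal sketch of that workflow (gcc:11 is the official toolchain image on Docker Hub; the file names are illustrative):

        # build with a pinned GCC 11 toolchain, whatever the CI host ships
        docker run --rm -v "$PWD":/src -w /src gcc:11 \
          g++ -std=c++17 -O2 -o app main.cpp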

      • coldtea 5 years ago

        >If I need GCC 11 and its associated standard libraries and toolchain to build my app, and I need to build that app on CI services that ship an older version of GCC, I need some way to bundle my dependencies for arbitrary compute fabrics. The answer to this is a container. Containers don't "reinvent" static linking, they solve problems that go beyond static linking.

        The problems they solve might go beyond "static linking", but they are accidental-complexity problems that don't go beyond namespacing.

        With proper namespacing support, one could trivially build "with GCC 11 and its associated standard libraries and toolchain" on a CI service that ships with "an older version of GCC".

        Ideally, one would just need a local folder with the GCC11 and its dependencies, and at most an ENV entry for where to pick up deps (optimally not even that, the GCC11 binary should give precedence to the local versions within the same folder by default).

        But, no, instead we need to juggle 20 folder locations, PATH and ENV variables, burn build paths into libraries and executables, and so on...

      • rualca 5 years ago

        > The answer to this is a container. Containers don't "reinvent" static linking, they solve problems that go beyond static linking.

        This.

        I would also point out that containers handle other critical features, like their virtual network, which obviously goes way beyond what may be simplistically described as "static linking".

    • tptacek 5 years ago

      They're more of a generalization of statically linked binaries than a reinvention.

    • 0xCMP 5 years ago

      I think of Docker as a universal static compiler. And I mean that in a positive way: static compiling makes a lot of sense given the incredible complexity we often find ourselves in. There really is no other way to distribute node/ruby/python apps in a static way, and where one exists it's limited in serious ways, often being ONLY for a specific ecosystem (e.g. wheels for Python).

      • snovv_crash 5 years ago

        There are many ways of doing this for Python... pyinstaller, py2exe, Nuitka to name a few

        • coldtea 5 years ago

          Python is probably the worst example for achieving this...

          The fact that "there are many tools to do this for Python" is already a big red warning sign...

        • RexM 5 years ago

          Cool, now do node and ruby, using the same tool.

        • ta988 5 years ago

          Yes, and they all tend to crumble when you have complex use cases with numpy, scipy, Qt, etc. They can be a pain to deploy and manage remotely as well.

    • flatline 5 years ago

      They are different means of composition. Dynamically (I assume that’s what you meant?) linked libraries allow you to compose binary artifacts. Containers allow you to compose services.

      • lanstin 5 years ago

        Not sure what you mean. The network lets you compose services. It has for decades been possible to do that.

        Containers as a form of static linking means that you ship one thing to prod, it has everything you need locally in it, and it can't be changed without you releasing a new one thing. If someone else upgrades the MySQL client version on the host, your code keeps using the version you tested with, like a static binary or like a Python venv with pinned versions or vendored dependencies. It is a lot simpler to manage dependencies this way; the downside is that if one of your dependencies has a security advisory, it can't be updated by someone else rolling out a new version. You have to update it, so unowned code becomes more expensive in that scenario.

        • flatline 5 years ago

          Interesting. I think you are talking about intra-container composition, and I’m talking about inter-container composition. I see what you are saying (I think) about dependencies within a single container.

    • brian_cunnie 5 years ago

      > many uses of containers are ... statically linked binaries in a more complicated form

      I have found that to be true in at least one case—I had built a custom DNS server in Go (statically linked), and originally planned to run it in a container, but on further reflection realized the container brought no added value, and it was much simpler to write a systemd service control script than to bring in the extra baggage of a container ecosystem to run the DNS server.
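
      For comparison, the unit file for a statically linked binary really is tiny (a sketch; the service name and binary path are hypothetical):

        # /etc/systemd/system/mydns.service
        [Unit]
        Description=Custom DNS server (statically linked Go binary)
        After=network.target

        [Service]
        ExecStart=/usr/local/bin/mydns
        Restart=on-failure
        DynamicUser=yes
        AmbientCapabilities=CAP_NET_BIND_SERVICE

        [Install]
        WantedBy=multi-user.target

      Then systemctl daemon-reload && systemctl enable --now mydns and you're done.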

      • ric2b 5 years ago

        That's a very basic example. Let's say your program also depends on ffmpeg to convert some images, psql to interact with a database and a few other non-library dependencies.

        With containers you can trivially ensure those are always present and with the correct versions.

        Plus containers do give you some security benefits when compared to running natively.
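
        A sketch of what that looks like (the Debian package names are real; the myapp binary is hypothetical):

          # Dockerfile: pin the non-library dependencies right next to the app
          FROM debian:bullseye-slim
          RUN apt-get update && \
              apt-get install -y --no-install-recommends ffmpeg postgresql-client && \
              rm -rf /var/lib/apt/lists/*
          COPY myapp /usr/local/bin/myapp
          ENTRYPOINT ["myapp"]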

    • catlifeonmars 5 years ago

      Exactly! I don’t see this as a criticism of containerization so much as it is a praise of static linking.

      What containerization enables is conferring some of the advantages of static linking on languages and libraries that don't natively support it.

      • rualca 5 years ago

        > Exactly! I don’t see this as a criticism of containerization so much as it is a praise of static linking.

        Not really. It seems the keyword "static linking" is being abused to refer to stand-alone executables, because that's what some people know. Yet calling containers a kind of "static linking" is simplistic and incorrect, even taking the standalone-executable interpretation into account.

        If anything, container images are installers, and containers are the end result of installing and configuring those images, which mostly goes unnoticed because it works so well, especially the networking part. More importantly, containers are designed to be both ephemeral and to support multiple instances running in parallel on the same machine.

        Then there's also the support for healthchecks, which allows container engines to not only determine when they should regenerate containers, but also provides out-of-the-box support for blue-green deployments.

        And absolutely none of this fits the "static linking" metaphor.

        • vladvasiliu 5 years ago

          I'd say that the static linking metaphor refers to the container image itself, in that it's standalone and (fairly) "universal".

          All the other things you talk about could be set up for standalone binaries as some form of orchestration, after installation, as you say.

          To me, the analogy doesn't have to be 1:1 for it to work. Yes, that means there are edge cases which should be taken into account, but that doesn't make it useless.

          Especially if you look at how most developers see containers: "I'll give you this image, which I know works [in a given way] and you can run it with something that understands it". You can go ahead and set up a full K8S cluster to run it, or you can run it on Docker Desktop on a random Windows / Mac laptop. "It just works", and it's in this sense that I think the "universal static binary" analogy works.

          I get the feeling that all those fancy orchestration tools (health checks, blue / green deployment, fancy network setup, etc.) have seen impressive growth around container runtimes, and are therefore often thought of as belonging together, but I don't think that one requires the other or that one couldn't exist without the other.

          I'm curious to see what kind of ecosystem will grow around new developments such as Firecracker and "unikernel containers". I seem to remember a post on HN the other day about some effort by google to run go binaries directly on some VM kernel.

          • rualca 5 years ago

            > I'd say that the static linking metaphor refers to the container image itself, in that it's standalone and (fairly) "universal".

            That metaphor makes no sense beyond the standalone part.

            Container images are way more than mere stand-alone statically linked binaries, much like installers (deb/rpm/MSI/PKG/etc.) are way more than statically linked binaries.

            If anything, container images are a kin to fat JARS or macOS's bundles, but even that metaphor leaves key features out, like healthchecks, image inheritance, and of course support for software defined networking.

            > To me, the analogy doesn't have to be 1:1 for it to work.

            The whole point is that the metaphor makes no sense and completely misses the whole point of containers.

            No one uses containers because they want statically linked binaries. Or even installers or packages. At all.

            What container users want is what containers provide, and nothing that statically linked binaries, zips, installers, or bundles offer comes close. We're talking about being able to deploy and scale the same app at will in a completely sandboxed and controlled environment. We're talking about treating everything as cattle, from services to system architecture. We're talking about out-of-the-box support for healthchecks, and restarting apps when they fail.

            Each and every single one of these features is made available with docker run. That's what containers offer.

            None of this has anything to do with static linking anything, and insisting on this metaphor shows that people completely miss the whole point of containers.

            • nyx__ 5 years ago

              Containers aren't Docker. They're a combination of Linux features. Your container runtime might provide health checks and fancy features, but containers do not.

              > We're talking about being able to deploy and scale the same app at will in a completely sandboxed and controlled environment. We're talking about treating everything as cattle, from services to system architecture.

              Same is true for AMIs and VMs. Containers are the technology, not the pattern.

              • rualca 5 years ago

                > Containers aren't Docker.

                Discussing whether containers are Docker or not in this discussion is a non sequitur fueled by needless nitpicking and being pedantic for the sake of being pedantic.

                The whole point is that all those features that I listed are basic container features, not higher-level concepts provided at the container orchestration level.

                > Same is true for AMIs and VMs.

                No, not really. Running something in a VM is not the same as running a containerized process.

                • nyx__ 5 years ago

                  I'm not saying that running something in a VM is the same as in a container -> that's the whole point of the article. I'm saying that just like you should treat containers like cattle, you can treat VMs like that too. What do you think folks did before containers existed :)

        • catlifeonmars 5 years ago

          In practice, I think the analogy holds up. A sizable chunk of use cases for containerization is to confer the benefits of standalone executableness, which is similar to a sizable chunk of use cases for statically linked binaries.

          • rualca 5 years ago

            > A sizable chunk of use cases for containerization is to confer the benefits of standalone executableness,

            No, not really. Just because in the end you get to run an application, that does not mean it's reasonable to explain things in terms of static linking.

            This broken metaphor is particularly counterproductive once you take into account that container images also provide you the tools to bundle interpreters, run shells, and create temporary file systems automatically.

            Talking about containers in any way that leaves out the containing part is counterproductive and a bad mental model.

            • catlifeonmars 5 years ago

              Yes, those are things you can do with containers. I don't get your point. It's not like I said containers are only useful for bundling dependencies.

              As an aside, I think we’re approaching the concept from two different perspectives. I’m working backwards from a (sub)set of use cases, while you appear to be working forwards from a set of capabilities. I think both approaches have their place.

        • catlifeonmars 5 years ago

          > More importantly, containers are designed to be both ephemeral and support multiple instances running in parallel on the same machine.

          This sounds a lot like how all OS processes work (regardless of a binary's linking type).

          • rualca 5 years ago

            You don't create persistent file system mounts by launching a process, nor do you mount files into the process's file system.

    • otabdeveloper4 5 years ago

      A Docker image is really just a .tar.gz under the hood, with a little bit of metadata.

      A Docker image is really just a chroot + some cgroups resource limits.
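
      The image half of that claim is easy to verify (alpine chosen only because it's small):

        # an image on disk is layer tarballs plus JSON metadata
        docker save alpine -o alpine.tar
        tar -tf alpine.tar   # manifest.json, a config *.json, and layer tars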

      • amarshall 5 years ago

        > A Docker image is really just a chroot + some cgroups resource limits.

        No, because an image specifies nothing about the runtime. Just add a kernel and bootloader and one can boot most images. Further, most container runtimes include a lot more than chroot and resource limits: namespace isolation (process, user, network), seccomp rules, SELinux contexts, etc.

        • otabdeveloper4 5 years ago

          > Just add a Kernel and bootloader and one can boot most images.

          Certainly not true of any of the images I work with.

          • amarshall 5 years ago

            What sort of images are you working with? I can fairly straightforwardly boot debian:stable with no modifications to the image using direct kernel boot. Is everything perfect? No, but it does boot.

            • otabdeveloper4 5 years ago

              None of my images bundle a Linux distribution.

              (Indeed, most don't even bundle bash or coreutils.)

              • amarshall 5 years ago

                Sure, but that still doesn’t necessarily mean it won’t work. I can successfully direct kernel boot a VM where the entire filesystem is just a single statically-linked binary and it boots and runs it (just set init= or put the binary at /sbin/init). Some programs might need /proc /sys /tmp, etc., and if they do a bit more work needs to be done of course, but not all do.
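
                A rough sketch of that flow (image, sizes, and paths are illustrative; it reuses the host's kernel):

                  # flatten a container image into a disk image
                  docker export "$(docker create debian:stable)" -o rootfs.tar
                  truncate -s 2G rootfs.img && mkfs.ext4 -F rootfs.img
                  sudo mount -o loop rootfs.img /mnt
                  sudo tar -xf rootfs.tar -C /mnt && sudo umount /mnt
                  # direct kernel boot: no bootloader inside the image
                  qemu-system-x86_64 \
                    -kernel /boot/vmlinuz-"$(uname -r)" \
                    -drive file=rootfs.img,format=raw,if=virtio \
                    -append "root=/dev/vda rw init=/bin/bash console=ttyS0" \
                    -nographic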

    • globular-toast 5 years ago

      Static linking was done out of necessity more than anything. It's the obvious way to compile a program. The fact that you got a single distributable executable was merely a convenient side effect.

    • mrweasel 5 years ago

      If it was just a replacement for statically linked binaries, I’d be less concerned. In reality people stuff EVERYTHING into containers: database, queue, a webserver, and your application; it all goes into one container.

      • crummy 5 years ago

        I thought one of the first rules was one container per app/service/whatever.

        • dspillett 5 years ago

          The first rule of rules is to not expect people to follow any particular rule!

          There is nothing enforcing the mapping of one container to one service so people will have multiple if they find it convenient, or sometimes if they simply don't know the "rule".

          A lot of people use containers as lightweight VMs. Some use them as not-so-light VMs, in fact. In that case multiple services in one is practically expected.

        • mrweasel 5 years ago

          Correct, you also shouldn’t have supervisor processes in a container, as this prevents detection of crashed containers.

          We’ve seen containers with all sorts of weirdness, where half the service could crash and Docker would never notice, because the supervisor process was still running and that was the entry point for the image.

      • tjpnz 5 years ago

        You might be able to get away with that for a development environment a la Vagrant. But doing that in production sounds scary, and I'm saying that as a mere dev (who has very little to do with ops).

      • echelon 5 years ago

        > In reality people stuff EVERYTHING into containers: database, queue, a webserver, and your application; it all goes into one container.

        That's simply doing it wrong! If people are doing this, you can't point to containers as the problem.

jsiepkes 5 years ago

Should be noted that a portion of this (valid) criticism applies specifically to the most prominent "container" implementation, Docker. Not containers as a whole.

For example, resource isolation with the Solaris / Illumos container implementation (zones) works just as well as full-blown virtualization. You are just as well equipped to handle noisy neighbors with zones as you are with hardware VMs.

> Much as you’d likely choose to live in a two-bedroom townhouse over a tent, if what you need is a lightweight operating system, containers aren’t your best option.

So I think this is true for Docker but doesn't really do justice to other container implementations such as FreeBSD jails and Solaris / Illumos zones. Because those containers are really just lightweight operating systems.

In the end Docker started out and was designed to be a deployment tool. Not necessarily an isolation tool in all aspects. And yeah, it shows.

  • belter 5 years ago

    I cannot agree more. It is the saddest thing that the appalling implementation of Docker, and the whole lack of security around the ecosystem, made people think containers equal Docker. Docker is what happens when you put your security implementation in the hands of your developer team and not in the hands of your DevSecOps people.

  • nyx__ 5 years ago

    Criticism applies to all Linux containers, not just Docker, which is one implementation of Linux containers.

    One could argue that zones are distinct from containers (a Linux implementation), with both being OS specific versions of jails.

jb_gericke 5 years ago

Enjoyed the article, but having watched containerization and Kubernetes mature over the last 5 years (especially at an enterprise level), I'd say a huge part of the value proposition (and this applies more to K8s) is that it really catalyses prototyping/experimenting and (depending on the org, I suppose) promotes a lot of autonomy for app teams who'd historically have to log calls to infrastructure to get compute, network/lb/dns, databases et al. built up before kicking the tyres on something. I've seen those types of things take months in large orgs. And then there's the inevitable drift between tiered environments that happens over time in richer operating environments (I've seen VMs so laden with superfluous monitoring and agentware they fall over all the time, while simultaneously being on completely different OS and patch versions from dev to prod). Containers provide immutability at the service layer, so I have confidence in at least having that level of parity between dev and prod (albeit hardly ever at a data or network layer).

rossmohax 5 years ago

I believe that the success of containers is not because of their lightweightness or other isolation properties.

Containers won dev mindshare because of the ease of packaging and distributing artifacts. Somehow it was Docker, not the VM vendors, that came up with a standard for packaging, distributing, and indexing glorified tarballs, and it quickly picked up.

  • throwaway894345 5 years ago

    > Somehow it was Docker, not the VM vendors, that came up with a standard for packaging, distributing, and indexing glorified tarballs, and it quickly picked up.

    IMO the important, catalyzing difference is that Docker containers have a standard interface for logging, monitoring, process management, etc., which allows us to just think in terms of “the app” rather than the app plus the SSH daemon, log exfiltration, host metrics daemon, etc. In other words, Docker got the abstraction right: I only care about the app, not all of the ceremony required to run my app in a VM. These common interfaces allow orchestration tools to provide more value: they aren’t just scheduling VMs, they’re also managing your log exfiltration, your process management, your SSH connection, your metrics, etc., and all of those things are configurable in the same declarative format rather than with some fragile Ansible playbook that requires you to understand each of the daemons it is configuring, possibly including their unique configuration file/filesystem conventions and syntaxes.
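
    Concretely (a trivial sketch of that interface), the app just writes to stdout/stderr and the runtime owns collection, metrics, and access:

      docker run -d --name web nginx
      docker logs -f web             # log exfiltration, no agent in the image
      docker stats --no-stream web   # metrics through the same interface
      docker exec -it web sh         # a shell without an SSH daemon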

  • aequitas 5 years ago

    > Somehow it was Docker, not the VM vendors, that came up with a standard for packaging, distributing, and indexing glorified tarballs, and it quickly picked up.

    Packaged VMs already existed for a while, with things like Vagrant on top; there was also LXC, which leaned more into the VM concept. Where Docker made the difference, imho, is with Dockerfiles and the layered/cached build steps.

  • enw 5 years ago

    > glorified tarballs

    Calling container images glorified tarballs is like calling cars glorified lawnmowers.

    • brazzy 5 years ago

      Um...a container image is very literally and exactly a collection of tarballs and some JSON files with metadata.

    • kjeetgill 5 years ago

      Maybe closer to calling a 16-wheeler a glorified minivan. I think there's something meaningful in that comparison.

nickjj 5 years ago

IMO comparing containers to an apartment is more accurate than a tent.

Because with an apartment each tenant gets to share certain infrastructure, like heating and plumbing, from the apartment building, just like containers get to share things from the Linux host they run on. In the end both houses and apartments protect you from outside guests, just in their own way.

I went into this analogy in my Dive into Docker course. Here's a video link to this exact point: https://youtu.be/TvnZTi_gaNc?t=427, that video was recorded back in 2017 but it still applies today.

fsociety 5 years ago

This is oversimplifying containers and VMs by using the house vs. tent analogy. Just talking about Docker weakens this too, because Docker is not the only way to set up containers.

> Tents, after all, aren’t a particularly secure place to store your valuables. Your valuables in a tent in your living room, however? Pretty secure.

Containers do provide strong security features, and sometimes the compromises you have to make hosting something on a VM vs. a container will make the container more secure.

> While cgroups are pretty neat as an isolation mechanism, they’re not hardware-level guarantees against noisy neighbors. Because cgroups were a later addition to the kernel, it’s not always possible to ensure they’re taken into account when making system-wide resource management decisions.

Cgroups are more than a neat resource isolation mechanism, they work. That's really all there is to it.

Paranoia around trusting the Linux kernel is unnecessary if at the end of the day you end up running Linux in production. If anything breaks, security patches will come quickly, and the general security attitude of the Linux community is improving every day. If you are really paranoid, perhaps run BSD, use grsec, or, the best choice IMO, use SELinux.

If anything, you will be pwned because you have a service open to the world, not because cgroups or containers let you down.

nijave 5 years ago

Modern containers do provide lots of security features with namespaces, seccomp, cgroups (to some extent)

The author seems to largely ignore this. I would consider that a bit stronger than a "tent wall". Comparing it to a tent seems more akin to a plain chroot.

If I have my tent right next to someone else's, I can trivially "IPC" just by speaking out loud, which would be prevented by an IPC namespace (Docker's current default container setup).
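
That boundary is easy to demonstrate (a sketch; container names are arbitrary, and it assumes util-linux's ipcmk/ipcs, which the debian image ships):

  docker run -d --name a debian sleep infinity
  docker run -d --name b debian sleep infinity
  docker exec a ipcmk -Q   # create a SysV message queue inside a
  docker exec b ipcs -q    # b sees nothing: private IPC namespace by default
  # sharing must be opted into explicitly
  docker run -d --name c --ipc=container:a debian sleep infinity
  docker exec c ipcs -q    # c sees a's queue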

Also worth mentioning: turning a container into a VM (for enhanced security) is generally easier than trying to do the opposite. AWS Lambda basically does that, as do a lot of the minimal "cloud" Linux distributions that just run Docker with a stripped-down userland (like Container Linux and whatever its successors are).

  • throwaway894345 5 years ago

    I’m a big proponent of containers, but in fairness to TFA, I don’t know how to configure namespaces, seccomp, or cgroups, and I don’t know what settings my orchestrator uses by default. If containers can be secure but we don’t enable those security features properly, then it’s a bit of a moot point. That said, I think (but am not sure) most of us understand enough not to trust containers for isolation between untrusted processes, so I don’t regard containers as lightweight VMs, but rather as collocated processes with their own namespaces. When I run untrusted code, like jupyterhub, I make sure those untrusted containers get scheduled onto their own dedicated node pool with single tenancy (at which point the container is more of a tooling/orchestration convenience than a resource optimization tool).

eVeechu7 5 years ago

>Finally, there’s the whole business of resource isolation. While cgroups are pretty neat as an isolation mechanism, they’re not hardware-level guarantees against noisy neighbors. Because cgroups were a later addition to the kernel, it’s not always possible to ensure they’re taken into account when making system-wide resource management decisions.

I don't think virtualization really offers hardware-level guarantees against noisy neighbours either.

  • amarshall 5 years ago

    VMs provide stronger guarantees for maximum CPU, network, and disk usage, as well as memory consumption. This is because the abstraction boundaries are fairly clear (e.g. threads and virtual devices).

  • habeebtc 5 years ago

    It offers the opportunity to throttle noisy neighbors in hopes the party isn't too big.

encryptluks2 5 years ago

Starts off saying VMs are like brick and mortar houses and containers are like tents.

    I agree somewhat, but there has been significant progress in sandboxing containers with the same security we'd expect from a VM. It isn't a ridiculous idea that VMs will one day be antiquated, but that probably won't happen for a few more years.

  • kasey_junk 5 years ago

    Do you have any links to secure container runtimes that don’t either virtualize or replace all the system calls of the container such that it might as well be virtual?

    • encryptluks2 5 years ago

      First, saying it might as well be virtual is a bit of a misnomer. There are various options, and although they may act like a VM they are significantly faster than machine-based VMs like QEMU:

      https://kubernetes.io/docs/concepts/policy/pod-security-poli...

      > As of Kubernetes v1.19, you can use the seccompProfile field in the securityContext of Pods or containers to control use of seccomp profiles.

      If you're looking for a more general abstraction, there is gVisor and others as well.
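
      A minimal sketch of that field in use (the pod name and image are placeholders):

        # pod.yaml: opt the whole pod into the runtime's default seccomp profile
        apiVersion: v1
        kind: Pod
        metadata:
          name: seccomp-demo
        spec:
          securityContext:
            seccompProfile:
              type: RuntimeDefault   # or Localhost plus a custom profile file
          containers:
            - name: app
              image: nginx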

      • kasey_junk 5 years ago

        Again, not an expert, but security policies aren't immune to container breakouts, right?

        Which leaves you to either use something like firecracker or gvisor which are either virtualization solutions or the next closest thing in that they intermediate all of your syscalls?

    • viraptor 5 years ago

      We can't answer that question because "secure container runtime" is not a well-defined idea. Secure from what, in what way, with what guarantees? Docker is both secure and not, depending on how you draw the lines.

    • adolph 5 years ago
      • kasey_junk 5 years ago

        Singularity is likely* less secure than default container runtimes.

        *not a security person or an expert on singularity but it advertises that it doesn’t do file system or user isolation by default

    • jb_gericke 5 years ago

      Pod security policies and seccomp for call filtering at an OCI level

  • effie 5 years ago

    You can't make Linux kernel isolation of processes as secure as Xen or Firecracker or seL4 can. Yes, processes can be restricted to a subset of syscalls and system resources, but Linux is just too big and its attack surface is too large to put it on the same level of confidence as the above hypervisors.

  • mercora 5 years ago

    I don't think that is necessarily the case; instead I believe that in the near future the differences between container sandboxes and virtual machines will become less clear.

    I imagine CPU and memory namespaces being implemented on top of hardware isolation features like VT-d IOMMUs and the like, thus integrating virtual machines into some sandboxing feature.

znpy 5 years ago

> We don’t expect tents to serve the same purpose as brick-and-mortar houses—so why do we expect containers to function like VMs?

Marketing. Because of Marketing.

0xcmoney 5 years ago

Am I the only one getting tired of people stating confidently that containers don't improve safety _at all_ because they run on the same kernel? It's just not true.

Helmut10001 5 years ago

I place all my tents in a house (Docker containers inside unprivileged LXC containers on Proxmox - yes, unprivileged = not a brick house, more like wood).

The only reason I use Docker is that I can access the system design knowledge that is available in docker-compose.yml's. Last example: Gitlab. I could not get it running on unprivileged LXC using the official installation instructions; with Docker it was simply editing the `.env` and then `docker-compose up -d`. All of this in a local, non-public (ipsec-distributed) network. I often find myself creating a single, separate unprivileged LXC container->Docker nesting for each new container, because I do not need to follow the complicated and error-prone installation instructions for native installs.

unethical_ban 5 years ago

Container tech can be used for small-scale "pet" deployments, but my understanding is that the true benefit of containers comes from seeing them as "cattle".

You should never login to the shell of a container for config. All application state lives elsewhere, and any new commit to your app triggers a new build.

If that's not for you, then while containers like proxmox/LXC can still be handy, you're just doing VM at a different layer.

The article was a bit hand-wavey about how "they" complain about containers, and then used the analogy more than it explained the problems and solutions.

  • foobar33333 5 years ago

    >You should never login to the shell of a container for config

    I absolutely do this and think it works great. Fedora has built a tool called "toolbox" which is basically a wrapper on podman which can create and enter containers where you can install development tools without touching your actual OS.

    I basically do all of my development inside a container where the source code is volume mounted in but git/ruby/etc only exist in the container.

    This has the benefit of letting me very quickly create a fresh env to test something out. Recently I wanted to try doing a git pull using project access tokens on gitlab and containers let me have a copy of git which does not have my ssh keys in it.

    This is somewhat of an edge case though, for a server deployment, yes you shouldn't rely on anything changed inside the container and should use volume mounts or modify the image.
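
    For reference, the flow looks something like this (names are arbitrary; note that toolbox shares $HOME by default, while a plain podman run can mount in only the source tree):

      # Fedora toolbox: a mutable pet container for development
      toolbox create dev
      toolbox enter dev
      sudo dnf install -y ruby git   # lands in the container, not the host

      # plain podman: mount only the source, keep ssh keys and the rest out
      podman run --rm -it -v "$PWD":/src:Z -w /src docker.io/library/ruby:3.0 bash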

  • Sparkyte 5 years ago

    The only time it should be utilized as a small-scale "pet" is when you externalize the storage mounts and it is an on-demand, non-persistent virtual environment not directly connected to a data-sensitive environment. That's mainly my take on it. I will often use Docker locally to test out some Kubernetes service connectivity, but never bring my Frankenstein stuff into an environment any higher than local.

leephillips 5 years ago

I’ve found systemd-nspawn useful. Use debootstrap to install a minimal Debian system inside your system, then boot it with this command. It isolates the filesystem while sharing the network interface, and is convenient for most things that I guess people use Docker for. I wonder why it’s not mentioned more often.

  • dijit 5 years ago

    I know it sounds like I want to be spoonfed, but do you have a walkthrough of this flow?

    I'd be interested in trying it out but I don't want to spend some hours reading documentation trying to get it working.

    • leephillips 5 years ago

      Well, that’s OK, because it did take me a while to track down the pieces of the documentation and find a procedure that worked for me. There is some less-than-optimal advice out there about this.

      Become root.

      Install debootstrap, which is in the Debian and Ubuntu repositories, at least.

      Make a directory to contain your embedded system. It can be anywhere. Let's use /var/lib/machines/machinename.

      This command will install a new, minimal Debian system in that directory:

      debootstrap --include=systemd-container stable /var/lib/machines/machinename http://deb.debian.org/debian

      It will download everything and, if I recall correctly, works unattended (doesn’t ask questions).

      Enter the container with

      systemd-nspawn -D /var/lib/machines/machinename/

      and set the root container password with passwd.

      Then do

      echo 'pts/0' >> /etc/securetty

      so the guest OS will let you log in after it's booted up. You may have to add other pts/x entries. I'm not sure about this part; it may be that if there is no /etc/securetty file that there is no problem.

      Now log out of the container.

      To boot up the guest OS, use

      systemd-nspawn -b -D /var/lib/machines/machinename

      You will see the familiar console messages.

      You will find advice on the web to include the -U flag here, which causes files in the guest OS to only use UIDs known to the guest OS when determining ownership and permissions. This leads to headaches, because you have to set the ownership of any file you copy in from the host system. Leave it out, and you can have parallel users on the host and guest OSs, which is more convenient. But you may have to change the UIDs of the users on the guest OS so that they match.

      Now, on the host OS, you can use the `machinectl` command to control all your guest OSs. `machinectl list` shows you what’s running, `machinectl login` lets you log in to them, and there are several commands for killing them with various levels of violence.

      If you want your machine to be a long-running service, just `nohup` the spawn command, and direct output as desired.

      If you want to be able to communicate with your machine from the internet, opening sockets from within the guest OS works, as they share the network interface. For a public-facing web service, you can install (for example) Apache and pick a port number to listen on, then set up a reverse proxy on the host OS, using a dedicated domain or a subdomain, so the users don’t have to use the custom port number. I’ve found that certificates for HTTPS need to be installed on both the host and guest OSs.

      Good luck!

      More information: https://wiki.debian.org/nspawn

lamontcg 5 years ago

containers are fat RPMs that construct a chroot jail.

smitty1e 5 years ago

I explain that if an Amazon Virtual Private Cloud (VPC) is a datacenter "cloud", then a container implementation is a "puff".

Virtualizing the kernel, like the Amazon Machine Image (AMI) virtualizes a chip core, sounds great. But now, in the "puff", all of those networking details that AWS keeps below the hypervisor line confront us: storage, load balancing, name services, firewalls. . .

Containers can solve packaging issues, but wind up only relocating a vast swath of other problems.

  • zaphirplane 5 years ago

    I have to say, inventing a new metaphor for containers made it harder to follow the point you were trying to make.

  • wmf 5 years ago

    If you have few enough containers you can give each one an ENI.

srg0 5 years ago

If a VM is like a nuclear war bunker, containers are like brick-and-mortar houses. They are not airtight and have glass windows which can easily be broken, but that's where most people live. They're cheaper to build, comfortable enough, and most of the time last a human lifetime.

An analogy can go a long way. Both ways.

luord 5 years ago

I've seen the problems of treating containers as houses, primarily during development: Multiple different processes inside a single container, with a wrapper around them (inside the container) that makes it even more difficult to debug.

So, assuming I understood correctly, treating them like tents is infinitely the better choice.

Sparkyte 5 years ago

I was totally expecting this to go in the direction of tech debt with a homelessness analogy, but it was about destructibility. Yes, we know this already, and if you catch people treating a container as a persistent host, slap their hands and say no.

johbjo 5 years ago

I'd be curious to see services designed to run as PID 1 inside containers, and contain or run nothing else other than the required binaries. Maybe someone is doing this.

1MachineElf 5 years ago

You had me at containers

eyeyeyerg 5 years ago

Containers are cattle, VMs were pets. If one does not get the operational differences, nor understands that these are two completely different use cases, then one probably should not work in the IT industry.

  • detaro 5 years ago

    VMs can be cattle. Physical machines can be cattle. This is not tied to the runtime technology, but to how you design and manage your machines and applications.
