Docker Desktop: Your Desktop over ssh inside of a Docker container

blog.docker.io

133 points by rogaha 13 years ago · 82 comments

DannoHung 13 years ago

If you're sort of confused as to what advantage there is to this way of doing things over just running a VM in VirtualBox or using Vagrant, you probably aren't yet aware of what the Docker project is doing.

It's creating the VirtualBox of Linux Containers. Docker image files are extremely lightweight compared to VirtualBox images, and they use union filesystems to allow for complete isolation rather than using VM volumes.

An example scenario for when you'd want something like this: you want to load an experimental library for a specific application, but some part of your system depends on the stable version. Fire up a Docker image for just that application with the experimental library replacing the stable one, and only the applications inside the Docker image will see it. No need to even play around with library versions or links. And since Docker images are so lightweight and incur extremely little performance penalty (I think it is limited to just the cost of using the union FS over your normal FS), you can do this for dozens of scenarios at once.
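
As a rough sketch of that workflow with the early Docker CLI (the image, package, and application names here are made up for illustration):

  docker run -i -t ubuntu /bin/bash        # start a throwaway container
  apt-get install libfoo-experimental      # (inside it) swap in the experimental library
  exit
  docker ps -a                             # note the container id
  docker commit <container_id> myapp-experimental    # snapshot it as a reusable image
  docker run myapp-experimental /usr/bin/myapp       # only this container sees the new library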

  • zobzu 13 years ago

I'm confused about the advantage over "pure" LXC and a couple of scripts for the mounts: what does it provide for this kind of usage? Or is it not using LXC, and basically implements its own interface to the Linux namespaces? (That'd actually be cool... :P)

    • DannoHung 13 years ago

Docker combines LXC with a few other isolation and security technologies. What the Docker maintainers are also doing is setting up a system for distributing LXC-based images. Beyond this, since Docker works with these union filesystems, it also lets you build on top of other images.
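
      For instance, a minimal Dockerfile that builds on another image might look like this (package choice illustrative); docker build then produces a new image whose layers stack on top of ubuntu's via the union filesystem:

        FROM ubuntu
        RUN apt-get -qq update
        RUN apt-get -qq install -y openssh-server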

Eventually, there may be some way to merge images together, though I imagine that will always be a little hairy compared to a simple stack-up.

      Documentation is more than a little sparse right now, unfortunately. It took me a few days to figure out how all the pieces work together.

      • eldondev 13 years ago

Can you be specific about the other security methodologies Docker rolls in? Everywhere I read, people say "LXC != VM-level security"; specifically, I hear that root on the container means root on the host. These SUSE guys at least say "If you want to be secure, KVM is still your answer": http://unixcal.com/s/a4mn . Thoughts?

        • jpetazzo 13 years ago

          Root on the container doesn't mean root on the host. Machine-level virtualization has received more scrutiny than LXC, so as of today, many people consider traditional VMs to be more secure. But KVM or Xen are not intrinsically more secure than LXC or OpenVZ. They all have their histories of exploits and privilege escalations.

One key thing is that it makes sense to run containers without root privileges (greatly improving security), while it is much harder to realistically run a VM without root processes. As a result, it could be said that containers are much safer, because before even thinking about breaking out of the container, you have to work on a root exploit - on a system which, in essence, only runs the processes that you really need, and has a much smaller attack surface.

We're working on a more elaborate answer, to be included in the Docker docs!

    • shykes 13 years ago

      Docker does use lxc under the hood. They serve very different purposes.

      lxc is a tool for sysadmins to deploy and configure virtual servers on their machines.

      docker is a tool for developers to package their application into a deployable object without worrying about how the sysadmin will deploy it, and for sysadmins to deploy applications without worrying about how they were packaged.

      When you tinker long enough with lxc, eventually you start building something like docker on top of it, because it just makes sense. Now instead of reinventing the wheel you can just use docker.

    • FooBarWidget 13 years ago

      It's a wrapper around LXC (and a couple other things) to make it usable for mere humans. With LXC there's still too much low-level fiddling work to do. Docker provides everything you need to create and use containers easily and quickly. It downloads images for you. It creates bind mounts for you. It sets up networking in the container for you. It sets up certain IP forwarding rules for you. Etcetera.
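
      Concretely, all of that fiddling collapses into a couple of commands (image name illustrative):

        docker pull ubuntu                        # fetch the image and its layers
        docker run ubuntu /bin/echo hello world   # mounts, networking, and forwarding handled for you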

  • rogahaOP 13 years ago

    Great explanation! Thanks DannoHung

beachstartup 13 years ago

> root@host:~# curl http://get.docker.io | sh

No no no. Do NOT do this. Kids these days...

  • tlrobinson 13 years ago

    Honest question: do you audit every line of code you ever download and execute?

    Edit: Ironically Docker itself has the potential to help solve the problem of running untrusted open source code. I think every open source project should include either a Dockerfile or Vagrantfile to help users get up and running quickly, and safely run untrusted projects.

    • WestCoastJustin 13 years ago

On my Linux machines, generally, I'll only install software from trusted sources, say RPM repos or Ubuntu sources. In this case, yes, I would review the source of this file before running it.

For example, I would notice that it requires apt-get:

        echo "Ensuring basic dependencies are installed..."
        apt-get -qq update
        apt-get -qq install lxc wget bsdtar
      
Then it downloads some binary into /usr/local/bin; at this point, I'd probably configure a VM to review this further.

      • tlrobinson 13 years ago

        But would you review the source of Docker too?

        • WestCoastJustin 13 years ago

I think I get where you're going with this: the burden of reviewing the source of everything you run is so high that it almost seems like a waste of time. If that is your point, then I agree with you.

          I don't know much about docker, but doing a "curl | sh" peeks my interest, then downloading additional binaries into /usr/local/bin, I'd want to take a closer look. Obviously, this is a case by case review, till it sits well with me. If I was going to run this in production, I'd want to have a really good idea of what this was doing and what to expect, so I'd probably take a closer look at the source if it was not clear from the documentation.

          • JasonFruit 13 years ago

Completely non-judgmentally, it's "piques my interest", so you can spell it right next time. (I have the opposite problem: I spell things right and say them wrong.)

            • mgkimsal 13 years ago

              I met someone who pronounced "queue" as "kway", as in "I need to go clear my mail kway".

              That is all.

        • zobzu 13 years ago

          Do you reverse engineer your CPU's schematics?

          This can go on pretty far - trust is always an eventually unsolvable issue.

There is a certain level of trust that is easy to achieve and easy to get. Trusting dotCloud is easier than trusting that everyone on the internet is a pink bunny. It happens that HTTPS and signing aren't exactly hard, either ;-)

    • jlgreco 13 years ago

      Why don't we just add "click to execute" to browsers while we are at it... /s

      • tlrobinson 13 years ago

        Because people who use command lines / open source software generally have better judgement about this sort of thing than the average user?

You either have to trust that Docker (a fairly well-known project built by reputable people) isn't going to root your machine, or download the source yourself and audit it.

        This is no worse than suggesting you "git clone whatever; cd whatever; make" (aside from the lack of SSL)

        • jlgreco 13 years ago

          > You either have to trust Docker

          ...and everybody else on my network, with that method. Doing that I don't even get the chance to think "Hey wait a second, why was this only 50 bytes of shell script...".

          The reason that you see outrage for this "method" is because it is born of laziness and far too reminiscent of more disturbing times in computer security.

          • tlrobinson 13 years ago

            The original poster didn't say his issue was with the lack of HTTPS so I assumed he doesn't approve of this technique in general, but yes, I agree HTTPS should be used.

        • ics 13 years ago

          > Because people who use command lines / open source software generally have better judgement about this sort of thing

Why do we need an instruction on downloading the source to begin with? It really just promotes bad habits among those who know no better, i.e. new/inexperienced developers. The problem is when people see instructions like that on 20% of the guides they read in earnest, trusting that everything is OK if enough people say it. One hopes they stumble upon a discussion like this so that they can consider the consequences, but that just isn't going to happen for everyone. True, one should exercise equal caution while cloning, gem-ing[1], etc. It would be great if authors would just link to the source and paste the relevant lines from the README if necessary.

          [1] http://andre.arko.net/2013/03/29/rubygems-openssl-and-you/

    • jol 13 years ago

I don't, but for an install script one can at least look at where the stuff will come from. If only the download link were using SSL...

      • tlrobinson 13 years ago

I agree, they should use SSL (and not use a URL shortener; they don't here, but I've seen that before).

Ideally it would download a file from GitHub too; that way you can be sure it's coming straight from the publicly visible open source repo, and you can audit it if you want.

        But I think the general outrage over this technique is overblown.

  • burke 13 years ago

    One alternative would be a graphical installer that asks for your root password. It would very likely also be served over unencrypted HTTP. This happens all the time, and HN never calls anyone out on it.

    How is this different, other than a graphical installer being completely unauditable, whereas curl|sh is quite trivially auditable? Both run code as root.

  • beaumartinez 13 years ago

    You can curl the URL and see what it is you'll be executing. You don't get that with an obscure binary you download from the web.

    But the URL should be HTTPS.

  • Groxx 13 years ago

    Of course, because binaries are incapable of doing the same thing as `curl x | sh`...

  • antocv 13 years ago

    This also sucks because the scripts always assume some variant of Ubuntu or Debian. Um, no, thank you, damn hipsters.

    • slashdotdotorg 13 years ago

Exactly _what_ is your qualification for Debian being lumped in with hipsters? Some of us have used it as the most rock-solid STABLE Linux distro for servers and desktops for quite a long time.

      • peatmoss 13 years ago

        He's an avid Yggdrasil user.

As an aside, I miss some of the raw diversity that was present in the old Linux distros. Slackware was my drug of choice due to its steadfastly BSD flavor. I guess Slackware is still around, but I have no idea what its status is and whether Patrick ever moved it over to System V-ish conventions in order to be more like other Linux distros. I guess that distinction is even a bit anachronistic given all the fancy changes to the way init is done nowadays.

      • antocv 13 years ago

        I apologize to Debian users. Respect.

      • aclevernickname 13 years ago

        I have debian bo on vinyl.

    • shykes 13 years ago

      The website offers install instructions for several OSes: http://docs.docker.io/en/latest/installation/

    • mateuszf 13 years ago

On Arch Linux installation is as simple as:

        chosen_aur_wrapper -S lxc-docker-git

    • throwaway2048 13 years ago

      drivebyacct2 you are hellbanned, time for drivebyacct3 i guess

  • rogahaOP 13 years ago

    You can also install using this procedure: http://docs.docker.io/en/latest/installation/binaries/

  • txutxu 13 years ago

I think there is more danger in HTML5 dynamic fonts, or more evil in a DNS request, than in an open source project's installer.

    Of course, don't do this on your most beloved production machine; package it properly, test it, etc., if you can.

    But rendering a font gives execution with your user, so don't be so afraid of an installer "you can read" that has an interesting purpose.

  • anonymoushn 13 years ago

    This is how rvm's authors want you to install rvm :)

  • malandrew 13 years ago

    Is there any tool to automate the introspection of curl pipes to warn of potentially malicious code that needs to be given further attention?

    The usability of the curl pipe approach is here to stay, so the least we can do is help people be safe with it.

    Anyone have other ideas for making curl pipes safer?

    • RodgerTheGreat 13 years ago

      Well, detecting potentially malicious shell scripts is merely a matter of solving the halting problem...

      Food for thought: http://www.cs.dartmouth.edu/~sergey/langsec/

      • malandrew 13 years ago

Would there be any benefit in creating a VM on the fly, running the shell script in the VM, and then reporting back on what was modified by the shell script? If all goes well, I reckon you could then safely run the script on the host machine.

        • RodgerTheGreat 13 years ago

          Even if you can be bothered to semi-manually audit the changes a script applies to the VM and can afford the time and space overheads of such a "guess-and-check" approach, a malicious server could send you a different script the second time you requested it, or the script could in turn pull down other payloads differently the second time it executed. If you try to extract a diff of the changes applied to the VM and then reapply it to your host machine to ensure the behavior is the same, why not simply have an installer system which behaves in a more restricted way to begin with? The root of the problem is that shell scripts fetched from remote servers are far too flexible to be 'safe'.

        • tlrobinson 13 years ago

          Or... use Docker!

          Seriously, Docker is perfect for creating a sandbox with all dependencies to help new users get up and running quickly and safely. Every project should come with a Dockerfile and/or Vagrantfile.
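
          As a hedged sketch of that sandbox idea, the untrusted installer runs inside a disposable container (image name illustrative; the URL is the one from the article):

            docker run -i -t ubuntu /bin/sh -c 'curl http://get.docker.io | sh -x'   # -x traces each command; changes stay in the container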

        • vidarh 13 years ago

          ... except when someone writes a script that guesses (or reliably detects, depending on container technology) whether it's running in a VM/container and acts differently then. Or if it only acts maliciously say, one out of five times ("old school" viruses would often do that - destroy your floppies sometimes, but most of the time just spread).

        • gcr 13 years ago

          Sounds like a great application of Docker, come to think of it. I'm sure it's quite possible to spin up a new docker VM from a shell script and do exactly that.

          Hm, the only problem is installing Docker in the first place ...

    • jol 13 years ago

I am not very good at the unix command line, but maybe it could be done like this:

        curl http://get.docker.io > /tmp/docker-install && sh /tmp/docker-install && rm /tmp/docker-install

      That gives a quick and easy way to inspect the source before running it: 1) execute just the first part, curl http://get.docker.io > /tmp/docker-install; 2) then inspect it, e.g. cat /tmp/docker-install; 3) run the install with sh /tmp/docker-install && rm /tmp/docker-install. Or, if not reviewing, run the whole thing at once.

P.S. I know that one should not copy-paste directly from the browser to the console, and this method leaves a file in /tmp on installation failure.

    • darkarmani 13 years ago

      > The usability of the curl pipe approach is here to stay, so the least we can do is help people be safe with it.

      I don't think this is true. It's just going to take a very popular project getting their DNS hijacked before everyone wakes up.

      Well, you are right about the usability part. That part is amazing. It's just that curl-pipe is not the answer.

  • rogahaOP 13 years ago

    I have updated the blog post with a more secure way of installing docker.

sciurus 13 years ago

IMHO here's an even cooler hack:

Gtk+, the widget toolkit used to develop GNOME and many free software applications, supports rendering applications via HTML5. One of the developers has demonstrated using it to run desktop applications on OpenShift, Red Hat's PaaS, that you then access via your web browser.

http://blogs.gnome.org/alexl/2013/03/19/broadway-on-openshif...

http://blogs.gnome.org/alexl/2013/04/03/more-gtk-in-the-clou...
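
For the curious, running a GTK3 app under Broadway is roughly as follows (display number arbitrary; by default the HTML5 UI is then served on localhost at port 8080 plus the display number):

  broadwayd :5 &                                        # start the HTML5 "display server"
  GDK_BACKEND=broadway BROADWAY_DISPLAY=:5 gtk3-demo    # run any GTK3 app against it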

ivansavz 13 years ago

This could be made VERY interesting if you also add an NX server into the mix. I find basic X11 connections via ssh to be rather laggy and unpleasant to use when the internet connection is not top-notch.

The idea behind NX is to "fake" an X client on the server side and an NX server on the client side. This reduces the number of roundtrips required for each action. The improvement in responsiveness is dramatic: even on a low-speed, high-latency link, using the remote desktop feels like a local machine...

    http://en.wikipedia.org/wiki/NX_technology

Unfortunately, the two open source projects which aimed to reproduce the NX functionality seem to have been abandoned.

    http://freenx.berlios.de/
    http://code.google.com/p/neatx/source/list

Is anyone using NX these days? Perhaps people stopped developing these because they already work well?

  • sciurus 13 years ago

Take a look at xpra instead. The performance is much better than X11 forwarding when you don't have a low-latency connection.

    http://xpra.org/
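
    A typical session looks something like this (host and user names hypothetical):

      xpra start :100 --start-child=xterm    # on the remote host: session on display :100
      xpra attach ssh:user@remotehost:100    # on the local machine: attach over ssh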

    • rogahaOP 13 years ago

Thanks for your reply! I tried xpra and it seems to be much better! It also fixed the issue with the keyboard getting messed up on Mac OS X. I will publish an update on GitHub soon!

    • rogahaOP 13 years ago

      Ok!

  • cpach 13 years ago

X2Go[1] is under active development and, to my understanding, it's based on the NX libraries. It might be a good alternative. I have only tried it briefly and not over WAN, but it worked quite well over WLAN at least.

    [1]: http://wiki.x2go.org/doku.php/start

  • morsch 13 years ago

We're using NX/x2go for working from home or, sometimes but fortunately not too often, from more remote places such as weekend holiday spots. It does work well. I wouldn't go as far as saying it feels like a local machine, and it's not as responsive as regular X over a fast LAN either. But it's very usable.

  • rogahaOP 13 years ago

I didn't know about NX! Thanks for suggesting it, Ivan! I will try it!

    • alyandon 13 years ago

Give x2go a try first (it's based on NX). In my experience it has been far less fiddly than FreeNX or neatx.

jol 13 years ago

I can see using this to get perfectly replicable, easy-to-upgrade/rollback, movable work environments, for both local and remote use. I.e., use it locally on a powerful machine, or RDP to the closest powerful machine you can access from a slow device. Or have several workspaces, similar to virtual desktops, for multiple projects...

j_s 13 years ago

I got excited when I saw the Windows installation instructions link, but that is just how to set up Vagrant with VirtualBox to host a Linux machine.

Is there any open-source equivalent to things like Citrix's XenApp, VMWare's ThinApp, Microsoft's App-V, or independent tools like Sandboxie? http://alternativeto.net/software/sandboxie/

  • rufugee 13 years ago

    We've been experimenting with Ulteo (http://ulteo.com/home/) as a possible alternative to XenApp.

  • rogahaOP 13 years ago

    Sorry, but for now it's the only way to install it on Windows. Thanks for asking j_s.

    • mmgutz 13 years ago

      But why Vagrant? It's an unnecessary dependency that requires installation of more stuff I don't use. Why not distribute a pre-built VBox image and torrent it?

  • jahewson 13 years ago

ThinApp, App-V, etc. are pretty much equivalent to a chroot jail on Unix.

gcb0 13 years ago

So, if I understood that correctly, it's just a VirtualBox-style image of Ubuntu or Debian that you run headlessly in a Linux container (via Docker), and then you run an X server on your actual machine's OS and connect to it via SSH with X forwarding?
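
In other words, something like this (container address and user hypothetical):

  ssh -X user@172.17.0.2 xterm   # forward a single X11 app out of the container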

How is this any better than simply running VirtualBox on your OS to begin with?

  • rogahaOP 13 years ago

Exactly. It's better because you can build that image anywhere Docker is installed, and it can be easily moved/upgraded and is ready to run. But if you think only locally, then there is not much difference, apart from Docker being lighter and faster.

    • yebyen 13 years ago

Further, the VirtualBox instructions are only for Windows users, to get Linux installed (which is a requirement of Docker). You don't need VirtualBox at all. But if you don't have Linux, you can try this with VBox (it's a virtualization tech that nests safely inside of VBox... unlike, say, VirtualBox inside of VirtualBox).

      • gcb0 13 years ago

If I already have Linux installed I can carry fat binaries and a kernel for chroot'ing an environment, all in a tar file... I think this is just the new way kids do the common things of yesterday. Or maybe Linux containers kick chroot's ass in performance?

        • yebyen 13 years ago

to me it's not about performance... it's about rigorous isolation. LXC is like FreeBSD jails, though there are things you can do with the cgroup and namespace stuff now that are impossible using jails, e.g. disk i/o accounting.

          in a jail, one user who attempts to monopolize disk io will succeed. in a cgroup, he can be restricted to exactly 10% of available i/o bandwidth, so you can guarantee that he doesn't starve the other containers.
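
          With the cgroup v1 blkio controller that looks something like this (device numbers and the cap are illustrative; note this sets an absolute bytes/sec ceiling rather than a percentage):

            mkdir /sys/fs/cgroup/blkio/container1
            echo "8:0 10485760" > /sys/fs/cgroup/blkio/container1/blkio.throttle.read_bps_device
            echo $PID > /sys/fs/cgroup/blkio/container1/tasks   # confine a process to the limit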

          there are also easy and documented ways to break out of a chroot if you are able to obtain root in the chroot. those holes are plugged by lxc and docker. Most notably, access to devices can be restricted.

I don't know what you mean by "carry fat binaries and a kernel for chrooting an environment" -- you don't need a separate kernel for chroot, any more than you need a separate kernel for Docker. There's no advantage to statically linked binaries (fat binaries?) when you can put the storage of your containers in a zpool or btrfs with deduplication. Same as your chroots.

          Try out docker. Read about cgroups. I first gave LXC a try a few years ago and I was really sad about the extent of support for creating guests and keeping them properly isolated. It was really not friendly at all. You basically had to commit to using kernel patches that made your system pretty unusable as a desktop. (Was that xen dom0 or lxc?)

Everyone was saying, "Ohh, LXC is no better than a chroot. It's insecure, easy to break out of." Not so much anymore; with the current state of Docker you don't even have to know all the advances in cgroups and namespaces.

          It's worth a look. Really, go check it out.

      • darkarmani 13 years ago

        And for Mac users.

willvarfar 13 years ago

Here's a recipe for using VNC to get pixels out of a Docker container:

http://stackoverflow.com/questions/16296753/can-you-run-gui-...
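
One common recipe along those lines runs a virtual framebuffer plus a VNC server inside the container (package names and geometry illustrative):

  Xvfb :1 -screen 0 1024x768x16 &   # headless X server inside the container
  x11vnc -display :1 -forever &     # export that display over VNC
  DISPLAY=:1 firefox                # any GUI app, rendered into the framebuffer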

VaucGiaps 13 years ago

Linux != Debian
