Docker Hack Day Highlights
From blog.runkite.com: "Sebastian’s team needs docker for a simple reason: they build Debian packages for anything they send to production. But they don’t want to SSH into a build server, clone the git repo, build the package, then copy & upload it. This just takes too long and is annoying."
I hope I'm misunderstanding this. Instead of improving their central build infrastructure, they gave up on it and have the developers produce builds for production on their individual computers? That seems like a step backwards to me. The benefits of continuous integration are well-known; that's the logical place to produce your artifacts, whether they're debian packages or anything else.
Here are a couple of projects demonstrating how to build deb and rpm packages and publish them to repositories from Jenkins.
Nobody said we gave up on continuous integration. Quite the contrary! We use Jenkins and other CI tools on a daily basis.
Once we have tools to build debian packages easily in a controlled environment (containers), the CI server will be able to use the same tools to build and test the final package. The advantage here is that a developer can test the whole workflow and build local test/dev packages with the same tools and environment as the CI server.
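A minimal sketch of what such a throwaway build environment might look like as a Dockerfile (the image name, package path, and tool choices here are illustrative assumptions, not the commenter's actual setup):

```dockerfile
# Hypothetical Dockerfile for building a .deb in a fresh container.
# Base image and package directory are placeholders.
FROM debian:wheezy
RUN apt-get update && apt-get install -y build-essential devscripts debhelper
ADD . /build
WORKDIR /build
# Build an unsigned package; the same image can be reused by the CI server
# and by developers locally, so both get an identical environment.
CMD ["dpkg-buildpackage", "-us", "-uc"]
```

Because the container starts from the same image every time, a developer's local build and the CI server's build see identical dependencies, which is the parity the comment describes.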
I'm glad to hear I was reading too much into the summary. So you plan to use docker as a replacement for tools like pbuilder?
Yes, packages are built in a fresh container every time. It has a lot of functionality in common with pbuilder.
Of course, building packages is just one of many use cases. You also want to test the installation of the new package, run integration tests...
I am slightly embarrassed to say, even when I see all these folks say amazing things about how Docker made their lives easier, I still can't wrap my head around exactly what it is. (This when I KNOW I want jail-like isolation between many of my rails apps on one server)
Is it like a jail in BSD? Ok but then what are the chef-like recipes for? (the docker recipe-like stuff is leading me to think that)... I get the benefits of shipping containers, but what does this do for Quotas? for differing paths? How does it solve init.d vs Upstart vs launchd issues?
Also, on index.docker.io, how do I view these recipes? What do they look like?
Hopefully there's a youtube video out there where a speaker gets a little less abstract and more detailed about where these pieces fit in, actually looking at an SSH prompt on a server.
High-level overview: You get an image (an actual filesystem image). You "docker run" some command, which creates a container (container is to image what VM is to, well, disk image, the container is what runs). Any change to this container doesn't change the base image, but it's done on a separate one. You can commit at any point and "freeze" the container's image and get back to it. You can start and stop the container at any time, but you can only run one starting process in it (so you might want to run sshd or supervisord).
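The run/commit cycle described above can be sketched as a short command-line session (image and user names are illustrative, and `<container-id>` is a placeholder for whatever `docker ps -a` reports):

```
# Start a container from a base image and change it;
# the base image itself is untouched.
docker run -i -t ubuntu /bin/bash
#   (inside the container: apt-get install -y curl; exit)

# Find the stopped container's ID, then "freeze" its filesystem
# as a new image you can branch from later.
docker ps -a
docker commit <container-id> myuser/ubuntu-with-curl

# New containers started from the committed image
# begin life with curl already installed.
docker run -i -t myuser/ubuntu-with-curl /bin/bash
```

This is the "snapshots are free" property mentioned below: each commit is a cheap layer, not a full disk-image copy.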
That's pretty much the gist. It's a VM where snapshots are free, you can clone it at any point and branch off, and it's really lightweight to run. Basically, think of it as the child of VirtualBox and git.
I can't say I had exactly your problem. I was working with lxc for quite some time now and it's basically what is underneath docker.
What I really had a problem with was getting to know the docker way of doing things. It took a bit of time for me to get an idea of it, and I feel like I'm still not there yet (2-3 weeks working with it, with a lot of trouble getting it to work initially). If you keep going at it, it WILL dawn on you. Cool stuff.
Hey Antonse,
Does this presentation help you understand? http://www.docker.io/about/
I've seen this presentation and it's a great higher-level introduction to the benefits. The shipping container analogy is great. Where I'm stuck now is how that maps to "what do I type in SSH, and how is it modifying my server's filesystem?" - just the getting started phase. But this page (https://github.com/dotcloud/docker/wiki/Docker-external-reso...) seems to be really helpful.
Yeah, it's hard to find the balance between "big picture" explanation and more concrete examples...
This wiki page might help: it's full of various Docker resources: articles, tutorials, etc. https://github.com/dotcloud/docker/wiki/Docker-external-reso...
There's also this Pycon lightning talk: http://www.youtube.com/watch?v=wW9CAH9nSLs
This talk was brilliant by the way, well done on it. Really happy to see a lot of movement around docker, seems like it's really hit a nerve.
Interesting to see emacs on its lonesome in one of those boxes. How does that work? Emacs client into an exposed port or something else?
In your .emacs
(defun create-docker-terminal (buffer-name)
  (interactive "sshell name: ")
  (ansi-term "~/docker/launch.sh")
  (rename-buffer buffer-name t))

(global-set-key (kbd "C-c d") 'create-docker-terminal)
In your launch.sh
docker run -u user -i -t -v local-dir:dir-inside-container /bin/bash --rcfile your-rc-file
So you are just running docker from within emacs rather than the other way round?
Videos seem to be unavailable at the moment.
Hey, sorry about that.
The Youtube live video from last night doesn't seem to be available anymore, but according to http://www.docker.io/live/, a better version of the videos should be up by the end of the day. I'll fix the post.
Thanks!