Ask HN: How do you manage your *nix binary package updates? (first post. eek)
We run clusters of machines and whenever there's an update via USN / DSA or whatever I end up manually patching each cluster with cluster-ssh.
This is less than ideal, but seems to work.
What do you do?
Note: I'm talking about binary packages distributed by your OS (apt upgrades / rpms), not config files (hi, Puppet/Chef) or deprec for Capistrano-style stuff.

I used to manage around 300 servers myself. The only way it was possible was to have a completely stripped OS. Every app we used was installed under /apps/<appname>/<app version>
example: /apps/perl/5.8.12
Then I would symlink /apps/perl/5.8.12 to /apps/perl/current. The profiles on the machines would add /apps/*/current/bin to the PATH.
This allowed upgrades and rollbacks just by changing the symlink to whichever version I wanted to be current. It also let me push out versions of software ahead of time and then just change the link when we were ready to use them. Each machine would rsync /apps from a master distro nightly, and of course I could force it with a for i in `cat hosts.list`...

This sounds a lot like what GoboLinux is doing [1] with its filesystem/package manager.
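A minimal sketch of that layout, for illustration only; the paths, version numbers, and hostnames below are made up, not taken from the original setup:

    # Stage a new Perl build next to the current one (nothing uses it yet).
    rsync -a buildhost:/build/perl-5.8.12/ /apps/perl/5.8.12/

    # Cut over by repointing the "current" symlink; rolling back is the same
    # command with the previous version number.
    ln -sfn /apps/perl/5.8.12 /apps/perl/current

    # Profiles put every app's current bin directory on the PATH, e.g.:
    #   for d in /apps/*/current/bin; do PATH="$d:$PATH"; done; export PATH

    # Nightly (or forced) push of the whole /apps tree from the master to each host.
    for i in `cat hosts.list`; do
      rsync -a --delete /apps/ "$i":/apps/
    done

The -n flag on ln makes it replace the existing "current" symlink itself rather than creating a new link inside the old target directory.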
Did you ever run into more trouble than it was worth with that approach? Making the filesystem the actual package database, rather than a snapshot of what should be installed, is quite tempting.

I use a variant of your approach: an apt proxy for the OS, and since we run VMware Infrastructure, a separate filesystem for each version of the app; we just "mount --read-only" the current version on /opt/appname.

Create a custom repository of your packages and point all the machines in your cluster to it? If it's CentOS or RHEL, just add a new repo to /etc/yum.repos.d. If you are upgrading packages that also come from the system repos, just set the priority of your repo to be higher. Of course this implies that you have successfully rpm-ified your packages; we did that with all our packages and configuration.

If you run the Red Hat family (Fedora, CentOS, SL, whatever else) you could also run Spacewalk. Machines periodically check in and get the config files and packages they need, and if you configure it to do so you can also push. At my last job we ran Spacewalk on our Koji box; Koji will let you build packages that you can then put in your repo to yum install.

+1. Yum, apt, and the like are awesome for individual machines, but they're pull-based. Spacewalk and RHN are centralized push, which you need for any decent number of machines.

I was at devopsdays in Mountain View, CA a week ago, and one of the panel discussions was on package management. I was a bit surprised to see it as a topic, as there are numerous known ways to solve the problem. It turns out that's exactly the problem: the topic was hotly debated, and yes, there are endless possible ways to distribute packages across a large environment. I think choosing a pattern involves deciding for yourself how secure/auditable you need your environment to be and how tightly you want to couple your deployment process to your current architecture (i.e., some package managers only work on some systems). That will narrow your choices down to a handful, and then you get to dig into the implementation details and decide from there.

Isn't this what people run their own apt mirrors for?

Yep. At my work we have an apt repository; we push packages to it, then issue an apt-get update on all the machines.

Hopefully an upgrade too, as an update alone wouldn't do anything.

So, do you guys not know what you're talking about, or is there some legitimate reason for downvoting this? Please, by all means, `update` to your heart's content with aptitude. It won't do a damn thing, but OKAY.

Yes, I meant upgrade; I mistyped. I assume the reason for the downvoting is that it was a petty point and it was clear what I meant.

For what it's worth, I meant it as a tongue-in-cheek sort of thing. I wasn't really criticizing. Sometimes I come on HN and am more conversational or joking than I ought to be. Anyway, I figured you meant as much. Sorry!

Perhaps Murder would work? See: http://engineering.twitter.com/2010/07/murder-fast-datacente...

We manage our own dists. We package our own software as .debs, so everything gets managed the same way. All security updates, release deployments, rollbacks, etc. are managed with apt-get. I don't exactly understand pinning, but it's also important to how we manage packages. Depending on the release and the kind of server we're deploying to, we may do them all in one night or in batches of a few hundred over a week. All our boxes install security updates regularly (because we promptly add security updates to our dists).
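As a rough illustration of the custom-repo suggestion above: the repository name, URLs, and key path here are hypothetical, and the priority line assumes the yum-priorities plugin is installed (with that plugin, a lower number takes precedence over the stock repos):

    cat > /etc/yum.repos.d/internal.repo <<'EOF'
    [internal]
    name=Internal packages
    baseurl=http://repo.example.com/centos/$releasever/$basearch/
    enabled=1
    gpgcheck=1
    gpgkey=http://repo.example.com/RPM-GPG-KEY-internal
    priority=1
    EOF

After that, a plain yum -y upgrade on each host will prefer packages from the internal repo over the system ones.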
I run a Squid caching proxy and direct all my (Fedora or RHEL) systems to use it, along with a non-mirrorlist repository that is fairly close to me. I'm not totally sure what your question is, though. If you want to tightly control the packages that get updated, there isn't much to be done other than manually managing your own repo. Although if it's this big of a concern, you should probably be running a distribution that is less volatile than Ubuntu, say RHEL, CentOS, or Debian.

You can do two things:

1. Mirror a yum repository plus updates. Each day, create a hardlinked directory of updates named reponame.YYYYMMDD. On the hosts you wish to update, sed -i the new day into your yum repo config file and run yum -y upgrade. When you know it works, do it on the other hosts too (sketched at the end of the thread).

2. Use a configuration management tool like Puppet.

For an infrastructure your size you probably want the first.

Already using Puppet, and no yum here; it's Debian/Ubuntu ;) (thanks though)

Corporate repository to which official packages migrate upon certification. This usually takes about 24 hours.

I've heard good things about MCollective for distributed package updates:
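Separately, here is a rough sketch of the hardlinked-snapshot idea from option 1 above; the mirror path, repo file name, and date are hypothetical:

    # On the mirror host: snapshot today's updates tree as hardlinks (cheap on disk).
    cp -al /srv/mirror/updates /srv/mirror/updates.20110615

    # On a canary host: point its repo config at the new snapshot and upgrade.
    sed -i 's/updates\.[0-9]\{8\}/updates.20110615/' /etc/yum.repos.d/updates.repo
    yum -y upgrade

    # If the canary looks healthy, run the same sed + upgrade on the rest of the fleet.

Because the snapshot directories are dated, rolling a host back is just another sed pointing it at the previous day's directory.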