Concourse CI Hits 1.0.0
I wish the documentation would offer a "Concourse made easy" guide or something.
I imagine the documentation is currently a work in progress. But starting, for example, from GitHub, it is extremely easy to go down a deep dark hole trying to figure out all the new terms: I guess I have to use BOSH; what is BOSH (itself a deep dark hole: uhh, stemcell? How does this relate to all the other similar tools? Why would I commit to using this new tool solely for the sake of Concourse?); now I have to learn a new, strange distinction between the otherwise similar English words "job" and "task," something about Garden (yet another hole to go down), and a new meaning for the acronym "TSA"; and for some reason the command-line utility is called "fly," which has no obvious relationship to the project name (what if the command to interact with git were "spudge"?). Can I please just install a .deb and run a service and add a build real quick to try this out?
This could be why it is so common to see Jenkins deployed everywhere. While it is poorly designed in so many frustrating ways, you can still set it up for real work in an hour or so, and you won't need a dictionary to translate a unique new jargon into English just to try it out. Every new CI that makes this even one bit harder than Jenkins isn't trying hard enough.
I'm curious to know how your flow through the docs went. We inlined "getting started" into the main page, and had the Setting Up section show "vagrant up" first, and we have warnings above sections that are not critical to know before learning Concourse itself, which then take you right to "Hello World". It's already optimized for "Concourse made easy" - but perhaps our navigation is not?
I should probably add a warning above the BOSH section though, or even the binaries section.
The trouble is, reality is complicated, regardless of whether it's documented or not. I've found that a lot of open source projects simply don't document things to give an illusion of simplicity; people just forget all the manual labor they had to do after downloading that .deb and installing it across 10 machines and maintaining those machines over time. Our docs cover BOSH as our preferred tool for clusters. If you don't want to use BOSH, just use the binaries and your own deployment tool.
We don't build .debs yet; as soon as we do, others will want RPMs, and others will want Docker images, so we built the binaries that will feed into those first, which is often what I look for when kicking the tires on something anyway.
There is a simple 'vagrant up' you can do that allows you to play with Concourse without needing to learn about all the other tooling such as BOSH. It's a great way to get started. Take a look:
Check out the VM setup here: http://concourse.ci/vagrant.html
Setting up your first pipeline: http://concourse.ci/using-concourse.html
This gets you going with a local Concourse instance that you can experiment with. You're right that it can feel like "turtles all the way down" (quoting @oblio from earlier today), but you do not need to jump in with BOSH right away.
In regards to how to think about 'job' vs 'task': I like to think of 'jobs' as a description of what the module is attempting to achieve. For example, in the Concourse team's own pipeline[1] you'll see a job named "deploy". 'Tasks', on the other hand, are the individual things that need to happen in order to complete the job.
-- 1. http://ci.concourse.ci/
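To make this concrete, here's a minimal, hypothetical pipeline snippet (the resource names and task file paths are made up) showing one job whose plan is composed of two tasks:

    # Hypothetical example: one job ("deploy") whose plan runs two tasks.
    resources:
    - name: my-repo                  # assumed git resource
      type: git
      source:
        uri: https://github.com/example/my-repo.git

    jobs:
    - name: deploy                   # the job: the outcome you care about
      plan:
      - get: my-repo                 # fetch the repo whenever it changes
        trigger: true
      - task: run-tests              # task 1: one step toward completing the job
        file: my-repo/ci/test.yml
      - task: push-to-prod           # task 2: the next step
        file: my-repo/ci/deploy.yml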
Sadly, while it's easy to play with the software this way, the 'vagrant up' implementation is missing a bunch of functionality: you don't get authentication, and you don't get the ability to attach external workers. Also, as of right now, the vagrant build hasn't even been updated to the 1.0 release; it's still limited to version 0.76.0. So you can't play with this new Concourse release using a vagrant box in the first place.
I've had a brief struggle with trying to get the standalone executables to work under Linux, without any success (I'm told that they probably are incompatible with my server's kernel version; note that the documentation does not state what kernel or OS versions are supported). Similarly, I've had a brief look at what it'd take to set up an install using BOSH.. but it looks like that's going to be a massive undertaking.
As a practical matter, if you want to use Concourse for real work, you probably do have to jump in with BOSH right away, since the standalone versions are likely not to work with your VMs, and the vagrant version is missing critical features and hasn't even been updated to the 1.0 release.
And I just want to add.. I really like Concourse. I'd love to use it instead of Jenkins.
But at the moment, I haven't been able to get it to work, apart from in a vagrant box running an old version of the software, which I can't deploy to a real server, and to which I can't attach workers. Which, from a brass-tacks point of view, means that I haven't been able to get it to work at all.
I'd be thrilled if you can make it less painful to install on a proper server. I totally would jump on board in a heartbeat. But I've already spent a couple days trying to make it work, and I just can't afford to throw more of my time at it right now, particularly since I already have a fully functional (albeit somewhat unpleasant and occasionally unreliable) Jenkins install, which only took me 20 minutes to set up and configure.
I'd be very interested in taking another look if and when the whole install process has been streamlined somewhat! There's lots of promise here; I just can't get into it.
Speaking as a one-eyed Concourse fan, I think your remarks are fair and reasonable. My personal hope is that the binary version will be picked up by downstream channels (Homebrew, Linux distros, etc.), which will simplify the experience for 99% of users.
Or a simple 'docker run'
I really wish there was a docker container for this. I was going to make one but didn't have the energy to dig into the documentation about installation from binaries. Maybe later...
Full Disclosure: I also work at Pivotal on a team that has done a large amount of automation using Concourse (check out our pipelines [1])
Things that we've done so far with Concourse:
* A dependency-check notifier that, based on RSS feeds from various language maintainers, sends an email, updates our Pivotal Tracker account, and sends a message to our Slack channel
* A job that, on completion of a build, uploads the resulting artifact to GitHub and tags the release with a new version, auto-generated release notes, and checksums
* Travis CI-style GitHub pull request analysis
* A security feed monitor that notifies us when a part of our application is vulnerable and, when it is, kicks off a build and sends an email notification
* A job that automatically updates submodules and gems for git repos and runs specs and tests against the updated artifact
These tasks were not difficult to implement. Every pipeline for Concourse is explicitly defined in its configuration file [2], which we version control. There is never any reconfiguring or rebuilding "snowflake" servers when we re-deploy our Concourse VMs. They rebuild identically based on the config files.
[1] - http://buildpacks-ci.cfapps.io (Chrome friendly, Firefox not)
We've also got a decent example of Concourse use. You can see our pipeline at https://capi.ci.cf-app.com (build details are private, but you see the general pipelines), and our configuration is available at https://github.com/cloudfoundry/capi-ci . Check out ci/pipeline.yml for the goods.
Do you have a source on the Travis CI style GitHub Pull Request Analysis? When I looked into Concourse it seemed like this was not supported and that was a dealbreaker for us.
Hey Danny, would you mind pointing to some of those resources if they're 1) open-source and 2) not in the main concourse org?
Sure. An index of all our pipelines is located here: https://github.com/cloudfoundry/buildpacks-ci#pipelines. Of note is the PR resource, which JT built: https://github.com/jtarchie/pullrequest-resource
I work at Pivotal, which sponsors Concourse development, so I got one of the fancy t-shirts we made for this occasion.
I wore it to the office today. Other people in the office were coming up to me and raving about Concourse. I've been proselytising it, but I feel like pretty soon I just won't have to. One of my colleagues was telling me that in two days he got an iOS CI pipeline working that he never managed to achieve with weeks of tinkering with TeamCity.
I suspect that pretty soon the default choice for CI/CD in Pivotal Labs projects will be Concourse, and that will seed it pretty widely.
If you want to get in on the ground floor of something great, here's a good opportunity.
If you're wondering what the heck it is, go to the homepage for the 40,000-foot view: http://concourse.ci/
If you're wondering why the heck another CI system, look at "Concourse Vs.": http://concourse.ci/concourse-vs.html
After that my advice is to look at public Concourse pipelines and pop the hood. And drop into the Slack channel, the team are in there now posting funny gifs. I'm in there too, under jchester_robojar, if you want to ask me anything.
Can you ELI5 Concourse's architecture? (I haven't installed it yet; it's possible that some of my follow-up questions are answered by installing Concourse and testing it a bit)
I saw this page (http://concourse.ci/architecture.html) but it's turtles all the way down...
Or just validate if my understanding is correct:
Concourse has the following components:
* ATC: web UI + job scheduler; can be clustered; uses PostgreSQL as data storage; => this would be a part of the "master" in Jenkins terminology
* TSA: simple SSH server, used by workers to register and presumably be controlled; => this would be the second part of the "master" in Jenkins terminology, in a sort of JNLP Jenkins setup where the workers register with the master instead of vice versa (Jenkins SSH connection)
* workers: Cloud Foundry-type containers (WTF is Cloud Foundry exactly? their page is totally confusing); "slaves" in Jenkins terminology
Also, if I read the features correctly, Concourse supports both Linux and Windows builds, right?
You pretty much got it right. That page is still a bit rough; we'll give it some TLC soon.
Workers are just machines running a container management daemon (much like a Docker daemon) and a volume manager (a custom daemon for managing caches and efficient artifact propagation between containers). Not sure how you got to Cloud Foundry though, it's not really related. Garden lives in the Cloud Foundry GitHub organization, as it's also used by Cloud Foundry, but you don't need to know anything about CF to use Concourse.
Concourse supports Linux, Windows, and OS X. You can see an example of this here:
I will jump in with a question: what about FreeBSD? Can I have a FreeBSD slave / Garden?
If someone writes a Garden backend for it, sure. Here's an example of a pretty small backend that delegates to `systemd-nspawn`:
Also, please add pictures. There is nothing that beats a picture with 3 boxes and arrows :P
Is Windows support on the roadmap?
Woops, typo'd in my original comment: it supports Linux, Windows, and OS X. :) Binaries here: http://concourse.ci/downloads.html
I've been trying to figure this out from the docs, but how does it support Windows? In the sense that for now (until Server 2016 comes out), you don't really have "container support".
Concourse just talks to Garden, it's up to the Garden backend whether or not it actually does containerization. So on Windows it just does the world's worst containerization (cd'ing into a directory), though it at least guarantees that the processes all die.
There's a proper Garden Windows backend in the works which we'll switch to at some point once we better understand it: https://github.com/cloudfoundry/garden-windows
More links:
Concourse's own CI: https://ci.concourse.ci and Concourse Slack: http://slack.concourse.ci
Concourse really seems like a nicer Go.CD, in terms of how pipelines, jobs and resources work. I'm quite interested to see how it can work for a fully Docker-based workflow (each git repo has a Dockerfile in the root, which we want to rebuild whenever the master branch receives a new commit. Then we run the image in a container, execute our tests, and if they pass we export the Docker image, pass it on to the staging server, start up a new container, and shut down the old one).
Building a docker image is relatively straightforward. You define the docker image resource, then "put" to it.
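As a rough sketch (resource names, repository, and branch are hypothetical, and registry credentials are omitted), a job that rebuilds and pushes the image whenever master changes might look like this:

    # Hypothetical pipeline snippet using the docker-image resource type.
    resources:
    - name: app-source
      type: git
      source:
        uri: https://github.com/example/app.git
        branch: master
    - name: app-image
      type: docker-image
      source:
        repository: example/app      # assumed registry repository

    jobs:
    - name: build-image
      plan:
      - get: app-source
        trigger: true                # run whenever master gets a new commit
      - put: app-image               # "put" builds the Dockerfile and pushes the image
        params:
          build: app-source          # directory containing the Dockerfile

From there you could add further jobs that `get` the pushed image, run your tests against it, and promote it to staging.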
Congratulations on this release.
However, as already seen in the case of GoCD, the success of Concourse will depend on how easy it is to convince Jenkins believers.
The "vs Jenkins page" needs to be augmented with more information regarding workers, plugins and simple jobs scripts.
At the moment, that page is not really convincing:
1)"Jenkins servers become snowflakes" All configuration is saved on the disk as XML files. Backing them up or putting them in version control is very easy. Restoring Jenkins servers is very easy. I have done this actually.
2)"Jenkins has no first class support for pipelines" True. But Jenkins 2.0 will be designed around pipelines, so once it is released this argument will no longer hold. And even today with 1.x not all organizations use pipelines (or are happy with the existing plugins)
3)"Trying to find the build output log for a failed build in Jenkins can be up to 3 clicks from the home page" That is not really an argument. I get an email when my build fails.
As others have mentioned, requiring readers to understand new concepts in order to run/use it effectively is problematic.
The UI is impressive! Congrats! I think you should market this more.
Basically, if the effort required to set up Concourse with 2 workers is bigger than that of the equivalent Jenkins setup, Concourse adoption will suffer.
It needs to be stressed that for a lot of small/middle sized companies, the build server is just an afterthought. Not everybody has full time people working on setting up pipelines.
I think it's less about putting a list of features in a spreadsheet and checking off boxes, and more about how the tool thinks.
Jenkins can "do" the things that Concourse does, if you assemble enough plugins, click through enough configuration pages and write enough shell scripts. But that's not its native way of thinking.
I agree that lots of Jenkins users will look at Concourse, scratch their head and leave it. But for those who switch, it's really that much better. I think it'll spread largely by word of mouth, especially once it begins to appear in downstream distribution channels.
> It needs to be stressed that for a lot of small/middle sized companies, the build server is just an afterthought. Not everybody has full time people working on setting up pipelines.
I'm in a team of four -- two pairs. We maintain and extend a medium-sized app, central to our revenue, that has some surprisingly complex and legally scary business rules.
These days whenever we're annoyed by something, it's common for us to think "how can we get Concourse to do this for us?", instead of checklists or wiki pages or Just Hoping Somebody Remembers.
I think people tend to set up Jenkins and never touch it again because it's scary to tinker with your CI/CD. Concourse is, despite being configured by flat files and a CLI, much more approachable in this regard.
I don't disagree with anything you said. All I am saying is that the current "Concourse vs.." page does not really achieve anything.
People who like Jenkins will read it and say "Meh", while people who have already switched to Concourse (like you) don't need to read it in the first place.
I'm looking forward to experimenting with this.
At least in the documentation, there seems to be a lot of focus on extra tools. BOSH seems very off-putting. Having to use the "fly" CLI command to update pipeline configuration seems unnecessary - can I just update the files on the filesystem myself, either via my configuration management tool or a git repository?
This is very much a trait of BOSH. When BOSH manages a machine, it owns that machine. It deploys the stuff that's specified in the manifest, but that has to come out of one or more 'releases', i.e. precisely structured tarballs of stuff, and there's no way to manage a file here or a setting there. You could push out a file by packing it into a release, uploading that, and adding it to the manifest, but that's quite a ritual for one file. You are essentially expected to treat the machine as a black box.
As a result, BOSH-based systems tend to be managed via APIs exposed by the software running on them, which often come with a command-line tool. BOSH itself is like this, and so is Cloud Foundry.
This is a very different approach from that of tools like Puppet or Ansible, which are much more about openness and empowering the operator to directly manage machines. Although it is fairly similar to Docker etc., where you fire up images without any sensible way to fiddle with them, and rely on talking to the software.
So, if you are comfortable with the Puppet/Ansible type tools, then you won't like BOSH. This is 30% because BOSH isn't as good as them, but 70% just because it's different.
BOSH being different is, I think, why a former colleague of mine created a Chef cookbook for Concourse: https://github.com/dvanbuskirk/concourse
I have a handy heuristic about this: if you are interacting (directly or via a third party) with the filesystem of a service in order to use it, said service is probably solving your problem wrong.
Try `fly`. It's worth it.
Funny, I use the opposite approach. If I can't configure and manage something via the filesystem, I usually stay clear of it.
Specifically, not relying on the filesystem means I can't easily browse configuration, I can't have version control, and I can't use my configuration manager to have events trigger when things change.
I think there's some confusion. The actual config you send to a Concourse server is on your filesystem and ideally it's checked in.
`fly` is the CLI you use to tell the server "hey, here is a pipeline config", or "hey, I've changed the config, here's the new one".
No, I think raziel2p understands. If you're using a configuration management tool like Puppet or Ansible, by far the easiest way to manage a piece of software is through a configuration file on disk that the configuration management tool can manage directly: it's easy to see the current state, and to make changes. Anything that requires a tool or an API requires you to interpose some sort of update mechanism between the configuration management tool and the software, which doesn't have the transparency of a simple file.
Ah, gotcha. So if I'm following this line correctly, it'll need someone to write a wrapper that detects updates to .yml files and invokes fly?
Something like that. You could either have a little daemon that watches a file and flies it when it changes, or you could do it straight from the configuration management tool. For example, in Ansible you'd write a handler which is triggered after updating a copy of the pipeline file somewhere:
http://docs.ansible.com/ansible/playbooks_intro.html#handler...
The handler could use the command module to run fly:
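Something along these lines, say (a rough sketch; the target name, pipeline name, and paths are made up, and the handler only fires when the copied file actually changes):

    # Hypothetical Ansible play: manage the pipeline file on disk, and push it
    # to Concourse with fly only when that file changes.
    - hosts: ci-admin
      tasks:
      - name: install pipeline config
        copy:
          src: pipeline.yml
          dest: /opt/concourse/pipeline.yml    # assumed location
        notify: set concourse pipeline

      handlers:
      - name: set concourse pipeline
        command: >
          fly -t main set-pipeline -n
          -p my-pipeline
          -c /opt/concourse/pipeline.yml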
Using the `fly` CLI allows you to know what changes are being made when you update the pipeline (it shows a diff), and when your configuration is invalid (or when the update fails for another reason).
I get the exact same features from my configuration manager, as long as there is a command I can run to validate the configuration file(s).
That command would just be `fly` or something else coming from Concourse. To me it makes more sense for the entry into the system to be safeguarded with validation, and then possibly bring your own filesystem-based configuration management system that goes through that same front door, building on trusted tools from Concourse itself. Mucking with files on disk that directly affect a live system (shipping production code!) seems like a recipe for disaster.
What configuration manager do you use?
For anyone else that got excited about having a Jenkins alternative, this project seems to be only about containers and not other workloads that VMs could tackle, such as running browser or desktop-based software tests. At least that's my understanding after giving the documentation a cursory glance. Could someone familiar with the project please confirm (or hopefully correct) this?
Some of my colleagues at Pivotal are running iOS builds with it. That is, they're running builds on a Mac which has an iOS device plugged into it. The workers are run using Houdini, which is a container management daemon that, er, doesn't use containers:
https://github.com/vito/houdini
There's a blog post by someone else at Pivotal, although the details are probably a bit out of date by now:
https://blog.pivotal.io/labs/tech-talks/on-device-open-sourc...
As the author of that blog post I can confirm some of that is out of date. Hopefully I can find some time to update the post for 1.0, but the main concepts still work.
At Pivotal Toronto we're running iOS and Android builds using a Mac mini as an external worker. A great built-in feature has been using the pool resource to "check out" a device and lock it from other tests while a suite is running.
Updates would be awesome but that blog post covered a lot, thanks! I'll spend some time looking into Concourse now.
Also, if you have some free time I would love to chat about your experience so far -- I work about 15 minutes away from Pivotal's Toronto office. My email address is listed in my profile.
We run browser-based tests in our own pipeline (I'm on the Concourse team). We just use `xvfb-run` around our test script. We also build VirtualBox images using a systemd-based Garden backend running on a dedicated server (not a VM).
I find these approaches better than risking test pollution, especially when it comes to building VMs and spinning up desktop apps.
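For the curious, a hypothetical task config along those lines (the image and script names are made up, not our actual pipeline):

    # Hypothetical task: run browser tests under a virtual display via xvfb-run.
    platform: linux
    image_resource:                      # assumed image with a browser and xvfb installed
      type: docker-image
      source: {repository: example/browser-ci}
    inputs:
    - name: app-source
    run:
      path: xvfb-run
      args: [app-source/ci/browser-tests.sh]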
Not using ConcourseCI, but I use containers for our browser testing. Blogged about it here - http://stacktoheap.com/blog/2016/01/04/running-webdriverio-t...
I am running a browser-based test in Concourse as we speak :)
I very much appreciate the "Concourse vs." page[1] which compares Concourse to several other CI solutions, but I wish they would include Buildbot[2]. Everyone always forgets about Buildbot!
Buildbot can integrate with Docker to run tasks on containers, create complex conditional flows with triggered builds/steps (among others), and has been in use for many years at organizations large & small.
While the YAML-based Concourse configurations seem very useful, we've found that many non-trivial pipeline-style builds are better served by being written in a full programming language like Python. That has some disadvantages compared to the declarative approach taken by Travis & Concourse, but I feel its advantages outweigh the limitations imposed by the latter.
The next version of Buildbot is also being refactored into a core + plugins system, which should allow for some pretty impressive flexibility that serves the needs of almost everyone.
--
1. https://concourse.ci/concourse-vs.html 2. http://docs.buildbot.net/latest/
I'm not familiar with Buildbot, but I think the benefit you're describing exists in Concourse as well. When you define a task you can just point it to a script to run. This can be a script written in a higher-level language if you'd like.
This is similar to how some Concourse resources are implemented as shell scripts[1], and some in Go[2]. -- 1. https://github.com/concourse/git-resource 2. https://github.com/concourse/tracker-resource
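For example, a task definition is just a small YAML file that names an image and points at a script; the script itself can be shell, Python, Ruby, whatever (everything below is a hypothetical sketch):

    # Hypothetical task config: Concourse runs whatever executable you point it at.
    platform: linux
    image_resource:
      type: docker-image
      source: {repository: python, tag: "3"}   # assumed image
    inputs:
    - name: my-repo
    run:
      path: my-repo/ci/run_tests.py            # any executable script works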
I'd love to see a Concourse vs. GitLab CI comparison though...
Us too! While making GitLab CI we had a good look at Concourse. We love what they made and think it is great. The pipeline view in particular is great. For us it inspired http://doc.gitlab.com/ce/ci/triggers/README.html and https://gitlab.com/gitlab-org/gitlab-ce/issues/750910
GitLab CI is similar to Travis CI in how it works. The page lists three things about Travis CI:
1. Unfortunately it still doesn't have support for pipelines
2. Only very simple builds are possible.
3. If something doesn't pass in CI you normally need to send up lots of little debugging commits to work out why it's behaving differently.
GitLab CI does the following to address this:
1. As mentioned earlier, GitLab has triggers, and we're working on per-project pipeline views
2. Builds in GitLab can have many stages with parallelism per stage, and as of today you can pass build artifacts between stages and cache items (see the sketch after this list): https://about.gitlab.com/2016/03/29/gitlab-runner-1-1-releas...
3. You can test the runner locally with exec: https://gitlab.com/gitlab-org/gitlab-ci-multi-runner/issues/...
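To illustrate point 2, here's a minimal, hypothetical .gitlab-ci.yml with two stages, an artifact handed from build to test, and a cached dependency directory:

    # Hypothetical .gitlab-ci.yml sketch.
    stages:
    - build
    - test

    cache:
      paths:
      - vendor/          # assumed dependency directory, reused between builds

    build-job:
      stage: build
      script:
      - make build       # assumed build command
      artifacts:
        paths:
        - dist/          # passed along to the test stage

    test-job:
      stage: test
      script:
      - make test        # runs against dist/ from the build stage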
Please let me know what I missed and what people need us to add to GitLab CI.
First of all: Thanks for your excellent work on GitLab! We're evaluating it currently, and it quickly replaced our SVN and TeamCity workflows for some of our projects.
I love the lightweight approach with GitLab CI, it's really easy to get started with. I currently don't miss any significant functionality, but I guess our builds are not that complex.
As we're running our complete infrastructure on Mesos/Marathon, I wonder whether it'd be possible to use the new autoscaling for the runners with Marathon instead of docker-machine. Basically, the only thing I'd need to be able to do is retrieve the registration token for the runners via an API, so that it can be passed in when the new Docker container(s) are spun up.
Currently (we're still using the old runner version in a custom Docker image, https://hub.docker.com/r/tobilg/gitlab-runner/), we scale them manually via Marathon.
It's probably easy to integrate Mesos/Marathon in addition to docker-machine, because the mechanisms for scaling seem to be the same.
I think this will be beneficial to Mesos users, because it will be very easy to use and scale GitLab on it.
Thanks tobilg.
We chose docker-machine because it also manages infrastructure, which makes it much easier for administrators to manage hundreds of build machines.
I'm thinking about an option for one of the upcoming releases to make it possible to scale gitlab-runner on other providers. You should look out for improvements in that department too!
GitLab CI Lead
Thanks Kamil. For other people reading and interested in computing platforms that we support also see https://gitlab.com/gitlab-org/gitlab-ce/issues/14812
I have recently rebuilt the CI systems at work and took a look at Concourse alongside Jenkins, Buildbot and GitLab. It seems like not many people have mentioned GitLab in this thread, but in my experience it combines all the benefits of Concourse (pipeline-driven workflows, passing forward build artifacts, config in source control) with a much simpler architecture. Concourse looks so complex and has so many moving pieces to learn - BOSH, fly, etc...
GitLab is just a Rails app and then has a Go-based runner to do the CI work. Omnibus packages make installation easy. They even have some autoscaling stuff for the runners using Docker Engine, which I still think needs some work for my use case but for many others is ready to go and looks awesome.
CI covers so many use cases, so there is lots of room for different alternatives, but make sure to check out GitLab if you're looking for the simplest and most flexible CI system I've seen yet :)
Thanks for mentioning GitLab, glad to hear you like it!
We just officially released the autoscaling runner in https://about.gitlab.com/2016/03/29/gitlab-runner-1-1-releas... Please let me know what we can do to improve it for your use case.
Also see my other comment in this post https://news.ycombinator.com/item?id=11387327
This looks awesome! I hope to transition to it, handling Jenkins configurations is a huge pain.
Some people also use their CI tools for running arbitrary one-off scripts (like deploys). Is this possible with Concourse? Currently we have a few Jenkins jobs that just execute a fab task that then does the actual work.
Yup! Concourse has an idea of 'one-off' builds that can be executed from fly as long as you provide all the inputs to the task. This can be done via "fly execute": http://concourse.ci/fly-execute.html
In addition to what XenoPhex said (`fly execute`) you can always just configure a job that's only ever manually triggered.
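For the fab-task case specifically, one sketch (task file, image, and script names are all hypothetical) would be to keep the deploy as a task config and run it ad hoc with fly execute:

    # Hypothetical one-off task, run locally with something like:
    #   fly -t main execute -c deploy-task.yml -i scripts=./scripts
    platform: linux
    image_resource:
      type: docker-image
      source: {repository: example/deploy-tools}   # assumed image with fab installed
    inputs:
    - name: scripts                                # supplied from your machine via -i
    run:
      path: scripts/run-deploy.sh                  # e.g. a thin wrapper around `fab deploy`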
Does Concourse offer any integration with defect or agile development software that allows an easy way to track which builds contain a given feature or defect fix?
There is, predictably, an integration with Pivotal Tracker:
https://github.com/concourse/tracker-resource
I'm not aware of any others. It might not do quite what you want - it's more about pushing the progress of the CI pipeline into Tracker than about bringing information from Tracker into the CI pipeline.
However, resources are fairly easy to write:
https://concourse.ci/resources.html
So if your task tracker of choice has an API (or a command-line tool!), it should be kind of easy to build an equivalent.
You can add a custom resource type (http://concourse.ci/resource-types.html) to add these integrations.
There's a Pivotal Tracker resource that does what you describe: https://github.com/concourse/tracker-resource
Are there any plans to support OS X workers? This would be extremely helpful for Mac and iOS apps.
Yes, via "Houdini": https://github.com/vito/houdini
Concourse uses the Garden API to manage containers. Houdini is a "no-op" implementation that just tells Garden "yep, I've got your container right here".
There are prebuilt OS X binaries available on the Downloads page (http://concourse.ci/downloads.html). Each binary supports both running the web UI and acting as a worker.