Ask HN: What CI/CD server/service do you use?
I heard so many people, read so many blog posts on Jenkins, its integration with many tools through its plugin architecture, but also about TravisCI and how much it helped the open-source community build quality software for free. But I'm curious about other solutions and how well they work.
What are you using? What criteria drove this choice? Would you do it differently?

I'm using GitLab CI. I use GitLab.com for the free private git project hosting. GitLab.com has free CI servers available (even for private projects) that work well if you don't need to run builds all the time. A lot of people use their shared CI runners, so your jobs/builds might take several minutes before they start running. If you have a larger need, you can always host your own runners (the software is FOSS).

GitLab's CI has a pipeline design, so (like Jenkins) you can have some jobs wait for other jobs to complete and use their build artifacts (e.g. have a single build job that downloads deps and compiles everything so later jobs don't need to duplicate that work), and you can have jobs only be triggered manually instead of on every push. It's not perfect, though. For instance, unlike TravisCI, you don't have a build matrix, but you can use YAML tricks to define template jobs (see GitLab's own CI file [0] and a resulting pipeline [1]).

If you use GitLab.com, you should be aware that they have downtime a few times each month, both planned and unplanned. Their planned downtime is often during the work day in the US timezones and usually lasts between 10 and 30 minutes (though it has been longer before). GitLab.com is also used as a "testing in production" environment for their monthly releases, so you will occasionally run into bugs (usually nothing showstopping, though; mostly minor annoyances).

I think that what you choose greatly depends on what you need. GitLab CI is a little opinionated, but it is still pretty flexible and usable for a large number of use cases. If you need a ton of customizability, Jenkins could be a better option (with plugins). GitLab CI is a lot easier to set up, however. GitLab CI also has CD features that you can look into (I don't use them myself). They are also constantly (i.e. every 22nd of the month) releasing new versions, and most of their features are open source.

[0]: https://gitlab.com/gitlab-org/gitlab-ce/blob/master/.gitlab-...
[1]: https://gitlab.com/gitlab-org/gitlab-ce/pipelines/6014043

Do you use it for building docker images? I can't work out how to tag my images with versions. They all become "latest". Do I have to change the gitlab config file every push?

I came to second GitLab CI. I build all of my docker images in GitLab CI and tag them. The trick is to use the GitLab CI environment variables (I tag with the git tag or the git branch name, depending, using this method). If you would like to get a bit more help, feel free to email me. My username here at gmail.

That's very kind of you. Thanks. I think I understand how you do it. You just tag the git commit and get that value in the build by using $CI_BUILD_TAG? The description for that variable says it's only available when building a tag. Does that just mean the commit needs to be tagged before a push?

Update: I just found this, which is probably exactly what I want to do: https://github.com/gitlabhq/gitlab-ci/issues/637

Update 2: Wait. What? This doesn't sound right. Now I'm confused: https://docs.gitlab.com/ce/ci/yaml/#tags

Update 3: It seems those are 2 different types of "tag", both mentioned in the same configuration subobject. Crazy.

So yes, you got it! They use the word "tag" in two totally different ways, which is confusing. When you create a gitlab-ci runner you can "tag" the runner. Basically, this is like a named server. For instance, I have a general gitlab-ci runner which I use to run most of my builds. However, I found that I need a special runner just for building docker images. So most of my jobs in my gitlab-ci are marked with:

Now when do I build docker containers? I actually don't use "git ref tags" (i.e. git tag v.1.16.001 or something).
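(As an aside on the "YAML tricks" mentioned earlier in the thread for approximating a build matrix: GitLab CI accepts standard YAML anchors and merge keys, and a job whose name starts with a dot is hidden, so it can act as a template. The job names and script below are invented for illustration, not taken from anyone's actual config.)

```yaml
# Hidden job (leading dot) used as a template via a YAML anchor.
.build_template: &build_definition
  stage: build
  script:
    - make build TARGET=$TARGET

# Concrete jobs merge in the template and vary only the variables.
build_linux:
  <<: *build_definition
  variables:
    TARGET: linux

build_osx:
  <<: *build_definition
  variables:
    TARGET: osx
```

Each concrete job inherits the template's stage and script, which is roughly what a one-axis build matrix would expand to.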
I have a development branch, a staging branch, and a production branch which tie to my environments (dev, staging, and production). So my job runner looks something like this:

Thanks for taking the time to share your experience. What's the difference with your docker runner? I've been building docker images just fine without specifying a runner.

I run most of my runners as Docker runners, and I was having issues with a Docker runner building a docker image. So I swapped to a different method. But I haven't tried it in a while. I might be able to use the same Docker runners now. Seems like a good time to test it!

Okay. Thanks again. What do you do with your docker images after they are built? This is the one thing missing from my CD pipeline.

Good question! So first I build the docker container with a Dockerfile I keep at the root of my project. I also have a private AWS docker repo. (Read more about it here: https://aws.amazon.com/ecr/). So in my gitlab-ci I have a statement like the following:

My secret sauce is in the deployment stage of my CI. I basically run a command which deploys to an ECS environment. This is a custom script which is part of my overall AWS process. I have scripted out the creation of everything on AWS (from building the environment, spinning up EC2s, security and security groups, auto scaling groups, container registry, deploying jobs into an ECS). In my current project we are using Ruby, so the command is just a rake task and looks like:

Thank you for giving such a comprehensive answer.

Codepen recently published a podcast episode [0] where they tell why they switched from GitHub to GitLab. The built-in CI solution was one of the reasons.
If you don't want to listen to the whole thing, there's also a transcript: [1]

I use Codeship [1] for personal (private) projects and Travis CI [2] for open source ones. I have also used AppVeyor [3] for one of my Windows web app projects. I chose Codeship since it was the only one (at the time) that had BitBucket integration. The free tier (100 builds/mo) was more than enough for my side projects. In addition, it supports a lot of deployment targets (e.g. ElasticBeanstalk and Heroku). I initially used Travis CI for open source projects since it is what everyone uses :) However, one feature that I find really useful is the test matrix. I used it to test my project against different Java versions and Cassandra distribution versions [4]. I use AppVeyor on one of my side projects (which lies dormant now) since I needed to test against both *nix and Windows builds (a .NET / Mono web application).

[1] - https://codeship.com/
[2] - https://travis-ci.org/

After being a Jenkins person for years, I recently started playing around with Visual Studio Team Services, Microsoft's hosted TFS product: https://www.visualstudio.com/team-services/. You don't have to use TFS as the backend, as git is supported, and all of your build and release stuff is under the same roof. They have added a lot of "tasks" for most of the popular languages, but worst case you can always run shell commands to do stuff. The UI leaves a bit to be desired (I think), but for the most part I can live with it. Full disclosure: I work at MSFT but have been a Linux/Open Source/non-MS person for most of my career.

EDIT: With all of that being said, I do like GitLab because it offers many of the same features (code repo, builds, etc.), and I am using Gitea/Gogs on my own VM for repo mirroring from GitHub.

Have you taken Jenkins Blue Ocean [1] and Declarative Pipeline [2] for a spin since moving to TFS? We are quickly modernising everything from the UI to the way you manage your Pipelines in version control.
Would be very interested in getting your thoughts on how we stack up now vs your past experience :)

[1] https://jenkins.io/projects/blueocean/
[2] https://github.com/jenkinsci/pipeline-model-definition-plugi...

I haven't yet, but I did look at it a while back. I'll try and set aside some time to spin it up and take it for a test drive. Thanks for the reminder!

Some guys in our workplace have been investigating Concourse CI and like it over Bamboo, which we currently use.

We use Jenkins, hosted on Amazon. We're a .NET stack, so some of the other options didn't work. Biggest lesson for us: avoid putting all your logic into Jenkins templates. You're not capturing your CI/deployment logic in version control, and this can cause problems down the road (we backed up Jenkins regularly, but versioning was a weak point). Instead, do as much as you can in scripts, and use Jenkins as a glorified crontab (+webhooks).

> Biggest lesson for us: Avoid putting all your logic into Jenkins templates. You're not capturing your CI/deployment logic in version control, and this can cause problems down the road (we backed up Jenkins regularly, but versioning was a weak point).

Jenkins Job Builder [1] is a great tool that solves exactly this problem. You write job definitions in yaml/json files. Then you check these yaml files into version control and run JJB to push these jobs to your Jenkins server. JJB is great because it's purely a client. It requires no extra Jenkins plugins. Since all of your jobs are now captured in version control, you can reproduce your Jenkins jobs instantly in a local docker container or vagrant machine to dev/test changes to your Jenkins jobs. I really recommend trying it out. No more manually clicking around in Jenkins to configure things.

> Biggest lesson for us: Avoid putting all your logic into Jenkins templates. You're not capturing your CI/deployment logic in version control, and this can cause problems down the road (we backed up Jenkins regularly, but versioning was a weak point).

We've found the Job DSL plugin pretty useful for backing jobs up.
https://wiki.jenkins-ci.org/display/JENKINS/Job+DSL+Plugin

I'd also second the Job DSL plugin if you want to keep all your Jenkins config in version control. It's also worth looking at replacing all your freestyle jobs with Jenkins Pipeline and having the job definition live in a Jenkinsfile in your repository. We've got a new syntax coming out of beta shortly that makes the experience a lot more declarative and that's worth a look: https://github.com/jenkinsci/pipeline-model-definition-plugi...

I use GitLab not only because it's built in but because I've also had a really smooth experience with them. They might be slower than others, but IMO all development is done locally, so I don't mind if it sometimes takes more than 5 minutes (it used to take longer, but they have improved a lot!). I've only waited for like 30 minutes once. I would definitely recommend it.

For my larger business projects (the ones that make money) I finally broke down and ran my own runners. I never have to wait anymore! I do love GitLab CI.

I've used various CI/CD servers/services in the past. As with so many tooling questions, this is one of those 'it depends' situations. Jenkins is okay, but you have to really stay on top of the build environment (not just as specified in Jenkins itself but also the OS and tooling on the Jenkins server and build machines). I'm in the process of building a new CI/CD pipeline, and this time I'm using Concourse. This is a good fit because the target system is largely containerised (not that Concourse can't be used elsewhere, it can, but it's just a coincidental fit). Additionally, Concourse addresses the problem identified by Cpoll: unlike Jenkins et al, Concourse pipelines, build plans, and tasks are 100% available for version control. That all said, I can't comment on Concourse's foibles as I've not used it long enough to uncover them :)

We were locked into an Atlassian stack when they discontinued Bamboo and Pipelines wasn't ready for production (nor a good fit for us).
We ended up going with a service called Buildkite, which is almost a drop-in replacement. They even had docs specifically about migrating from Bamboo, and in general their documentation has been excellent. We really like having full control over the test execution environment despite not being ready to start using Docker, and we've been pretty happy with the service itself as well as the tech support we've received. I might choose something else for a different company, but this is a very good fit for us.

Nice to see GitLab CI getting some love on this thread. I have yet to polish them, but my notes for teaching GitLab CI (a one-day course on the basics) are available at https://gitlab.com/atsaloli/gitlab-ci-tutorial and may help one get started with it.

I've been using GoCD for the past couple of months and I'm really liking it so far. It's very flexible and relatively straightforward to set up on your own VM. The plugins work similar to the way Jenkins does it. Also, the UI is very clean.

I've used Jenkins in the past and Bamboo right now without any issues for CI and a little bit of CD through custom scripts. Granted, this has worked well for under 30 engineers, so I'm not sure how bigger teams do things.

Semaphore [0] is great. It was the only place where I could get selenium tests to work reliably.

CircleCI

I'm considering GitLab for the built-in CI, as long as it works painlessly with Heroku.
tags:
  - general

Whereas my docker build job is marked with:

tags:
  - docker-builder

This basically pushes the job to a specific runner and has nothing to do at all with the git ref (which could be a branch name or a tag or whatnot).
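(For context on where runner tags come from: they are assigned when the runner is registered. Using gitlab-runner's non-interactive mode, a registration looks roughly like the following; the URL, token, and description are placeholders, not real values.)

```shell
# Register a runner that only picks up jobs tagged "docker-builder".
# The URL and registration token below are placeholders.
gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.com/" \
  --registration-token "REGISTRATION_TOKEN" \
  --executor "shell" \
  --description "docker image builder" \
  --tag-list "docker-builder"
```

Jobs whose `tags:` list includes docker-builder are then routed to that runner.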
So basically once the tests pass, and IF the branch is development, staging, or master, it will build the docker container, using CI_BUILD_REF_NAME as the tag, and then push that to my private AWS container storage system. The AWS_ACCOUNT_ID and REGION are both environment variables I have injected previously. If that succeeds, it moves to the deploy stage.

stages:
  - test
  - build
  - deploy
docker_build:
  stage: build
  script:
    - do some stuff
    - docker tag container-name:$CI_BUILD_REF_NAME $AWS_ACCOUNT_ID.$AWS_ACCOUNT_REGION.amazonaws.com/container-name:$CI_BUILD_REF_NAME
    - docker push $AWS_ACCOUNT_ID.$AWS_ACCOUNT_REGION.amazonaws.com/container-name:$CI_BUILD_REF_NAME
  tags:
    - docker-builder
  only:
    - development
    - staging
    - master
This builds my container, tags it, and pushes it to my EC2 container registry. I do most of my deployments using AWS ECS (EC2 Container Service, you can read more about it here: https://aws.amazon.com/ecs/)

- docker build -t container-name:$CI_BUILD_REF_NAME .
- docker tag container-name:$CI_BUILD_REF_NAME $AWS_ACCOUNT_ID.$AWS_ACCOUNT_REGION.amazonaws.com/container-name:$CI_BUILD_REF_NAME
- docker push $AWS_ACCOUNT_ID.$AWS_ACCOUNT_REGION.amazonaws.com/container-name:$CI_BUILD_REF_NAME
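(One step the script lines above leave implicit: the runner's Docker daemon has to be logged into the private ECR registry before it can push. With the AWS CLI of that era this was typically done via `aws ecr get-login`, which prints a `docker login` command to execute; the region below is a placeholder.)

```shell
# Authenticate Docker against the private ECR registry before pushing.
# Assumes AWS credentials are already available on the runner
# (e.g. injected as GitLab secret variables); the region is a placeholder.
eval "$(aws ecr get-login --no-include-email --region us-east-1)"
```

(Newer AWS CLI versions replace this with `aws ecr get-login-password` piped into `docker login --password-stdin`.)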
The rake task logs into the AWS API securely, runs some commands, verifies the work, and calls the task complete in gitlab-ci. If you have any further questions, feel free to email me. It's my username at gmail.

stages:
  - test
  - build
  - deploy
docker_deploy:
  stage: deploy
  script:
    - do some stuff
    - rake awsenv:redeploy[$CI_BUILD_REF_NAME]
  tags:
    - general
  only:
    - development
    - staging
    - master
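(The rake task above is the poster's own custom script, but the core of an ECS redeploy can be sketched with the AWS CLI: tell the service to restart its tasks so they pull the image tag that was just pushed. The cluster and service names below are invented for illustration; the real values would come from the poster's AWS provisioning scripts.)

```shell
# Hypothetical stand-in for the custom rake task: force the ECS
# service for this branch to restart its tasks so they pull the
# freshly pushed image. Cluster/service names are made up.
aws ecs update-service \
  --cluster "my-cluster" \
  --service "my-service-$CI_BUILD_REF_NAME" \
  --force-new-deployment
```

(`--force-new-deployment` is a later AWS CLI addition; at the time, the usual approach was to register a new task definition revision and update the service to point at it.)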