The new pricing model for Travis CI (blog.travis-ci.com)

I guess this is the same pricing model as CircleCI?
I've always found "build minutes" to be a little bit of a vendor-favored pricing model. I really love wanting to do a release, and watching my CI provider take three minutes to pull down a 30MB docker image, or "npm install" running at dialup speeds. All while they're billing you per minute -- they make money by not investing in their infrastructure! I'd prefer to pay per byte transferred and CPU instruction executed -- if they make the hardware or network faster, the price stays the same, but they can do more work with their infrastructure. And if you schedule less work, the price for you goes down.
But, it's simply not done, and that's kind of sad because slow CI is probably the biggest engineering time killer in existence. Other than Hacker News ;)
For reference, Travis CI was sold to an investment company some time ago.
The strategy is 100% to not improve the infrastructure, fire all the developers to save costs, and milk money from existing customers for as long as possible.
Travis was bought out: https://news.ycombinator.com/item?id=18978251
Staff were laid off the next month: https://news.ycombinator.com/item?id=19218036
Around that time and for that reason, I migrated all my personal projects to CircleCI. I did the same for my then employer's open source projects, but CircleCI was also a customer, so it was a pretty easy sell.
This is why I strongly prefer BYO-agent systems (GitHub Actions, Buildkite, Jenkins).
I got our CI pipeline from 90 minutes to 10 minutes by running parts in parallel and applying lots more hardware.
The cost is essentially nothing ($0.10 / build) compared to the time developers spend waiting.
I currently use & recommend Buildkite because they offer a preconfigured CloudFormation template that does all of the hard parts for you.
It's funny, this is almost exactly the argument we used to switch to non-BYO systems – moving from Jenkins to CircleCI.
We had Jenkins, but it required so much engineering time to manage, secure, and upgrade. Then, when we hit scaling limits as our team grew, scaling it out to multiple machines took more time and cost a significant amount, because we had to provision for peak load; when all your engineers are in one timezone, that peak is significant compared to, say, the weekend.
We moved to CircleCI and while I have many frustrations with it, parallelism and ability to speed up the pipeline with minimal development overhead are not really frustrations I have. The cost is also minimal compared to the developer time, and while we're getting "less for our money", because we only pay for the active time, it's actually cheaper for us than Jenkins was just for hardware rental, let alone developer time managing Jenkins.
I can completely see how a different org with different constraints, different deployments, clouds, strategies, provisioning, distribution of engineers, etc, could come to the conclusion that you did – that BYO is better, but I think it does depend on so much.
I think that's more of a comparison of different pieces of software than of BYO vs. not. Jenkins is quite the beast no matter what size your projects are. There are other fully self-hosted CI solutions, but Jenkins is the biggest one... the hardest one... usually the most fragile one... and for some reason the most popular one...
Yeah, that's definitely a factor. There's a part of it that's not related, though: scaling out build capacity. Setting up a Jenkins build node is actually quite straightforward and reliable on the Jenkins side, similar to a Buildkite node for example; the issue is where that node lives, how it gets provisioned, how it's managed and removed, etc.
For us, it was a bare metal machine where we had to email a sales rep to get a machine added, then spend ~2 hours setting up firewall stuff with semi-manual Ansible scripts. Add to that minimum contract terms and difficulty cleaning machines, and it was a pain to manage.
Conversely, if you've got a reliable autoscaling solution of some sort, and your build manager is capable of poking that as necessary to scale up and down (possible with Jenkins, but hard), then this could be really easy to do and BYO may be feasible.
Having a CI provider give us ~unlimited pay-as-you-go capacity that needed no management on our end and was always a clean environment, that was worth a lot to us in engineering time.
> For us, it was a bare metal machine where we had to email a sales rep to get a machine added, then spend ~2 hours setting up firewall stuff with semi-manual Ansible scripts. Add to that minimum contract terms and difficulty cleaning machines, and it was a pain to manage.
That'll do it. I'm using the Buildkite elastic stack, which took me about 20 minutes to start using and 4-5 hours to dial in to ideal settings (eg adding IAM to allow deploys from agents, getting the right size spot instances etc).
Wait how is github actions a BYO-agent system? I thought you can only run actions on github's infra?
They can run on github's infra, but you can host your own agent as well.
https://docs.github.com/en/free-pro-team@latest/actions/host...
... though github.com is still involved in the round-trip. That is, if your self-hosted agent has to run a workflow in response to a push event, the event still has to come from GH's servers, because GH is still doing the job scheduling. The agent doesn't monitor for pushes itself, and the whole communication channel is specific to GH, so you can't swap in another provider.
As an example of what the opposite pricing model looks like, YourBase[1] charges a flat fee per build such that it's in their best interest to make builds as fast as possible. Because of this forcing function, builds are instrumented and cached down to the system call level deterministically, based on file changes. It's amazing what economic incentive can do. (disclaimer: I work at YourBase)
[1]: https://yourbase.io
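To give a feel for what "cached based on file changes" means in general terms (this is a generic toy in Python, not our actual implementation), the core idea is content-addressed caching of build steps: hash a step's inputs, and skip the step when the hash matches the last recorded run.

    import hashlib
    import json
    import pathlib
    import subprocess

    CACHE_FILE = pathlib.Path(".build_cache.json")

    def fingerprint(paths):
        """Hash the contents of every input file; identical inputs give an identical digest."""
        digest = hashlib.sha256()
        for path in sorted(paths):
            digest.update(path.encode())
            digest.update(pathlib.Path(path).read_bytes())
        return digest.hexdigest()

    def run_step(name, command, inputs):
        """Run a build step only if its inputs changed since the last recorded run."""
        cache = json.loads(CACHE_FILE.read_text()) if CACHE_FILE.exists() else {}
        key = fingerprint(inputs)
        if cache.get(name) == key:
            print(f"{name}: inputs unchanged, skipping")
            return
        subprocess.run(command, shell=True, check=True)
        cache[name] = key
        CACHE_FILE.write_text(json.dumps(cache))

    # Hypothetical usage:
    # run_step("tests", "pytest -q", inputs=["src/app.py", "tests/test_app.py"])

The hard part is discovering the true input set automatically (hence the system-call-level tracing); once the inputs are known, skipping unchanged work is cheap.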
Would be great if you could make your pricing public. Call-me pricing immediately disqualifies a vendor from consideration for me, and I suspect I'm not alone.
That's great feedback! I've shared it with my team, thanks.
> slow CI is probably the biggest engineering time killer in existence
If you're at the size where slow CI negatively affects your projects, then you're big enough to own your own CI (at least the build agents).
Remember that one-man projects don't need CI, and that CI for small (n<5) teams is almost never the bottleneck. These SaaS CI providers really target the open-source / small-team market and it makes sense that they wouldn't optimize for larger-scale operations.
I disagree with this.
Even a single project with 20-minute build times is enough to slow down or frustrate development.
At the same time, I would not easily justify spending time managing CI infrastructure with my team of 6-10 people.
Things may have changed since then, but the last time I self-managed build agents, it often led to build jobs being tightly coupled to the build agent and its installed software versions. With a Docker-based CI system, you are forced to have everything specified in code, which makes it much more maintainable.
Additionally, hosted CI allows me to do 100 parallel builds on Linux, MacOS and Windows. Perhaps this is a niche use case, but I saved a lot of time and reduced build times by an order of magnitude by switching from self-hosted CI on Windows and MacOS to a hosted solution.
> At the same time, I would not easily justify spending time managing CI infrastructure with my team of 6-10 people.
The step from a Docker-based build to a proper build agent is a small one. From there, running your CI yourself on a cloud provider is not particularly hard, and at scale it will quickly be cheaper than having an intermediary.
If the number of people who are on the team can fit in a single room, then the release process can easily be run on the laptop of one of the team developers. I don't see how any team of five people consistently releases quickly enough that you would have multiple people trying to release at the same time. A small team should not require tooling to enforce that tests are run before production deploys, that should be either cultural, or part of the script that is easy enough to run on anybody's machine.
Most people do not need CI until you have a separation of concerns (i.e. code from credential management) that are managed by different people/teams, and therefore all of the decisions cannot be made in a single room.
I do see being able to run test matrices across multiple OS or device options as a reason for smaller teams to adopt CI early.
One-man multi-platform projects strongly benefit from CI for testing the other platforms.
> If you're at the size where slow CI negatively affects your projects, then you're big enough to own your own CI (at least the build agents).
Strongly disagree. When I joined my current company as an early engineer on a fairly new product, we had an 8-minute deploy time, and I see that as a key part of the rapid iteration cycle that was critical to the company at that stage.
We had 3 engineers and had no time to spend on owning our own CI.
One man projects do indeed need CI. Test matrices can get large very quickly, and it is far easier to let that be somebody else's problem.
The open source Python library I maintain has over 30 instances in the test matrix of Python version x platform x implementations. I don't even have a Windows dev box handy. TravisCI and Appveyor take care of that for me.
> If you're at the size where slow CI negatively affects your projects, then you're big enough to own your own CI (at least the build agents).
I think you vastly underestimate how much stuff people want to fit into CI and how quickly it turns into a big blob. I work as a freelancer helping not-quite-startups-anymore with things like CI speedup and tuning the database queries emitted by their ORM. You know, things where it's easy to build up technical debt.
It's not uncommon for a team of 3-4 to build so many tests and add so many linters and whatnot that CI takes more than an hour. Often, some basic love can bring it down to ~5 minutes but many teams are so focused on building new features that they will not take time to sharpen their tools.
Would you be willing to share some easy improvements that could be made? How can a large amount of tests run so much faster?
It usually comes down to parallelisation; you want to do as much work simultaneously as you can. Saving CI resources _can_ be reasonable if CI resources are very scarce relative to dev time, as in some open source projects. However, if you are paying your devs, then the extra few hundred bucks a month for beefier CI is often worth the increase in productivity. A couple of different ways:
- Oftentimes the staging of the CI build can be improved. Devs often set up CI so that linters must pass _before_ actual tests are run. Run them in parallel instead and fail the whole run if the linters don't pass. This is even more important if there are multiple linters (perhaps for different sections of the codebase) and they all get applied serially before any of the tests start.
- Obviously, split up your tests as well so they can run in parallel. If you have a project containing both JS and backend tests, don't wait for one to start on the other. Many "bigger" languages also have something akin to parallel_tests (https://github.com/grosser/parallel_tests) that lets you quickly set up multiple databases to keep transactions separate, etc. It also provides tooling that remembers the timings of previous runs and uses them to equalize the parallel tracks of subsequent runs as much as possible (see the sketch after this list for the general idea).
- Cache as much as possible. This is a wider topic, but dependencies, docker layers and static assets can all be cached and correctly using this alone can hugely cut CI time. You don't want to know how many projects I've seen that don't have this set up correctly (or at all).
- Longer-running projects can have hundreds of database migrations, and applying them all to an empty database can take minutes. Big frameworks like Rails can dump the schema for you in a form you can load in a second instead. Have a separate job that runs in parallel, applies all the migrations, and verifies the output against the schema; all the other jobs just load the schema and use that.
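To make the balancing point concrete, here's a rough Python sketch of the kind of partitioning parallel_tests and similar tools do: assign test files to N parallel nodes using timings recorded from a previous run. The file layout, the timings file, and the CI_NODE_INDEX / CI_NODE_TOTAL variables are illustrative placeholders, not any particular provider's API.

    import glob
    import json
    import os

    def partition(test_files, timings, num_nodes):
        """Greedy longest-first assignment: slowest files get placed first,
        each onto whichever node currently has the least total estimated time."""
        default = (sum(timings.values()) / len(timings)) if timings else 1.0
        ordered = sorted(test_files, key=lambda f: timings.get(f, default), reverse=True)
        nodes = [{"files": [], "total": 0.0} for _ in range(num_nodes)]
        for f in ordered:
            target = min(nodes, key=lambda n: n["total"])
            target["files"].append(f)
            target["total"] += timings.get(f, default)
        return nodes

    if __name__ == "__main__":
        # Timings recorded from a previous run, e.g. {"tests/test_api.py": 42.1, ...}
        timings = {}
        if os.path.exists("test_timings.json"):
            with open("test_timings.json") as fh:
                timings = json.load(fh)
        node_index = int(os.environ.get("CI_NODE_INDEX", "0"))
        node_total = int(os.environ.get("CI_NODE_TOTAL", "4"))
        files = sorted(glob.glob("tests/test_*.py"))
        chunk = partition(files, timings, node_total)[node_index]
        # Print this node's share; a wrapper would hand it to the test runner.
        print(" ".join(chunk["files"]))

Dedicated tools do this better (and record the timings for you), but the principle is the same: spread the slow stuff evenly so no single node drags the whole build out.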
Every project I've worked on at Google and Microsoft has used cloud CI services at least in part - travis, etc. My current team had a custom jenkins instance + agents we maintained and we've phased them out in favor of the cloud. It just scales better and the time our team spent maintaining agents can now be spent on writing code and fixing bugs (we do still have people who do work to integrate with the cloud CI services, but it's considerably less)
While I see the mismatched incentives here, I believe they have enough other incentives to make builds faster. I gladly switched over from the concurrency-based pricing to per-minute pricing on CircleCI when it became available. This ended up being significantly cheaper for me, and I never have to worry about how many builds I'm running in parallel.
The interesting part is the changes for FOSS.
Basically, they significantly cut FOSS usage by instead offering free credits and then reviewing projects on a case-by-case basis if you run out.
It's a big change, but honestly, I don't know how they could keep providing free computation for any public repo, no questions asked, as they have up until now.
I see a lot of mentions of "travis-ci.com" in this post, are the OSS changes meant to impact travis-ci.org as well, or just OSS projects that are using travis-ci.com for some reason?
All OSS projects are supposed to move to the .com domain by the end of the year. They plan to shut down the .org domain on December 31st.
Source: https://mailchi.mp/3d439eeb1098/travis-ciorg-is-moving-to-tr...
Thanks - I haven't used Travis for much in a while, so I didn't see this!
I'm okay with the change as long as they remember to send me a notification if/when I run out of credits.
Think the wiser options for OSS are Github Actions & Gitlab CI going forward. I have heard good things about Azure pipelines too.
After Travis CI got bought and laid off a lot of key staff, I did think the day would come when they would no longer be the de facto choice for open source, and I think this news confirms that day is today :-)
Hi, Developer Evangelist at GitLab here.
OSS projects are eligible for a free Gold plan on GitLab.com and can benefit from more CI minutes: https://about.gitlab.com/solutions/open-source/
If you are planning to continue using GitHub, you can also integrate with GitLab CI/CD for your OSS project: https://about.gitlab.com/solutions/github/
I've been using GitLab CI and Travis CI interchangeably. But the recent cut to 400 pipeline minutes on GitLab CI might make it less attractive than Travis CI, even after the current change.
EDIT: Travis seems to take 10 credits per minute, with 1000 free credits per month. So it seems to be the least generous tool with 100 minutes per month vs 400 (GitLab CI) and 2000 (GitHub Actions).
Where did you see the "per month" for Travis? The 10,000 free credits are a one-time trial from what I can see.
This is from a Travis CI mail I got today:
> You have 1,000 credits left - these will begin counting down automatically as soon as you run your first build.
> You can use your credits to build on both private and open-source repositories using Linux, macOS, and Windows OS.
> 1,000 credits will be replenished automatically monthly. Additional Credits purchase is not available for Free Plan.
One of my colleagues got that email. It seems to be saying something quite different to the blog post and the billing documentation, and I don't understand how they fit together.
Even if the email is accurate, which is the better scenario, though, 1000 credits is 100 minutes of Linux build time (or 50 on Windows, 20 on Mac). Which is not much for any but the smallest projects.
I think that for Open Source projects the better CI is still SourceHut. They're not currently limiting any of their resources for free accounts, relying on the funding from paid accounts to pay for their resources.
I find this pay it forward model better than any other option.
Thanks for sharing! Good to know. Had not looked at their offering before.
I've been postponing moving to Github actions, but I guess now I have no option!
I use the free travis to build https://github.com/purpleidea/mgmt/
It's 100% open source and there's almost no income (some github sponsors) for it. I guess welp we'll have to switch away, or have to constantly send "ask" tickets to get free credits :/
Many of us will be in this position.
It seems GitLab CI/CD is beating Travis CI on pretty much every metric now, except the weirdness of only using GitLab for CI/CD and not code hosting, if you prefer to use GitHub for that. Am I missing something?
(GitLab supports using its CI/CD on a non-GitLab repo just fine, but it can cause some initial confusion.)
External GitLab CI is funny because it requires you to replicate the repo to their side and then glue everything together (build status, PRs, etc.) by yourself.
Then they moved repository replication to the paid plan ($20/month/user).
For a simple 10-person team, that's $200 a month, exclusively to get access to repository replication!
I have my own runners, so there's no reason to pay for the paid plan. I just end up hacking around some replication script.
See https://medium.com/@PedroGomes/mirror-repository-to-gitlab-f...
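For the curious, the core of such a replication script can be tiny. A minimal Python sketch (the repository URLs and the GITLAB_TOKEN variable are placeholders, and the token-in-URL form depends on how your GitLab instance authenticates):

    import os
    import subprocess
    import tempfile

    # Placeholder URLs and token variable -- substitute your own.
    SOURCE = "https://github.com/example/project.git"
    TOKEN = os.environ["GITLAB_TOKEN"]
    MIRROR = f"https://oauth2:{TOKEN}@gitlab.com/example/project.git"

    def run(*cmd, cwd=None):
        subprocess.run(cmd, cwd=cwd, check=True)

    def mirror_once():
        """Clone the source repo with all refs and force-push them to the GitLab mirror."""
        with tempfile.TemporaryDirectory() as workdir:
            run("git", "clone", "--mirror", SOURCE, workdir)
            run("git", "push", "--mirror", MIRROR, cwd=workdir)

    if __name__ == "__main__":
        mirror_once()  # invoke from cron, a webhook handler, or a scheduled CI job

The build-status / PR glue is still the annoying part; the mirroring itself is just git.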
> Then they moved repository replication to the paid plan ($20/month/user).
It looks like "pull" replication is available on the $4/mo. plan, per their documentation.
Interesting. I didn't realize you can use GitLab CI/CD on non-GitLab repo. Do you happen to have a documentation on this (my searches came up empty)?
Travis CI has a unique feature: support for ARMv8, PowerPC, and SystemZ build environments. GitHub doesn't have this multi-platform build environment.
You can still use that for free if you disable macOS and don't use too many parallel build jobs. I just removed all of the macOS jobs from all my repos. Even a build triggered by a PR falls under the new pricing model.
https://github.com/uraimo/run-on-arch-action
It runs on QEMU, so it's not that fast, but it works.
Is GitLab CI/CD substantially better than GitHub Actions, do you know?
From my experience, writing a custom driver for GitLab Runner is a lot easier if you need to support a custom setup.
They also differ heavily in how they work. With GitLab, if you want to run jobs on, for example, ephemeral KVM virtual machines, you would have an agent on the host machine (or several of them), which would receive a job, spawn a VM, execute commands in it, and terminate it at the end.
With GitHub, you would have to spawn VMs in advance and launch their agent in each of them. When an agent dies, you have to manually kill the VM and spawn a new one.
This means there is no easy way to spawn VMs on demand, or to spawn different VMs with different configurations depending on the job label.
From my experience of using both, I prefer GitHub Actions and consider it better. Just a personal opinion from having used both.
I've used both, and it seems like both work equally well for the way I want to use it. I prefer to use GitLab because it's open source. Admittedly I haven't used the open source version of GitLab much, but it's nice to know that it's there. It includes CI/CD:
> CI/CD is a part of both the open source GitLab Community Edition and the proprietary GitLab Enterprise Edition
https://about.gitlab.com/stages-devops-lifecycle/continuous-...
As I start to use more advanced features of CI/CD, I might get a better idea of how they compare. I am aware that GitHub Actions has a marketplace but I don't really want to set up my builds that way. I would rather write the steps myself, pulling in docker images and packages from package managers like npm, PyPI, and apt as needed.
Wow, OK, this is a pretty big change for most open source projects using Travis CI right now. I think most Mac and iOS apps are going to be hit by this, since I believe most major projects are using Travis at the moment.
Personally, I really like that Travis offers a variety of architectures; I’m currently running binutils on it: https://travis-ci.com/github/saagarjha/binutils-gdb. I suspect this might be a substantial portion of the CI it sees, and it’s been great that it’s been free so far. I am unsure if it will still stay up with these changes. But, I’m sure running these kinds of builds can’t be cheap at all.
For OpenFaaS (and other projects - k3sup, arkade, inlets etc) paying for Travis isn't going to be an option, as they are open source and unfunded. The Travis platform means being able to have relatively portable CI that almost never needs to change because of the CI platform. Bring a Makefile and use it locally and in the build pipeline.
Moving off Travis to something like GitHub Actions will cost a significant amount of time and the opportunity cost is ridiculous.
This along with the Docker Hub limits makes a very strong case for GitHub's strategy.
> The Travis platform means being able to have relatively portable CI that almost never needs to change because of the CI platform. Bring a Makefile and use it locally and in the build pipeline.
What if you defined your GitHub actions to call out to shell commands? Then there's very little that's specific to GH actions.
Actions are nice but personally I'd rather have my CI as generic as possible, especially for the reason you mentioned which is to be able to run things locally if needed.
I'm wondering if there's still a solid reason to use Travis for new projects. I can't be bothered to move my current builds from there to GitHub Actions, but for future projects GitHub Actions seems much more attractive. I think both pale in comparison to GitLab CI, however; it's a pity GitLab is less popular.
Not really, and there hasn't been for a while, assuming you're hosting your code on GitHub or GitLab. Both offer a service comparable to Travis, and both are much better backed at this point, after the acquisition of Travis and the gutting of their staff.
The only thing I sometimes miss from Travis was the ability to run "debug" builds, but most of the time this was necessary only because of how omnibus their VMs were and oddities of pathing.
I like GitLab, and I tried them for a year before moving off their CI a couple of years back. The main reason was that their CI was slow and had some reliability problems at the time. Hopefully they have resolved those by now, as I would like to see more adoption of their service for sure. I'll give them another shot soon :-)
I recently found a comparison¹ which was interesting. Of course, it's gitlab's so take that as you will.
[1] https://about.gitlab.com/devops-tools/github-vs-gitlab/#comp...
Deeplink to the GitHub Actions vs. GitLab CI part of the decision kit https://about.gitlab.com/devops-tools/github-vs-gitlab/ci-mi...
I moved to GitHub Actions earlier this year and never looked back.
Travis is changing from growth mode to milk-the-melting-ice-cube mode.
That’s why they are turning off free foss builds.
That's been my suspicion ever since they fired everyone remotely clueful who worked there.
Can anyone comment on how these changes are going to impact projects like conda-forge, which use the free compute time to build binaries? My reading is these projects are what's being targeted. Perhaps there are more egregious uses of their servers?
It looks like conda-forge currently uses Travis only for ppc64le builds. I'm not sure if any other free CI services offer PowerPC support - if not, it's likely to impact PowerPC support (also in many other open source projects). However, an IBM blog post in July said they were providing the servers through Travis free for open source projects on GitHub, so possibly they will agree some way to continue to offer it.
Travis "free drugs" mode worked for a while. I got used to it via my open-source projects and ended up having a couple of clients pay for it for some of my commercial work.
Since their acquisition they have just been going downhill. I've been planning to move all my projects to GitHub Actions for a while. This is "the drop that overflows the glass," as we say in Spanish; I'm putting everything aside tomorrow to migrate all my public and private projects to GitHub Actions.
TL;DR: Travis CI is no longer free for open source projects. Instead you get a free trial good for 1k build minutes, and can email them to beg for more when those run out.
I wonder if there are actually higher costs associated with macOS, or if this is just segmentation based on the fact that macOS users are more willing to pay more for software.
Building Mac build farms is way more of a pain and absolutely costs more. There are a few services dedicated to making it easier, but Apple isn’t trying hard to make people’s lives easy in this.
At least they're not being litigious about small shops using Hackintosh VMs for CI, because sticking a bunch of Mac Minis in a rack technically works but seems super silly.
Travis CI and GitHub both use MacStadium's infrastructure for macOS builds, not their own.
That's fascinating – I always wondered how exactly Travis was running their Mac OS builds.
Based on MacStadium's public pricing, which is $150/month for the base model of the latest Mac Mini, this cannot be cheap for Travis, especially given they offered (past tense) Mac OS builds for free.
Not surprised to see them start charging more, but it's coming at exactly the wrong time given that GitHub (with Actions) is now offering a product that's a fair bit better, and it's still free.
MacStadium also offers some types of virtualized macOS environments, even a Kubernetes-on-Mac offering. So I'm guessing it's not all individual Mac minis running these builds... at least I hope not!
You can legally only run macOS on Apple hardware, and that's expensive.
Also, there's a lack of dedicated server products.
I loved how imgix solved this: https://photos.imgix.com/racking-mac-pros (definitely not cheap, but super slick)
Does that actually make sense, or is it just another Engineering team doubling down on their poor early architectural choices ("or lack thereof")?
My understanding is that they needed some of the functionality in CoreGraphics, thus had to use Macs.
Which, if it allows them to get better quality results with less engineering time, could totally turn out to have been a sensible business decision.
(no idea beyond "could", but it does at least seem plausible)
The latest Mac Pro is sold in a rack-mount version.
Looks like they also reduced the speed of the builds, so their 1000 minutes are getting wasted even faster.
On my repositories, setting up NPM now takes around 120 seconds instead of the 20 seconds it took a month ago, and the time for npm run build has increased from 7 seconds to 20 seconds...
The same build that took 2 minutes last month takes 4-5 minutes today
Finally! This change gives me necessary motivation to switch to fully open source alternatives (probably Drone).
Thank you, I didn't even know about Drone CI until I read your comment! It looks interesting; I'll definitely check it out.
If Apple doesn't step up now to fill the gap, I'll have to stop supporting the osx/darwin platform for all my projects. All five of my personal Apple laptops stopped working some time ago already.
Now Apple looks like a legacy platform similar to Intel.
Looks like I finally have to bug GitHub to upgrade my GH Actions from beta to the current schema.