Changes to Docker Hub Autobuilds

docker.com

66 points by bnr 5 years ago · 52 comments

WnZ39p0Dgydaz1 5 years ago

Docker, the company, is such a sad story. They have such impactful technology but completely failed to monetize it and lost multiple revenue streams to competitors. It shows how hard "open source companies" are to pull off. Other OSS companies have similar problems. It would be nice to find economic models where impact correlates with revenue.

  • vergessenmir 5 years ago

    They are hard for sure. Docker did make some fundamental mistakes early on, especially on community engagement and open stable APIs.

    It is a shame since I rooted hard for them in the early days.

    • nickjj 5 years ago

      I've been using Docker since 2014 and IMO they did a great job in terms of API stability from an end user's perspective.

      Lots of CLI commands from 5+ years ago still work today, and the upgrade story with Docker Compose has been gradual, painless and, most importantly, free of pressure to upgrade.

      • vergessenmir 5 years ago

        They got the developer-facing API just right, making containers accessible, but their promised plugin architecture never arrived.

        There was an arms race between them and the community to lead the story on new functionality. For example, Flannel came out before Docker acqui-hired SocketPlane for networking, while Calico was also in the works.

        There was pressure from the community for a stable plugin API to allow for external contribution, but I imagine it was ultimately against their best interests.

  • pm90 5 years ago

    Agreed. Docker (well, containers in general, but specifically Docker's tooling) made it so damn easy to get local, reproducible builds. It's fantastic and made life so much easier for me professionally as a software engineer.

    I remember the early days when Docker had just been released: all devs were super excited, but nobody had run containers "in production". AFAIK it was Mesos that was widely used for orchestrating containers in production. Docker Swarm/Compose just took too long to get there, and k8s rapidly took over and became the standard.

    • mikepurvis 5 years ago

      Admittedly I didn't follow it super closely, but I did use Fig when that was a thing, then tried the initial Docker Compose, and later came back to Compose again.

      It has always felt way too imperative: too much of a sense that you were issuing commands at a system rather than declaring a desired end state. Kubernetes got this right from the get-go.
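
      To make the contrast concrete (a rough sketch; the image and names are placeholders, not from either project's docs): with plain Docker you issue one-off commands, while with Kubernetes you hand the system a desired state and let it reconcile.

        # Imperative: "do this now", and re-issue it yourself when things change:
        #   docker run -d --name web -p 8080:80 --restart=always nginx:1.19
        #
        # Declarative: "keep three of these running"; Kubernetes converges
        # the cluster toward this spec whenever reality drifts from it.
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: web
        spec:
          replicas: 3
          selector:
            matchLabels:
              app: web
          template:
            metadata:
              labels:
                app: web
            spec:
              containers:
                - name: web
                  image: nginx:1.19
                  ports:
                    - containerPort: 80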

    • ridv 5 years ago

      The history of the relationship between Mesos and Docker is definitely an interesting one. If memory serves right, Mesos was not keen on supporting Docker as a containerizer. The devs wanted to stick to improving the Mesos containerizer.

      In the end, the community was so vocal about Docker being supported in Mesos that it happened, but the end result was not stellar by all accounts (and a bit of a nightmare to deal with on the framework side to boot).

      I'm not privy to what was going on at the time, since I was just part of the larger Mesos community, but looking back, I can't help but wonder what would have happened if they had collaborated instead.

  • imiric 5 years ago

    I'm still rooting for them, specifically for Swarm which is a breath of fresh air for small and personal deployments. Though tooling around Kubernetes has improved in the last couple of years and it's much easier to get started now, the system still has a steep learning curve compared to Swarm.

    • KurtMueller 5 years ago

      I'm currently learning both Docker Swarm and Kubernetes (at the same time) and I'm a bit overwhelmed by Kubernetes (VS Code's extension and Kube's documentation help with that). I just want to use Azure Kubernetes Service to deploy my Rails app...

      I'm also trying to learn Docker Swarm mode, and both the community and the documentation are not nearly as comprehensive as Kubernetes'. And Docker's official documentation and tutorial have me provisioning 3 nodes on a cloud provider instead of just booting up a swarm on my local machine.

      And then I see any number of combinations of Docker Swarm with tools like Terraform or Pulumi. It's just all so overwhelming.

      I think I'm going to keep banging my head with Kube and its super verbose manifests until I finally get it.

      • imiric 5 years ago

        It's been a while since I've read the Swarm documentation, but it was fairly comprehensive from what I remember. If you're referring to this tutorial[1], note that you don't need to use a cloud provider, and you can also do it with local VMs.

        Actually, you don't even need a cluster of machines. While Swarm can easily scale out, you can get started with a single machine and figure out the clustering later. Even on a single machine there are benefits to using Swarm over plain Docker (Compose): stacks, secrets, advanced networking. So you can test it out there and add more machines as you need them.

        One thing I'd suggest is to first get familiar with Docker itself, since Swarm simply builds on these concepts and adds orchestration. Use `docker run`, understand volumes, images, permissions, user namespaces, cgroups, etc. Then move on to Docker Compose and get familiar with the configuration, services, etc. And then finally pick up Swarm, which even uses the same Docker Compose configuration file (with slight variations), so it should be fairly straightforward at that point.
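
        To make that last point concrete (a minimal sketch; the service name and image are placeholders): the same Compose file works in both worlds, and the Swarm-only parts live under `deploy:`, which plain Docker Compose simply ignores.

          # docker-compose.yml
          version: "3.8"
          services:
            web:
              image: nginx:1.19
              ports:
                - "8080:80"
              deploy:              # used by Swarm, ignored by docker-compose
                replicas: 2
                restart_policy:
                  condition: on-failure

          # Single-node "cluster", no extra machines needed:
          #   docker swarm init
          #   docker stack deploy -c docker-compose.yml mystack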

        Though honestly, learning Swarm in 2021 might not be a good investment, since Kubernetes is clearly the market leader, and I wouldn't be surprised to read a sunset notice about it, even though Docker Inc. is still supporting it. So you're probably making a good decision to stick with k8s.

        Good luck!

        [1]: https://docs.docker.com/engine/swarm/swarm-tutorial/

howolduis 5 years ago

Cryptocurrencies are ruining hardware availability (both GPUs and now storage devices), the environment, and now free cloud services. When are we gonna admit it?

EDIT: typo

  • kordlessagain 5 years ago

    When the sun is wrapped in a Dyson sphere covered in solar panels made from Jupiter’s mass?

    • Aissen 5 years ago

      A single Dyson sphere is not enough, no serious crypto bro would let go of the opportunity to build a Matrioshka brain.

  • Cthulhu_ 5 years ago

    > When are we gonna admit it?

    Who is denying it? Who is "we"?

  • brennerm 5 years ago

    Guess you mean "ruining"?

  • Jleagle 5 years ago

    By cryptocurrencies do you just mean Bitcoin? Most new cryptos use proof of stake, which doesn't use up GPUs.

    I haven't heard it blamed for using up hard drives before.

raesene9 5 years ago

Sorry, but not really surprised to see this go (for free accounts).

TBH I'm surprised that they managed to keep this going as long as they have. Giving away free compute is a tricky thing to make financially viable.

TimWolla 5 years ago

Too bad. IMO the big benefit of automated builds was that Docker Hub was linked to the source repository and showed the original Dockerfile, so one could more easily verify what exactly a Docker image contains (provided one trusts Docker to correctly build these automated builds).

gtirloni 5 years ago

If this means a better experience for paying customers, I'm all for it. I have no problem with companies charging for their services.

  • ekidd 5 years ago

    Ironically, one of the results of this is that paying customers may not have access to as many open source base images.

    For example, I maintain a Docker image that builds statically-linked Rust binaries for Linux. It includes static versions of several C dependencies. It's useful mostly because setting up cross-compilation is really tricky and the details change occasionally.
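
    The pure-Rust part is the easy bit; here's a minimal sketch of it (version numbers are illustrative, and the real image's value is in the musl builds of the C libraries, which this leaves out):

      # Dockerfile (sketch): static Linux binaries via the musl target
      FROM rust:1.50
      RUN apt-get update && apt-get install -y musl-tools \
       && rustup target add x86_64-unknown-linux-musl

      # Then, inside the container:
      #   cargo build --release --target x86_64-unknown-linux-musl
      # The binary in target/x86_64-unknown-linux-musl/release/ is fully
      # static and runs on any Linux, even in FROM scratch images.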

    I've been keeping it up to date for many years, and it has about 750k downloads (which is pretty decent for a compile-time-only Rust image). I don't mind maintaining it as a volunteer service for people who use it. But there's a good chance that I'll simply retire it, and that any paying Docker customers will need to figure out how to cross-compile weird C libraries on their own.

    I'm not complaining. Docker owes me nothing, and I can just build images for my own use.

  • dividedbyzero 5 years ago

    I assume their free tier is (also) aimed at the open source community; thus providing images for open source software just got a bit harder and possibly costly.

    If everyone and everything went paid-only, you'd see a lot less open source software get made. I may be ready to sink a bunch of time into that sorely-missing piece of software I know how to write, but I wouldn't be willing to throw a bunch of money in as well, and I'm sure lots of others would draw a line there.

    Same thing with open-sourcing things created at work; it's a hard sell already, but with an ongoing financial commitment, it's not going to happen. So personally I'm very happy that lots of companies still support this sort of thing with free services, and I really hope this kind of crypto cancer won't kill all of them eventually.

SlimyHog 5 years ago

This seems reasonable to me as a free user. I thought it was rad that I got free compute from dockerhub, but CPU cycles aren't free.

zwass 5 years ago

Do folks here have ideas of what will become the best place to host public Docker container images? Between this and the earlier changes to rate limit pulls, Docker Hub no longer seems like the ideal venue for reputable public images.

Should we look into implementing our own registry with AWS ECR or similar?
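
For reference, a sketch of what that could look like with ECR Public (assuming the public repository already exists and <alias> is your registry alias):

  # ECR Public auth is issued out of us-east-1:
  aws ecr-public get-login-password --region us-east-1 \
    | docker login --username AWS --password-stdin public.ecr.aws
  docker tag myimage:1.0 public.ecr.aws/<alias>/myimage:1.0
  docker push public.ecr.aws/<alias>/myimage:1.0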

  • captn3m0 5 years ago

    ECR Public, but Quay[0]'s free plan is still without limits. Rooting for them against the crypto-miners.

    >Yes! We offer unlimited storage and serving of public repositories.

    [0]: https://quay.io/

  • mmbleh 5 years ago

    Interesting reaction. This could also be interpreted as making it _more_ reputable, by removing abuse and cruft, allowing engineering time to be focused on things that provide value to end users.

    • zwass 5 years ago

      I'm not sure if I misunderstand the limits[1], but I want my customers to be able to pull the image as many times as they need. While this may help with the concern about quality of images, it still leaves the rate limiting unresolved.

      [1] https://www.docker.com/increase-rate-limits

  • Hamuko 5 years ago

    My money's on AWS ECR Public.

nirui 5 years ago

Sad. I was hoping for multi-arch support (for the free tier; I don't make money from my open-source image), and got this instead.

I wish the company could eventually find a way to make more money. Kubernetes is too heavy to run, while Docker Swarm is rather reasonable. I guess there is a market gap?

  • detaro 5 years ago

    It's hard to make more money if all people expect is more free service.

    • nirui 5 years ago

      I see the dilemma there.

      Most people I know treat Docker.com as some kind of infrastructure service; it's like GitHub/GitHub Actions plus a package manager (registry). People who use this kind of service expect a fairly feature-rich free tier for individuals, and a more advanced paid version for those who make money from the images/products they upload.

      So I think the nature of the service environment basically guarantees that most people who use Docker Cloud will end up as free-tier users. I guess this is something that needs to be cracked by Docker Inc.

      • j1elo 5 years ago

        If you learn to program with Node.js, one chapter of your tutorial will show that NPM is "THE repository for Node packages".

        For me, it is part of what makes some of these technologies leaders in what they do. If Node.js has NPM, Rust has Crates.io, and Debian has its repos, why wouldn't Docker have Docker Hub? From my POV as a 100% user of these technologies, it just seems natural (or at least that's what they have made us expect).

        What actually doesn't seem natural is if NPM suddenly deleted old package versions mere months after they stopped getting downloads. I know we're talking about containers, which take more disk space than an NPM package... but still, that policy change from Docker Hub felt odd. Now we have to worry about regularly refreshing the images we care about.

        (regarding the mantra "you should rebuild them anyway", well, when you want to reproduce a bug, you don't need to rebuild old packages in other repositories, right?)
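
        The defensive move, I guess, is to mirror anything you might need later to a registry you control (names here are placeholders):

          # Keep a copy of an upstream image before Docker Hub expires it:
          docker pull somevendor/sometool:2.3.1
          docker tag somevendor/sometool:2.3.1 registry.example.com/mirror/sometool:2.3.1
          docker push registry.example.com/mirror/sometool:2.3.1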

  • mulltea 5 years ago

    GitHub's Container Registry has multi-platform support, and I haven't encountered any limits. I believe the same is true of Google's Container Registry. There's not much reason to sacrifice features to use Docker's own registry.

    • CR007 5 years ago

      I started using Docker Hub recently and I can't help saying how outdated it feels. I just use it because it's the default.

      Every other registry I've tried works better.

  • bdcravens 5 years ago

    It's a managed service with all the lock-in that goes with that, but ECS seems to hit a nice sweet spot: what you need from Kubernetes without the complexity.

    HashiCorp Nomad is an open-source option.

j1elo 5 years ago

Maybe it's possible to detect that kind of activity, or to identify processes that are written to mine crypto, and then kill those processes?

I'm thinking of behavior analysis, like what some network firewalls do... not sure if that's even feasible at the process level, though.
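
Naively, a first pass could be as crude as flagging containers that peg the CPU for long stretches (a toy sketch; a real miner would throttle itself to dodge exactly this):

  # Flag running containers with suspiciously sustained CPU usage:
  docker stats --no-stream --format '{{.Name}}\t{{.CPUPerc}}' \
    | awk -F'\t' '{ gsub(/%/, "", $2); if ($2 + 0 > 95) print $1, "looks miner-like" }'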

  • nrmitchi 5 years ago

    My understanding is that it's totally feasible, but ends up being a game of cat-and-mouse (just like most security projects).

    The question is whether or not it's worth the effort (for Dockerhub). If they were leading in their space, and had a successful business, then maybe, but in the current situation it would be a large investment in providing something for free.

    Remember that this change only applies to the free plans, not to anyone paying.

  • barkingcat 5 years ago

    Most likely not worth it. The product being impacted is free to begin with, so the more the company invests in building custom filters, the deeper the net loss, and it doesn't help customers (free or paying) either. (I.e., the cryptocurrency miners will never become paying customers, because they exploit the platform precisely because it's free, so building filters means spending money to screen out people who were never going to be customers anyway.)

    I'd rather docker spend engineering time on improving their paid service to legitimate customers.

yabones 5 years ago

I think Docker (the company) just admitted defeat to GitHub. Wow. It's amazing how they blew such an incredible lead over the last decade.

devmor 5 years ago

Normally, I would take this at face value. But from the company that just started charging users to let them skip upgrades, it feels like more nickel-and-dime tactics to me.

  • palijer 5 years ago

    Not to me. A company had a problem (version spread) so they fixed it.

    Then the community asked for a feature, the company implemented it at a price point.

    • devmor 5 years ago

      Or from the other end:

      The users had a choice, so the company took it away.

      The users wanted their choice back, so the company held it at ransom.
