Red Hat Introduces Open Source Project Quay Container Registry (redhat.com)
ProjectQuay.io landing page is now live! https://projectquay.io
This is really great!
I recognize your username because I met you at Kubernetes Community Day AMS, after Helm Summit, when you told everyone your username from up on the stage...
By the way, I loved your talk "Convergence of Communities" and everything about the Jellyfish modeling. For the benefit of anyone who maybe does not know who you are, or why you posted this one-liner: this is Diane, Director of Community Development at Red Hat, and you can find that talk on YouTube.
So, thanks for doing this!
From the basic installation page:
https://docs.projectquay.io/deploy_quay.html
"For a Project Quay Registry installation (appropriate for non-production purposes), you need one system (physical or virtual machine) that has the following attributes:
Red Hat Enterprise Linux (RHEL): Obtain the latest Red Hat Enterprise Linux server media from the Downloads page and follow instructions from the Red Hat Enterprise Linux 7 Installation Guide.
Valid Red Hat Subscription: Obtain a valid Red Hat Enterprise Linux server subscription."
So, is it truly tied to RHEL and a subscription, or is that page just making me FEEL that way?
Annoying either way. :/
No Red Hat subscription will be needed to use Project Quay software. The first pass of the Project Quay documentation comes from the downstream Red Hat Quay docs. We are going to strip out references to RHEL and other Red Hat features (which should have been removed earlier) over the next few days. Sorry for the inconvenience.
The RHEL version linked is 7.5, not 8.1. They might just have forgotten to update the doc.
The docs right now refer to using `docker` as well, but `docker` isn't officially supported in RHEL 8+.
So guessing the docs will slowly shift to RHEL8 and podman.
Prerequisites: CPUs: Two or more virtual CPUs RAM: 4GB or more
Why does a container registry need so many resources?
First, these prereqs are about what you could get on a netbook in 2012 and on a modern-day cell phone.
Second, I think it would help to know why you think a container registry wouldn't need a moderate amount of resources.
I don't necessarily disagree that the resources could be lower at the minimum (and in fact, I recall they are quite a bit lower than this when running it on your laptop without any load), but is this really anything unexpected?
It's written in Python so it's not going to be as efficient as Go or C++ but it certainly isn't Java levels of resources being requested here.
> First, these prereqs are about what you could get on a netbook in 2012 and on a modern-day cell phone.
High resource requirements mean that I need to spend more on compute, whether that means paying a cloud provider for a beefier instance, or spending more money on hardware and electricity.
2C/4G is hardly "beefy". If that is too much to run, why not use the quay.io service and not worry about self-hosting your registry?
"640 kB ought to be enough for anybody"
(yes, that was in reference to a PC, not a server.)
>about what you could get on a netbook in 2012 and on a modern day cell phone.
Spinning up a cloud VM with 4GB for a year is a bit more than beer money though.
Reminds me of installing GitLab. Requirement: 8GB... for a basic install. After a bunch of crashes (on an 8GB VM) I went with vanilla git.
4GB from Linode is $20/month. Not bad if you can actually get use out of it.
If you want to just test it, why not use a VirtualBox VM or similar?
Without getting all SJW on you I think it's important to highlight that 20 USD/pm is still quite a lot to most people.
To illustrate:
My employer gives me $150 in cloud credits: $20 seems like pocket change.
Me paying out of pocket... $20 is like 2x Netflix, but fine, whatever.
Me being part of the other 95% of the world...unaffordable.
But really my main objection here is: why? To borrow my GitLab example... how does adding an interface on top of core software push the requirement from zilch to 8 GB? Like, wtf is going on underneath that bonnet? It's not doing the heavy lifting - git is doing that. So for a SINGLE-user install... what's that 8GB doing? 8GB worth of UI? It's installed via the CLI, for god's sake...
Well the language choice does make a big difference.
Python has the GIL, meaning parallelism comes at the cost of increased memory and CPU consumption, because it's typically going to spin up multiple copies of the Python interpreter and your application. This is also true of Ruby, so it applies both to GitLab and Quay.
Quay uses Gunicorn, which uses a pre-fork model, which means spinning up multiple processes of Quay and can lead to increased CPU and memory requirements. Obviously you need multiple cores to take proper advantage of multiple processes, and multiple processes each result in multiple copies of the same application running, thus more memory.
This is kind of a fact of using something like Python or Ruby unless you design your application to use an event-driven architecture with something like Twisted, asyncio, Tornado, etc. Even then, unless you have async versions of the libraries you're using (e.g. your DB driver), that won't necessarily remove your need for multiple processes.
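As a back-of-the-envelope sketch of why pre-forking inflates the footprint: Gunicorn's docs suggest roughly (2 × cores) + 1 workers as a starting point, and each worker is a full copy of the interpreter plus the loaded app. The per-worker memory figure below is a made-up assumption for illustration, not a measurement of Quay:

```python
# Rough sketch of how a pre-fork server's memory scales with workers.
def suggested_workers(cores: int) -> int:
    # Gunicorn's documented rule of thumb: (2 * cores) + 1
    return 2 * cores + 1

def rough_memory_mb(cores: int, per_worker_mb: int = 150) -> int:
    # 150 MB per worker is a hypothetical figure for illustration only;
    # real numbers depend on the app and its loaded libraries.
    return suggested_workers(cores) * per_worker_mb

cores = 2  # the documented 2-vCPU minimum
print(suggested_workers(cores))  # 5 workers
print(rough_memory_mb(cores))    # 750 (MB), before caches, DB, scanner, etc.
```

So even a modest worker count plausibly eats a good chunk of the 4GB minimum before the database or Clair enter the picture.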
Because it's more than just UI? To stick with your example, where in OSS git do you get a task runner that executes your test pipeline? Do you think that might use some RAM to coordinate? Yes. Where do you get an audit log exporter in OSS git? Etc.
Oh, this isn't even anything pipeline-related yet. It won't even install.
Normally I'd assume it's not working because I'm doing something wrong. But this is their omnibus installer - it's a single line.
So then use Docker Hub.
Might have a look. A couple attempts and various error messages later I'm not super enthusiastic about gitlab though tbh.
Unfortunately that seems to be common. Artifactory, for example, has similar requirements. Guessing that running full free-text search and analysis on the artifacts is the main culprit.
https://www.jfrog.com/confluence/plugins/servlet/mobile?cont...
It's java so you have to be generous with memory for the JVM.
Quay isn't just a container registry for hosting images. It also includes security vulnerability scanning and image building, etc.
Most of it will be used as cache by the kernel if you don't use the image building and Clair on the same host.
Newbie question: how does Quay differ from Docker Hub?
One of the nicer features is that they offer an "encrypted password": you log in to Quay and click the "generate encrypted password" option in your user preferences. Then, instead of hardcoding your plaintext password into your docker config JSON, it puts in the encrypted password, which is only applicable to Quay.
For those that use LDAP authentication, this makes for a much smaller attack vector.
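For context on why this helps: `docker login` stores whatever credential you give it, base64-encoded, in `~/.docker/config.json`. A sketch of what ends up on disk (the user and token values here are hypothetical):

```python
import base64
import json

# docker stores credentials as base64("user:password"), keyed by registry.
# With Quay's generated "encrypted password", this file never contains your
# real (e.g. LDAP) password, only a registry-scoped credential.
user, token = "myuser", "encrypted-token-from-quay"  # hypothetical values
auth = base64.b64encode(f"{user}:{token}".encode()).decode()
config = {"auths": {"quay.io": {"auth": auth}}}
print(json.dumps(config, indent=2))
```

If that file leaks (laptop theft, a stray backup), the attacker gets a Quay-only credential rather than a directory password that works everywhere.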
The per-team "organizations" feature is very nice and allows you to give teams their own flexibility while still running things within your own firewalls (on-prem or in a VPC). It is an alternative to Docker Hub with a lot of really nice features.
The ability to do scheduled mirroring of images from other registries (such as docker hub) and replication between different instances of quay is also really beneficial.
Disclaimer: commercial quay enterprise user for some time.
Quay also integrates very well with OpenShift. It works fine stand-alone too, but if you're already using OpenShift it's worth looking into.
DISCLAIMER: I work for Red Hat Consulting focusing on OpenShift.
In fact, a version without the Container Scanning bits and some of the user management is the default internal registry in OpenShift 4.x.
Quay supports Podman & rkt, not just Docker images.
I have found it significantly easier to use, myself. Every time I try to use Docker Hub I get very frustrated very quickly and start trying out the competitors again.
Love this. After Red Hat acquired Ansible I thought for sure they would cave on their open source principles, but they didn't. They really believe in open source, and we are all the better as a world for it.
Disclaimer: I work for Red Hat but have been a fan decades longer than I've worked there.
They believe in making money.
The issue is that OSS is the only way to compete with the cloud. Quay would die without 3rd party contributions vs Amazon, Google, Microsoft.
We happily make money without sacrificing our Open Source/Free Software principles. Does that bother you?
(Disclaimer: Red Hatter for almost 15 years. I never doubted for a second that we would open source what we buy. Ansible, 3scale, CoreOS: we have a long history of sticking to our principles.)
I hate to break it to you, you are no longer a red hatter, you are an ibm’er and at the mercy of their board of directors. You’ll remain open source right up until the moment they decide that’s not the best way to utilize their recent acquisition.
I hope for all of us that doesn't happen anytime soon, but IBM acquired Red Hat, not the other way around... plenty of Sun folks learned that the hard way.
I bet the same has been said about VMware, GitHub and LinkedIn. They are all still doing pretty well today, last I checked. And they weren't acquisitions that required the acquiring company to get investors in to close the deal.
The long history of corporate acquisitions disagrees with you. Those are outliers, not the norm.
I'm new to IBM (last four months), but it's clear that everyone knows not to mess with what makes Red Hat special.
Red Hatter here. I'm very glad to hear this! Thanks for saying it.
P.S. I also agree with other people posting here. Red Hat is chock full of people who are absolutely looney about Open Source. It's at the heart of everything we do.
I think that IBM knows if they get their hands dirty in Red Hat, they won't have much ROI from their investment.
I think you are a little too confident in IBM's management.
Is there any reason to believe IBM hasn’t been a good open source citizen?
Arguably a lot of the open-source progress that we are seeing today, especially in the enterprise, is a direct result of IBM's decades-old investments in tools like Eclipse, which was probably the first enterprise-grade open source development software, but more importantly, of IBM promoting open source in the enterprise in the late 90s and early 2000s, at a time when MS FUD about Linux was at an all-time high.
Didn’t they also support Linux development before it was fashionable?
Yes, they did.
Ansible was never Red Hat's to open source; the only open sourcing in that space is AWX, and it lacks release tags, working migrations, or any meaningful community self-support that doesn't result in "you need to talk to us if you're doing this" on GitHub. It's a GPL'd code dump actively hostile to having a meaningful community develop around it.
> the only open sourcing in that space is AWX
AWX isn't a trivial or unimportant product. For deployments of scale it's very important. It would also have been very easy to hoard the code and not release it, so the fact that RH did is impressive to me.
Having not tried to use open source AWX I'll take your word for the state of it. That saddens me greatly, and RH should do better. RH normally works pretty hard to avoid exactly that, but nobody is perfect (not making an excuse, just acknowledging failure).
Have you tried to contribute to AWX and had a bad experience?
Not going to argue about any of your other points, but there are release tags at least; though they don't have any bearing on the releases downstream in Tower :(
I never doubted Red Hat, but I do doubt IBM. Glad to see that Red Hat's principles are still alive.
Yes, any for-profit company has to believe in making money. However I think they could make a lot more by being stingy with their code. Ansible for example had a lot of paying customers that no longer had to shell out after it went open source. I (and many people/companies I know) would likely pay for RHEL if CentOS weren't a thing. There's plenty of opportunities for a cash grab if that were Red Hat's thing.
What they need to do is to come up with a monetization model that makes more sense.
As a devops engineer, I rely heavily on Hashi and Ansible and Chef and so many other tools. However, their enterprise offerings are both too expensive and too big for what I need. I can't get my employer to just donate money, so what can I simply and easily buy to fund the effort while still getting some value?
Grafana, for example, has made this easier with paid plugins: the moment you need a bit more, you pay for it. We need to extend this model out somewhat and get these vendors to offer more premium functionality, generally in the form of charging for integrations with other closed-source software. The idea being that if you stay open the whole way yourself, you can probably do the whole thing for free; conversely, if you have money for Datadog or whatever other service, then you probably have money to attach Grafana to Datadog too.
We happily make billions in revenue and just got bought by IBM for $34B. Why should we aim for more through betraying what binds us?
If CentOS didn't exist, that customer segment would jump to OpenSUSE, Ubuntu, Debian, etc. Same story for Ansible, etc.
The real money is with the organizations that feel the need (for many good and bad reasons) to pay millions of dollars for support, consulting, and the like. They don't care about the people that are looking to save a few grand on licenses.
You know what, after thinking this through I agree with you. Reversing position publicly is always risky, but you're right I'd probably just move to Ubuntu and if I need support I'd pay Canonical. I have lots of familiarity with Cent/RHEL which is why I'd prefer that, but I could gain that with Debian/et al without too much effort.
It's possible that the existence of CentOS is partially keeping RHEL viable. Maybe Fedora would be enough to do that, but overall CentOS probably contributes to RHEL's demand, rather than detracting from it.
I've used CentOS in every gig I've been at until my current one (we use full-on RHEL here). Even if they're not paying for support it means that the industry is still thinking Red Hat; a long play.
A competing product (Harbor) is already open-source and part of CNCF. Quay wouldn't have had any future if it was to stay proprietary.
Regarding Red Hat (or IBM) truly committing to open source, I'll believe it when OpenShift 4.x is open-sourced.
OpenShift 4.x is open source. I'm guessing what you're speaking to is the fact that there is no prebuilt distribution of it that doesn't require a subscription, i.e. OKD, which is something being worked on. Clayton started off the conversation on this back in June: https://lists.openshift.redhat.com/openshift-archives/users/...
But everything it's being built with is entirely FOSS. Making OKD happen is a high priority and is being worked on.
From my understanding, most of it's been blocked on Fedora CoreOS being at a state that it can be used for OKD and just putting resources onto setting up the automation for building everything for OKD.
Remember that OpenShift 4.x fundamentally changed how OpenShift does updates, and that affects OKD a lot. Clayton's email touches on this quite a bit.
Disclosure: I work at Red Hat, on projects related to OpenShift.
> Openshift 4.x is open-source.
That's great to hear. My mistake then; the last time I opened http://github.com/openshift/origin, I saw OpenShift 3.11 even though the latest release was 4.2 at the time. From that, and given the fact that all other Red Hat products are upstream first, I concluded that OpenShift 4 was no longer open source.
> From my understanding, most of it's been blocked on Fedora CoreOS being at a state that it can be used for OKD and just putting resources onto setting up the automation for building everything for OKD.
What's the difference between OKD and OpenShift? Why does OKD use Fedora CoreOS, while OpenShift doesn't? Is it not the same code?
> Remember that Openshift 4.x fundamentally changed how Openshift does updates and that affects OKD a lot. Claytons email touches on this quite a bit.
I don't know Clayton and haven't seen his email. I'm confused why OKD would use different code than OpenShift. I thought that the only difference between OKD and OpenShift would be the subscription.
We needed Fedora CoreOS. OpenShift used RHEL CoreOS. It took longer for Fedora CoreOS because we also wanted Fedora CoreOS to be a sufficient replacement for Container Linux. That integration started passing CI with OpenShift today.
Readme updates and lots of this stuff need to be done - we left the readme at 3.11 because that was a coherent install (vs. the more work-in-progress Fedora CoreOS).
Every bit of source code was there (and developed in the open), but it wasn’t all “pulled together”
Thank you for the clarification and for all your hard work.
I'm looking forward to OKD4 and I will be checking out Fedora CoreOS soon.
Is there already some documentation on how to play with it?
Coming very soon - hopefully ready for KubeCon
Clayton already answered some of this, but I also have some other ways of saying the same thing.
OKD 4 uses Fedora CoreOS for the same reason that OKD 3.11 used CentOS instead of RHEL. For better or worse, they're built and maintained by different systems and/or people, and we simply can't just make OKD use a RHEL derivative due to how support and subscriptions work.
Functionally, they should be nearly identical, but in practice they're two different pieces of software, maintained and built in different systems, much like how Fedora and CentOS are managed separately from RHEL. The differences mostly come down to where packages come from and what systems built them.
The code for OCP/OKD is the same, the major difference is how it's built and released and the OS (RHEL CoreOS vs Fedora CoreOS), and potentially the upgrade graphs supported via over the air updates.
As to who Clayton is: he has been one of the main architects of OpenShift basically since the beginning (I forget if it goes back to prior to v3).
It's OSS, but it is definitely not free software.
Let's be honest: nothing Red Hat gives out is actually free.
I think I'm a lot more benefit-of-the-doubt than you, but I agree completely with what you said.
Red Hat is very committed to putting the necessary infrastructure and organization in place around projects before they open source to make sure that the code isn't just available but can actually use community contributions. I don't have any inside info on this, but I wouldn't be surprised if OpenShift 4 is just waiting for that, or possibly to be in a stable enough state that the community can contribute.
Of course it's possible also that RH is keeping it closed for other reasons as well such as avoiding tipping their hand to competitors until their end goal is realized or something like that. I guess the point is I don't know, but given Red Hat's history of open sourcing even valuable acquisitions, I have faith that they will with OpenShift 4 as well.
> Red Hat is very committed to putting the necessary infrastructure and organization in place around projects before they open source to make sure that the code isn't just available but can actually use community contributions.
OpenShift went from open-source to closed-source. The infrastructure and organization was already in place.
> or possibly to be in a stable enough state that the community can contribute.
I would say that it's stable enough for all the Red Hat customers who pay for it.
> I guess the point is I don't know, but given Red Hat's history of open sourcing even valuable acquisitions, I have faith that they will with OpenShift 4 as well.
I have no doubt that the old Red Hat would do it, but IBM might have a different approach.
Ignore my above comment; I jumped to premature conclusions.
True, but it's been a goal to open source Quay from before Red Hat acquired CoreOS. The Quay team has always wanted to go community OSS, and it's been Red Hat's policy since acquisition to help them. There were just a whole bunch of prerequisites to iron out first.
And now users are in a much better place, because they have multiple choices of container registry, which will hopefully drive innovation.
Another great project, which my org is in the process of migrating to, is Kraken. We're migrating to it after a bad experience with Harbor.
I don't have any experience with Harbor so can't comment on its architecture, my argument was purely about there being an existing, highly popular, open source solution backed by CNCF. Can you share more details about what's so bad about Harbor architecture?
Curious what your experience was. Been using harbor for a couple months and have no issues. BUT we're not using it in a very heavy environment.
Yeah, and Ansible has only gotten better since then. It seems IBM hasn't nuked the company culture yet, either. I've been considering applying to Red Hat; they inspire me a lot with the scale they've managed to achieve. Linux would suck without them... All of my servers and physical machines are CentOS/Fedora, and once a few pain points are addressed I'm looking forward to switching to Silverblue.
"What is Project Quay? It's an open-source distribution of Red Hat Quay."
So, what is Quay? And why do the information pages assume everyone knows what it is?
A quay is the platform you load or unload containers on. It's usually paired with a wharf in a harbor.
In that case, I'll be a software longshoreman going forward.
And Harbor is the name of VMware's open-source registry.
How to pronounce Quay: https://www.grammarphobia.com/blog/2018/04/cay-key-quay.html
Edit: I called it Kway myself and googled it after getting puzzled looks from my UK peers. The referenced article says "key" is the older pronunciation but either is acceptable.
I've always pronounced it "kway", but so long as it's being used, I'm fine with any of the pronunciations :)
Disclaimer: Named, cofounded, and now engineering lead of Quay
The "most correct" pronunciation of Quay is "key", as that link says, but the CoreOS team always pronounced it "kway".
(I assume this has carried through to Red Hat.)
I asked Brandon Phillips (Then CoreOS CTO) on a call how they said it. He was quite frank it was pronounced "kway" and not "key", much to the chagrin of our Aussie coworker who still calls it "key". I've called it "kway" since.
Yes, the team calls it kway.
However, when I worked at SUSE a running joke was: as long as you love using it we don't care how you pronounce it.
"You can call me he. You can call me she. You can call me Regis & Cathy Lee; I don't care! Just as long as you call me" - @RuPaul
RuPaul cribbed that one from Bill Saluga's Raymond J. Johnson Jr. character. Saluga basically made an entire career out of this stupid spiel.
Does he also visit the Florida Kways?
Florida's cay/quays/keys are just called Keys (presumably to short circuit this madness): https://en.wikipedia.org/wiki/Florida_Keys
Kway makes more intuitive sense to me; it's like the word quail.
But this is also a word in common usage, and it already has a pronunciation: "key".
English is not a good place to go if you are looking for intuitive pronunciation.
Nor consistency of pronunciation. Schedule / School is another good one if you’re in the UK.
And Nippl-e
Merriam-Webster lists ˈkwā as valid, and Cambridge lists kweɪ under an 'American Dictionary' subsection.
Mmmm, quail salad: https://upload.wikimedia.org/wikipedia/commons/thumb/2/20/Bo...
I'm from the UK, and when I first moved to London I pronounced Surrey Quays "Surrey Kways" until I was corrected. I pronounced Torquay and Newquay correctly, but it never occurred to me to pronounce Surrey Quays the same way.
I live in a housing development with Quay in the name. I very quickly learned that "KEE"/"KEY" is the only thing that won't get you sideways glances.
I think it's "kway" as well. It sounds more soothing.
Worth noting this is happening after the incorporation of VMware's Harbor registry into the CNCF. Harbor provides a solid enterprise registry with auth, Clair scanning, and a reasonable UX. https://goharbor.io/
Still, it’s great to see quay make it out into the open.
I'd be interested in hearing more analysis on why it is "worth noting." What are your thoughts?
It's probably because after Harbor got incorporated into the CNCF, its development kinda skyrocketed.
It was in a mostly stagnant state, with a release once in a while, and now it's going regular and strong.
The thing is, after things get the "CNCF" stamp they kinda go viral and become the "de facto" standard.
This means that Harbor would become the most usual way to run a private registry and thus Quay would lose ground (=> harder to sell).
Source: just implemented Harbor at work. Quay would probably have been better (probably "production ready") but Harbor was free & open source.
Thanks znpy. Can I ask what your orchestration stack looks like?
I thought it was worth noting, as I follow the CNCF and their numerous projects, but I hadn't made the connection myself.
Not to take anything away from the accomplishments of the Quay team or their contribution, there is definitely value in having more than one kid on the block when it comes to open-source solutions for problems like this. I think the tendency is to push for "one solution to rule them all" and that kind of approach can stifle innovation pretty hard.
I'm not sure there's any relevance between this announcement and that one, as Harbor has been in the incubator for about 12 months from what I can tell, and was in the sandbox before that. But it appears to be another mature solution in the same space with many of the same features; if that isn't something worth noting, I can't understand why not!
To your point, it would be great if the comment was a bit more substantive.
DISCLAIMER: I work for Red Hat Consulting focusing on OpenShift.
We've been discussing and working on this since the CoreOS acquisition. My good friend and Consulting colleague wrote a lot of the Operator code now used for installation.
Does Quay support deleting images/tags/repositories in the registry? That's a huge pain point with the official registry image: its storage requirements grow constantly, and we have yet to find a way to actually remove data completely (the garbage collector never worked for us).
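For reference, the stock `registry:2` image does support deletion, but only if it's switched on in the config, and blob space is only reclaimed by a separate garbage-collect run. A rough sketch of the usual flow (the container name `my-registry` is a hypothetical; paths follow the image defaults):

```shell
# 1. Deletion must be enabled in the registry's config.yml:
#      storage:
#        delete:
#          enabled: true
# 2. Tags are deleted via the V2 API by manifest *digest*, not tag name:
#      DELETE /v2/<name>/manifests/<digest>
# 3. Blobs are only reclaimed by running the garbage collector, ideally
#    with the registry stopped or in read-only mode:
docker exec my-registry \
  bin/registry garbage-collect /etc/docker/registry/config.yml
```

If GC keeps failing even with that flow, it may be worth checking whether step 1 was ever enabled, since without it the DELETE API returns errors and nothing is ever marked for collection.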
> IRC: #quay on freenode.net
They're not using Slack!
That's very refreshing. I'm glad that more teams are choosing open chat options like Matrix or IRC.
Red Hat has been using IRC for a long time.
"...the on-premise offering of Quay was released."
It's my premise that the author of that press release doesn't know the difference between "premise" and "premises." I know we have a habit in American English of evolving the meaning of words faster than any other language, but surely IBM's press team could offer to proofread things like this before posting.
https://www.merriam-webster.com/dictionary/premise
(And yes, I die inside a bit when someone on NPR ends a sentence with a preposition...)
IBM use "on-premise" all the time. Like it or loathe it, it's become the accepted neologism for "self-hosted".
My personal preference is to use "on-prem" because:
1.) There are arguments but no definitive justification for choosing between on-premise and on-premises.
2.) A lot of what gets lumped under on-prem is not actually on your premises anyway. It's in a colo or a managed hosting provider, etc.