A Brief Goodbye to CentOS
It wasn't clear to me what's changing that is so problematic, so to summarize this post[1] and various comments: CentOS Stream will track ahead of RHEL and thus will be more like a beta channel, losing the stability guarantees that CentOS users depended on.
[1]: https://blog.centos.org/2020/12/future-is-centos-stream/
IMO CentOS Stream isn't problematic as such, and neither is CentOS "classic" switching to Stream; distros evolve all the time. The basic idea of Stream seems like a decent one. Some may not like it, but that's okay. In my opinion, the problematic part is that the EOL date of CentOS 8, released last year, suddenly shifted from 2029 to 2021.
For years CentOS promised long-term stability and "boringness"; a very long support cycle was its main selling point. It's why people installed CentOS, and now, suddenly, that's gone.
I don't think it matters that it's free in this case; they made a commitment/promise and suddenly went back on that. I wonder what all the sponsors[1] think of this; I'd be pretty darn peeved if I sponsored something like CentOS and it suddenly shifted direction like this.
Isn't Fedora already the RHEL upstream? Why not just kill CentOS entirely?
No, Fedora and CentOS Stream are very very different.
Fedora is where new and shiny lands, with a release schedule of every 6 months, and ~1 year of support per version, with minimal backporting of bugfixes and frequent package updates. Lots of packages (including the kernel) update freely. That would not fly in CentOS.
CentOS Stream is the next minor version of RHEL. There is a lot of backporting of patches, ABI stability, the works. And it is supported for as long as RHEL is supported (the standard tier anyway), because it is the next minor version of RHEL.
For a visual metaphor:
Fedora ---------------------------------------------> CentOS Stream --> RHEL
The development process works basically like this:
1) A new RHEL release is created from a rough snapshot of Fedora. It's not an exact copy of Fedora; a fair number of changes are made in the process.
2) Fedora keeps moving forwards quickly, RHEL stays put
3) CentOS Stream takes the most current RHEL release and starts layering updates on top of that
4) After a couple of months these updates from CentOS Stream are then pushed into RHEL as a new point release
5) Repeat steps 3 and 4
Or to put it another way, Fedora is (roughly, with caveats) the upstream for new major releases of RHEL, but CentOS Stream is upstream for minor releases of RHEL.
CentOS is going to be a beta for the next point release. Fedora is like the next major version, but I'm not sure if they actually fork Fedora to get that or if they just use it to preview the technology.
s/fork Fedora/fork RHEL/ I imagine you meant.
Fork Fedora to be the new RHEL version. Maybe there's a technically better way of describing that.
I see. I understood "they" to be "fedora", so I thought you said "fedora forks fedora" i.e. "fork fedora to be the new fedora version". So, I thought you meant "fork RHEL to be the new fedora version". I see that's backwards now.
EDIT: "Fedora is like next major version" also adds to the confusion. If Fedora is like the next major version of RHEL, it makes sense that Fedora forks RHEL and not the other way around.
So will it basically be the same as openSUSE Tumbleweed is for SUSE Linux?
Debian supports upgrades between major versions, and doesn't bump package major versions between minor versions. SLE/Leap don't support upgrades between major versions, and bump package major versions between minor versions.

Debian Unstable -> Testing -> Stable -> Oldstable -> Oldstable with LTS
Fedora -> CentOS Stream -> RHEL -> RHEL with LTS
openSUSE Tumbleweed -> Leap -> SLES -> SLES with LTS

It is untrue that SLES does not support upgrades between major versions: https://documentation.suse.com/sles/15-SP2/single-html/SLES-...
It is, however, true that SLES is less conservative than e.g. RHEL when it comes to bumping software versions between their minor releases (service packs). I remember that they switched from 2.6 to 3.0 kernels sometime in the SLES 10 days. Fun stuff.
openSUSE however doesn't support it https://en.opensuse.org/SDB:System_upgrade#Supported_scenari...
The link you posted only states that the upgrade is not supported for 64-bit ARM, and that 32-bit (x86) systems cannot be upgraded to Leap. It doesn't say upgrades in general aren't supported.
Tumbleweed does; or rather, Tumbleweed doesn't have point releases. In my eyes Tumbleweed is way better for cloud, IoT, and desktops, especially once you use transactional-update.
P.S. I've fallen in love with it.
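Roughly what that workflow looks like, from memory (exact flags may differ between releases, so check the openSUSE docs):

# install a package into a new snapshot of the read-only root filesystem
sudo transactional-update pkg install htop
# or apply all pending updates into a new snapshot
sudo transactional-update up
# the new snapshot only becomes active after a reboot
sudo reboot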
Note that the gap between Leap and SLE will become quite small after Leap 15.3, with binary packages for SLE being reused for Leap.
It's interesting that while RH/IBM are moving away from the 'community rebuild' model, SUSE is moving closer to it.
> Opensuse Tumbleweed -> Leap -> SLES -> SLES with LTS
Almost right :)
Leap is the fixed point release, based on SLE. SLE is rebased on Tumbleweed every 3-4 years.
Well, my intent was to roughly chart the supported timeframe and expectations of each, not the flow of code through them. That SLE gets rebased from Tumbleweed directly every 3-4 years just showcases my second point from before: SLE bumps major versions of packages between minor SLE versions.
RedHat's goodwill will be spent down over the next decade or so, and eventually I fully expect to think of them the same way I do IBM. (Which is, roughly, the same way I think of Oracle.)
All of the people I deal with there are the same as before the acquisition, so things have not changed much for me personally, yet. But I am looking at building replacements for certain tools we depend on; the writing is on the wall.
This is (at present) a standard progression in the life cycle of a corporation in the US. There are vanishingly few corporations left who practice a business model designed to span more than about 30 years.
Most businesses today are founded, then the founders are bought out/merged, getting cash in the process.
Then the new owners change things to effectively profit from the business at the expense of longevity, which causes discontent among users and starts the slow downhill slide to the product's death. Then, as the product becomes less relevant and loses market share, cuts are made to preserve profitability and lower ongoing costs.
Once the business has extracted all the value from the product, they then drop the product entirely but retain the IP for it, preventing anyone else from resurrecting it, thus suppressing competition.
>they then drop the product entirely
Or sell it to Micro Focus
Odd, isn't it, that nobody feels the need to ask what your opinion of Oracle is - it's just (with good reason) assumed.
I haven't worked with IBM, but as soon as Oracle was mentioned, I understood.
It's easy to want to do, but much harder to justify banging on public companies for trying to increase revenue.
> but much harder to justify banging on public companies for trying to increase revenue
I assure you, it is very easy to be unhappy with a company for screwing over its users, even if they think it might net them more revenue (bonus points for this being a questionable assumption)
Well, if you ignore the history of Red Hat. It used to be "the" Linux company, a paragon of what you can achieve by combining business acumen of the corporate world with complete transparency of the open source process.
When the first clones of RHEL appeared, they received C&D letters about the use of "Red Hat" in the name, so they complied and started to replace the branding before recompiling. Who would have expected that the only free-as-in-beer RHEL clone we'll be able to use will be Oracle Linux.
I'm deeply troubled by Atlassian forcefully moving users to the cloud.
It's as if, in 5 years, we developers wouldn't have a safe Linux to deploy on, and then we'd be required to use Amazon Linux or GCP Linux, the other ones not being officially supported and therefore not insured in case of a leak, or not approved for PCI or PII or GDPR or...
> With the advent of DevOps and SRE, businesses and startups are moving away from the old-school concept of traditional server clusters to running their applications on disposable containers. The trend is clear and true. Developers are increasingly less reliant on a tried-and-true Linux distribution that lasts for a decade. With containers, developers can develop, test, deploy, and rollback with blazing fast velocity.
As a user of Linux as my main OS since 2005, and using it partially for years before that, I think another issue is that the quality of software releases is just much higher than it used to be. There used to be a tradeoff between "trustworthy" and "recent". These days it's more "possibility of a problem" vs "absolutely rock solid".
And of course the general move to the web. Apps that run in your browser no longer need to run on a server. When I started in my current job in 2004, there was a Debian stable server used for teaching. (It might still be in use for all I know.) This semester I used CoCalc. That's one less use for an ultrastable server.
> These days it's more "possibility of a problem" vs "absolutely rock solid".
CentOS is mostly a server distro, I bet 99% of installs are without GUI. So "possibility of a problem" is a no go for a server.
Sorry to break this to you, but if the possibility of a problem is a no go for your servers, you will unfortunately have to pull the plug on them. All software has bugs, so does hardware.
What a supercilious and condescending response, verging on snark. Sure, all software is likely to have bugs, but some software is likely to have fewer bugs because there's more test coverage or the software is conservative about what features get added/updated.
See:
https://news.ycombinator.com/newsguidelines.html
Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.
Understood, I tried to be funny and failed. Will try to do better next time.
Honestly I thought it was fine and I completely agree. It’s not like parent is a moderator and his/her interpretation is de facto/de jure
Maybe stick a ;) on the end to show a tongue in cheek comment :)
Come on. "Possibility of a problem" is, say, Arch; rock solid is Debian/CentOS. There are certainly bugs in the latter too, but I've never experienced any in the 15 years I've been running a dozen Linux servers in production.
I understand, your comment simply rubbed me the wrong way, because it, in my mind, kind of ignores the main point of the parent, that nowadays the trade-off is less obvious than it used to be.
I also genuinely want to apologize if my comment came off as patronising. I wanted to express my genuine opinion in a fun way and it seems like I missed the mark pretty badly.
I somehow missed that Red Hat started sponsoring CentOS a while back, and owns the trademarks. I mean, money and cooperation is great and all, but how could anyone have expected CentOS Linux to continue for long when Red Hat has such a fundamental conflict of interest?
Edit: here's the HN post at the time:
Yeah, I assume this ends with somebody forking CentOS into another "redhat without redhat branding and license costs".
There is already an announcement for Rocky Linux, from the same person who started CentOS, doing a Monty from MySQL.
Maybe CERN will revitalize Scientific Linux.
Scientific Linux[0] is still maintained ( Although, by Fermi National Accelerator Laboratory and not CERN )
Yes, maintained, but no new versions since they switched to CentOS.
There's no RHEL8 equivalent of Scientific Linux.
To be fair, I truly thought MySQL was dead in the water when Oracle purchased them 12 years ago, but as far as I can tell MySQL continues to be well supported with new features being added. So I no longer know what to think about these kinds of acquisitions.
MelissaDB
MariaDB?
The future of operating systems, at least from a non-GUI perspective, is going to look a lot like seL4 + Nix: a rock-hard, provably secure, capability-oriented kernel, combined with reproducible declarative package management that can handle any combination of dependencies that you want.
Essentially this means that the idea of different distributions for LTS, stable, beta, alpha, bleeding edge, etc., goes away completely. You are never forced to update an old package, nor prevented from updating a new package. You get the best of all worlds.
And since the kernel is about as secure and performant as you can get, you essentially always have the latest kernel. Drivers get updated as necessary (defined by your policy, not the distribution's) in userspace, potentially as quickly as the moment they are released, with no downtime whatsoever.
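To make the Nix half of that concrete, a rough sketch of what it already looks like today; the nixpkgs attribute names (python38/python39) are just examples and shift between releases, and <rev> is a placeholder for whichever revision you choose to pin:

# two interpreter versions side by side, without touching the system
nix-shell -p python38 --run 'python --version'
nix-shell -p python39 --run 'python --version'
# pin the whole package set to one nixpkgs revision for reproducibility
nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/<rev>.tar.gz -p python38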
So there's 2 ideas here: kernel, and packages.
For the kernel, I'm curious how your suggestion is better than Linux is already. Linux, today, is already a performant, secure kernel with an incredibly stable userspace-facing ABI.
For packages, the problem isn't so much being able to install old and new packages (although that's certainly useful); the problem is maintaining a stable version while still fixing bugs and security issues. It's no good just having a packaging system that lets me run a 5-year-old glibc with a fresh-from-git application server if that ancient glibc has multiple known exploits in it. The work in an LTS system is carefully backporting fixes to your chosen old version while holding its ABI stable.
The kernel isn't super secure. There are way too many CVEs, and the kernel now has 28 million lines of code. That is a massive surface area for bugs and exploits. Even with LTS kernels, you can still expect to update your kernel a couple times per year with security patches. Each time that happens, you're gonna need to reboot.
You're right that LTS work doesn't go away...bugfixes will still need to be backported to old software versions. But that work is actually quite a bit easier when it is not so tightly coupled to kernel versions and repositories that are unique for each distribution and release version and architecture.
That complexity is a combinatorial explosion. Instead of having a different codebase for each (PackageVersion,KernelVersion,Distribution,Release,Architecture) combination, you would only need to maintain a codebase for each (PackageVersion, Architecture) combo...and maybe for packages which are trivially cross-compiled, even fewer.
The CVE database and the Kernel Self Protection Project beg to differ in regard to security.
I think it's going to be more like hypervisor + vms + containers.
A proper analogy is probably a city block in NYC: lots of buildings of various eras, with the only commonality being the utilities in and out plus the ground they sit on.
Although periodically buildings are knocked down and rebuilt, this is an exceptional circumstance.
Sounds good, but unless there’s a developer and end-user product approaching the maturity of BlueHat or even Debian it doesn’t seem likely to happen.
Am I the only one who doesn't get the "risk using CentOS Stream" stuff? Isn't it going to be slightly-less stable RHEL?
I actually worked at Red Hat a few years back and almost all work was released upstream-first. The one time I fixed something security-related, the fix was still made upstream first, but just embargoed until the fix was made and released for downstream versions. If I recall correctly, we pushed the upstream fix the day the downstream patch was public.
Now I run CentOS in production for a small web app. I get wanting a decade of support for your OS, but at least for cloud-based web apps that seems pretty unnecessary.
What am I missing here?
It's not about risk; many vendors produce software for a specific version of RHEL - we want to use exactly that version, but we don't want to pay through the nose for the support we don't need.
Exactly this. You can produce software that is compatible with a RHEL version without paying for RHEL if you’re not even using it
FWIW, redhat does offer developer licenses of RHEL specifically for this kind of thing. Maybe not quite as nice, but probably workable.
I worked in defense for a while. Every DoD contractor is locked to the RHEL version that their DoD targets use as end users. But they don't need to pay for RHEL support currently because they all just use CentOS instead. This forces all of those companies to finally pay up, which they will definitely not be happy about.
This genuinely surprises me. When I moved into enterprise 15 years ago, I was more inclined to use free things and piece everything together myself. This quickly overwhelmed me, and I learned the golden rule of enterprise: nothing without a support contract.
Call me old fashioned, but I ship both my private projects and those at work as debian packages.
Debian packages are trivial to put into a container, and we tried that, but honestly it's not half as nice to work with.
With containers you have to do a ton of extra steps to get functionality and debugging on a level a default debian system provides you.
Additionally the tools to automate the installation and configuration of debian systems are way more mature compared to docker et al.
Containers aren't quite there yet.
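For reference, this is roughly what dropping a locally built .deb into an image looked like for us (the image tag and package name here are made up):

cat > Dockerfile <<'EOF'
FROM debian:buster-slim
COPY myapp_1.0.0_amd64.deb /tmp/
RUN apt-get update \
 && apt-get install -y --no-install-recommends /tmp/myapp_1.0.0_amd64.deb \
 && rm -rf /var/lib/apt/lists/* /tmp/*.deb
CMD ["myapp"]
EOF
docker build -t myapp:1.0.0 .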
I'm really considering doing this for a new spin-up myself... The stability of Debian tooling for large-scale, hyper-scalable solutions is outstanding. I just feel like the world balks at me every time I do something considered slightly "old fashioned", completely ignoring the finished product to shame me for not using buzzword tooling.
Containerization is fantastic don't get me wrong, but I've had more success with old-school approaches to package management, deployment, optimization, debugging, etc. running thin Debian servers. Just... prod ops is easier and more stable at the end of the day. I really don't see the need to containerize everything outside of cross-platform development tooling. I also really prefer having a semblance of an OS/bash terminal when it comes to ops!
Also: this is purely anecdotal. And, to get ahead of the folks yelling "you just don't understand Docker and K8s" - yes I do. I still think they're great, I just am not fully sold on them for every use-case.
Debian plus an automation system like Puppet, Chef, etc works extremely well.
It's just not sexy enough for people to write hundreds of posts about how to set up your own package repo, understand unattended-upgrades, and do monitoring.
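For the unattended-upgrades part, a minimal sketch from memory; file names and defaults vary a bit between Debian and Ubuntu releases, so treat it as a starting point:

sudo apt-get install -y unattended-upgrades
# enable the periodic apt runs that trigger the upgrades
cat <<'EOF' | sudo tee /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
EOF
# see what it would do before trusting it
sudo unattended-upgrade --dry-run --debug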
> Debian plus an automation system like Puppet, Chef, etc works extremely well.
Eh, it does until it doesn't. Sooner or later you run into pitfalls around the leaky abstraction of pretending your state is truly idempotent and path-independent. E.g. spinning up a new instance works fine, but the existing instances that need to uninstall a previous version to upgrade to the newer one end up breaking. Or vice versa: existing servers work fine, but then when you need to launch a new one you realize the config no longer works on a clean install and you hadn't noticed it for weeks.
Container-based systems certainly have their own problems, but it is really nice having a model where you don't allow long-lived implicit state and cruft to accumulate on your application servers in the same way.
If I don't understand what I'm doing, it doesn't matter what tools I'm using to fail to accomplish it.
Ah, the myth of the perfect programmer.
Everyone makes mistakes, and state resets such as VMs or containers are one of the easiest ways to revert these mistakes.
Everyone makes mistakes, and having real testing infrastructure is the best way to catch them.
Many mistakes are made against databases; maybe you can roll back, maybe you can't -- either way, it's cheaper in testing than production.
There are also state inconsistencies on servers. Configuration files are not updated correctly (maybe even with a silent failure), binaries are not updated correctly, temporary files left over interfere with the update, cached files do the same, etc.
There are many reasons why resetting the environment on deployment is a solid and cost effective solution to many issues.
Except when your Puppet/Chef/whatever isn't set up carefully enough to be self-contained, i.e. able to bring the system to the correct state regardless of whatever interim state it might be in. It's not impossible, but it's also not an out-of-the-box thing; it's complicated to pull off, and it usually requires constant expertise rather than being a playbook you hand off to whomever (even with expertise you can have bugs, you just aren't in as big a hole, because you've built on the assumption that you always have version 0 or version N-1 to get to state N).
Doing this is hard and there’s nothing in the ecosystem I’m aware of to guarantee that. Nix maybe? That’s part of the problem. The other is that docker has a conveniently large set of configs and things “out of the box” so there’s familiarity and documentation.
It’s not impossible to accomplish but building that community of doing things a sustainable way that actually addresses the pain points (rather than just dismissing it with “you’re just using the old tools incorrectly”) is what you need. If you’re going after docker your solution will have to support devs and devops. If you’re going for a niche community of enthusiasts/experts (probably more defensible and easier to grow), then focus just on a single niche use-case that general solutions could never outmatch (but don’t ignore growing it carefully if your niche solution is meaningfully better - listen to your users that you trust).
Re: Containers aren't quite there yet.
I think they've been there for 12-14 months. It's no longer a question of "If" but "When" a company decides on its container strategy (and its more than just k8s - see https://blog.coinbase.com/container-technologies-at-coinbase...)
I work for a company that is 100% k8s. The base Linux of the containers is Debian 8, but honestly it doesn't really matter that much; the "OS" is more kubectl and the orchestration around k8s (GKE, Prometheus, Sysdig, Grafana, ELK). The "operating system" has moved up a stack.
When I was working in a stack like this I found people spending outstanding amounts of time not actually working to improve the stability/performance of the application. The reason you triggered that memory was "GKE, Prometheus, Sysdig, Grafana, ELK" - that's exactly what we were dealing with. The support infrastructure/compute needs for it far exceeded the 20-30 hosts that actually needed to be there to operate the application.
We either were using someone else's prebuilt orchestration for something like ELK (insecure, needs constant auditing to be OK) or rolling it ourselves (very expensive in engineer time). None of it was ever working 100% and that was because we were jumping at software packages no one had really taken the time to fully understand. The mentality was "it's containerized!" which many on my team took to mean "we don't need to really grok it, it's in a container!" That burnt us, both on our TIG and ELK stacks. I left that job because it became putting out dumb fires that were not business-justifiable.
All-in-all I'm not saying what anyone is doing is wrong, I'm just saying that if you're going for an orchestrated environment like this you have to have a very mature team. You have to really care about learning these services well, and you have to be careful to not let your own architecture take your time away from solving real problems for the business.
The team I was on did not have that maturity, outside of a couple of bitter/broken ops guys who didn't deserve what the team had done to them, while buzzword-driven leadership gutted their very proven and stable VMware infra into a total cluster-f K8s setup because "that's what we're supposed to do in 2018! That's what the new engineers want to work in!"
> the "operating system" has moved up a stack
Splitting hairs: the OS is still the same. The "stack" is a newly imposed abstraction on top of already established paradigms, where we are trying to abstract ourselves away from the OS. It's distributed compute more than it is the "OS moving up a stack".
Edit: Ha, I think you may have edited your comment with the Coinbase article. That article is actually what I point people to when explaining that K8s isn't some silver bullet; I personally think Coinbase is a great compromise in leveraging containers without going off the rails (as they write about, e.g. the need for dedicated "compute" teams etc).
"outside of a couple bitter/broken ops guys who didn't deserve what the team had done to them"
Hey, that's me.
Well, if it's any consolation, programmers aren't safe either. In fact, we are highly paid obnoxious people (to execs), and just imagine being able to replace one of us with a box that works 24 hours, 7 days a week for the cost of the hardware, electricity, and network. How exciting! If and when that happens, I suppose I can find work as a (bad) carpenter or something.
I mostly agree with all of what you've said here. In our case, it's not unusual for a single customer environment to surge to 200-300 instances of an underlying compute server, and then scale back down to 20-30 at steady state. With 30 customer environments, you might have customers running from anywhere as low as 15 containers to as many as 500+, with a lot of dynamic flux depending on data ingestion and ETL.
K8S is in flux, so you still have to have a few top-end SRE types to manage your kube environment - the acceleration / maturity of the ecosystem is incredible though, so, sometime in the next 3-4 years, we'll start to see things get standardized enough that the wizardry required to keep it running will become a more commodity skill set.
And, more importantly, most of the ecosystem is fairly identical between Azure/Google/AWS, so porting or going multi-cloud is usually a week's effort if that's something you want to do.
By "Moving up the Stack" - Of course I understand that cgroups/linux underpins it all - it's just that we're not using linux system binaries to manage the containers directly.
I mean tasks like process, storage, memory, CPU, and resource utilization aren't something we tweak/query with OS commands; rather, we're sending request/limit configurations to kube and letting it worry about managing the resources, relying on PromQL to monitor resource utilization, etc...
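A tiny example of what I mean by sending request/limit configurations to kube instead of tuning the host; the names, image, and numbers are all made up:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0
    resources:
      requests: { cpu: 100m, memory: 128Mi }
      limits:   { cpu: 500m, memory: 256Mi }
EOF
# and then watch it with PromQL, e.g.:
# sum(rate(container_cpu_usage_seconds_total[5m])) by (pod)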
> we'll start to see things get standardized enough that the wizardry required to keep it running will become a more commodity skill set
I am so ready for this!
Constant scale-up/scale-down and dynamic load is what I jump at K8s for personally. Totally see the use-case for what you're talking about.
All-in-all I love K8s and containers, use them myself, and have been really happy with the results. It's just when I've worked with it professionally I don't find my colleagues typically have the skillset (not the fault of the tech).
GKE is the container/kubernetes engine, but then you also mention Prometheus, Sysdig, Grafana, ELK.
Sounds like much of the problem was the monitoring stack, just curious why you blame that on containers and k8s? Wouldn't you still have needed a solution for that for 20-30 hosts regardless of how you're orchestrating/running the applications?
> just curious why you blame that on containers and k8s?
Crawl, walk, run sorta stuff. We never got the application/monitoring/everything humming on pure Linux hosts; we skipped that entirely because "K8s and containers!" When you haven't properly QA'd, vetted, whatever your stack, throwing heavy abstraction at it (containers/K8s) is an anti-pattern.
Most companies don't have the resources to run a competent K8s distributed compute infrastructure and as a hiring manager (as much as an IC) I know I have to hire very specific, very expensive people for that role. Good ops folks come with experience in their realm and the newer the tech stack the harder it is to find competent help due to talent market conditions.
I don't blame containers and K8s - I blame the people, and I blame companies/teams for jumping at new tech that often doesn't have a justifiable use-case outside of "we're doing the popular thing!" vs. really considering what the needs of the solution are.
I also have a very low tolerance for downtime, and with those huge abstractions I find stuff gets missed more often, leading to my application being down for my users. I am a KISS engineer.
Fair enough, I definitely can relate to that line of problems.
"Should we spend the time to do a thorough look at our monitoring needs, figure out where the gaps are, be more disciplined about using the tool consistently, etc.?"
"That'll take too long, I heard about this shiny new ops tool that claims to require zero configuration, let's just drop this in instead!"
"Containers aren't quite there yet."
The rest of the world begs to differ. It's not a question of "is it ready or not", it's "am I going to use it or not".
We're way past that question.
IMHO we are in a phase of enthusiasm, but we can already anticipate the peak and the trough of disillusionment will come.
In 10 years, the pendulum will have swung back and forth a bit and we'll know better what works well. I bet it's some form of lambda architecture.
Let me say that I am not pro or against Docker per se. I just happen to have started my career with a strong team pre-Docker, and a lot of the Docker enthusiasm doesn't really address what was lacking in the operations space pre-Docker.
> I bet it’s some form of lambda architecture.
Well: https://aws.amazon.com/blogs/aws/new-for-aws-lambda-containe...
In 2000, I had HP-UX Virtual Vault (aka containers) and CGIs (aka lambda functions).
Everything old is new again.
I guess call me new fashioned, but I've never really understood how to use Debian packages well. I recall vaguely looking into the dpkg and build commands many years ago; it felt kind of inscrutable and clunky, and I didn't find good resources that made it easy to learn, so I just gave up on it and kept using the shell script to install the thing I needed with its dependencies.
By contrast, docker build and docker run are super simple to get started with (at least at a high level, figuring out the right order of flags and options to mount volumes and expose ports can get a little cumbersome). And the docker registry is super simple to browse. It's so easy to get up and running with, and the model it promises of isolation and self-contained dependencies makes a lot of sense which I think is why it has taken off so much. Despite the fact, which I think is what you're pointing out, that there often end up being a lot of pitfalls lurking just around the corner.
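For what it's worth, the ordering that always tripped me up (names here are made up): options go before the image name, and the command, if any, goes after it.

# build an image from the Dockerfile in the current directory
docker build -t myapp:dev .
# --rm cleans up the container on exit; -p maps host:container ports; -v mounts a read-only volume
docker run --rm -p 8080:80 -v "$PWD/config:/etc/myapp:ro" myapp:dev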
I think this too. Debian packages I'm sure are powerful, but I've never sat down and RTFM'd yet, which I think is a solid requirement. Docker is easier to understand, and I picked it up pretty easily just by running through a quick tutorial.
Definitely pros/cons to both depending on your situation. I can imagine debian packages being more useful in large scale multi-developer environments.
A deb file is just an ar archive of two tarballs, one with metadata and one with the actual files.
Now, the tools that build them are a whole different kettle of fish.
Plus some stuff is really old school: ar is an archive format that nobody has used on its own since 1996. It's a sort of transparent bundler/archiver à la tar; I think it was originally used to bundle .o files into a bigger archive (static libraries) but still have the symbols inside visible. I don't really remember all the details, it's been a while since I looked into .deb packages.
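You can poke at one directly; roughly like this (the compression suffix varies: .gz, .xz, or .zst depending on the distro and era):

ar t some-package.deb               # lists debian-binary, control.tar.xz, data.tar.xz
dpkg-deb -e some-package.deb ctl/   # extract the control/metadata archive
dpkg-deb -x some-package.deb out/   # extract the actual files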
I think the main problem is that Debian is not a commercial project and it shows sometimes. The tooling is kind of "hidden" (you have to poke around the distribution, mailing lists, etc. to figure things out) and the processes are kind of the same thing. The docs on the site are ok in some regards but they're far from complete and up-to-date.
Meanwhile the Dockerfile format is reasonably well documented and the tooling is also quite straightforward. You can see that a company made it its raison d'être for a while and wanted to make it easy to use.
I'm totally the opposite (old fashioned?), if I want to play with some library I just 'fire up' an rpm so I don't have untracked files spread around /usr/lib.
Having a multi-megabyte docker container to run some random program just seems...wasteful.
It's a mistake to assume that just because you're using docker, you now shouldn't bother understanding how (debian's) packaging works.
You're usually still installing packages and working with a debian/ubuntu/whatever distribution, except they're in a container now.
Docker is an additional abstraction layer one should understand, not a replacement for an existing one.
If you are used to dealing with debian packaging and running stuff directly on your server, using docker can feel like taking a step forward, but also two backwards. A few things are better, but a bunch are worse.
How do you handle multiple versions of the same project/software/deployment on the same machine?
You can for example have many SQL databases on the same db server, or many web sites on a www server. So you solve it with configuration. If you need different kernels you have to use virtual servers anyway.
The simple answer is you can choose not to. In many ways, a VM is a better abstraction than a container due to the simplicity of virtualising the hardware interface, as opposed to creating another abstraction layer in the kernel dealing with process isolation, permissions and system controls.
On the other hand, VMs are wasteful resource-wise (and $$$-wise) and have a much larger operational overhead (suddenly for every deployment you have a different Linux installation, with its own root /, with its own configuration drift, which you have to manage separately via CM).
To be fair, containers often end up being their own Linux installation with their own configuration drift. So many Dockerfiles mindlessly pull in an entire Ubuntu system just to run a simple app.
But the image [1], once built, is still idempotent. You can deploy it and it will always contain the same configuration and code.
Meanwhile, a month-old Ubuntu VM that has received regular CM pushes (including system updates) will likely vastly differ from a brand new Ubuntu VM and a single CM push. To the point where you can't be sure anymore that your current CM config will even work on a brand new machine, unless you're regularly testing that.
[1] - Yes, Dockerfiles do not make for reproducible builds - but once an OCI image is built, its deployment is going to be reproducible. And there are more ways to build images than via Dockerfiles - some of which solve this problem (using Nix or Bazel, for example).
> But the image [1], once built, is still idempotent. You can deploy it and it will always contain the same configuration and code.
VMs can be idempotent too. It's just that traditionally people attach storage to it. But VM snapshots are a thing.
> To the point, where you can't be sure anymore that your current CM config will even work on a brand new machine, unless you're regularly testing that.
The same can be argued about attached storage to a container.
By idempotent do you mean immutable?
That same issues exists with docker containers. You can also build a pipeline to deploy very barebones VMs that contain the kernel, a barebones userland and the application. Use KSM to minimise memory usage. What you get with containers is a shared page cache and reduced context switching.
Once upon a time in tech, the thinking was hardware is cheap, technical staff is expensive, hence we moved on to systems and programming languages that saved us time at the expense of efficiency on the hardware.
20 years on, the cost equation hasn't changed. In fact, its probably shifted drastically towards the extremes. We'd likely save more energy by eliminating crypto mining than moving all VMs onto containers.
On my development laptop: a mix of VMs, docker containers, language/package managers. It's a per project choice, either mandated by my customers or advised by me. To name a few technologies I'm using right now in three different projects open in different virtual desktops:
docker
vagrant
VirtualBox (even some scripts to mimic EC2's spawning of machines with VBoxManage)
asdf (really, my fingers didn't slip on the keyboard)
npm
rvm
python's virtualenvs
I never had the need to do that and I'm not sure in what kind of situation I would.
For updates I just install the new version of the software, then perform a restart (new version starts, once it's ready, old version stops).
Each version of the project gets a directory that contains everything the project needs. Put all of those into a parent directory. Use a symlink to point to the currently active version. E.g.:
/usr/local/thing/versions/thing-v1.3.7
/usr/local/thing/versions/thing-v1.4.2
/usr/local/thing/current -> /usr/local/thing/versions/thing-v1.3.7

I mean, sure, I've done this too with a handful of scripts. But is this something you do via .debs? I'm asking specifically about how to handle this with plain Debian/Ubuntu packaging.
I do it similarly. I run multiple versions of a service at once with systemd service files. It gives me the same stuff as containers (cgroups, isolation, logging, service definitions, and automation with Ansible), but it's easier on my feeble psyche.
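In case it helps anyone, a stripped-down sketch of the template-unit trick; the "thing" name and paths mirror the example above and are hypothetical:

cat <<'EOF' | sudo tee /etc/systemd/system/thing@.service
[Unit]
Description=thing, version %i

[Service]
ExecStart=/usr/local/thing/versions/thing-%i/bin/thing
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
# run two versions side by side
sudo systemctl start thing@v1.3.7 thing@v1.4.2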
Why would you want to do that?
For development, for example. Or for some kinds of shared hosting.
Isn't that a sign of technical debt? Not OP, but for development/testing: in a VM.
How is it technical debt? How else do you handle software rollout and rollback, or canarying? Do you have a VM for every single version of your software?
Uhhh... backups?! Not all companies release daily, weekly, and sometimes not even monthly. Stage the rollout, do your testing, get your evidence, get your plan, perform the release.
So if you deploy a new release which turns out to be buggy, your only recourse is doing a full backup restore?
No there are snapshots for that.
q3k, I can't reply to you at this depth. But yes. You're saying a "full backup / restore" but it's not the entire system.
Let's say you have an app, in a folder, that reads config files from 3 other locations on the machine. It talks to two databases. You back up the two databases and 4 total folders. That's your backup. It's simple and straightforward to me.
(you have to wait until you can reply after a certain depth - this is HN's anti-flamewar system kicking in)
I understand you can restore from backups, but this doesn't seem simple to me - especially when you deal with situations where there's more than just one person deploying to production.
In comparison, my rollbacks are performed the same way rollouts/rollforwards are - by editing a single line in Git (ie. changing the OCI image string) and running `kubecfg update`. No need to access backups, no need for special procedures.
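(For comparison, a plain-kubectl version of that one-line rollback, with a made-up deployment name; I happen to use kubecfg, but the idea is the same:)

# point the deployment back at the previous image tag
kubectl set image deployment/myapp myapp=registry.example.com/myapp:v1.3.6
# or just undo the last rollout entirely
kubectl rollout undo deployment/myapp
kubectl rollout status deployment/myapp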
I look at it this way. I think it's simpler to do this kind of backup in case things go wrong (which, honestly, is not that often. Twice in 5 years that I can think of off the top of my head) than it is to set up kubernetes, manage kubernetes, and convert our applications to work correctly in kubernetes. All of that is required so that your single line edit becomes a possibility. That's a lot of work to enable that workflow versus copying 5 directories to one location, zipping it, slapping a version tag on the zip file, storing it in a couple of places.
Whatever works for you, man. I honestly enjoy having docker images instead of zips and bash scripts, but I see where you're coming from.
I think it’s flawed to think that you can safely have multiple versions of the same app running (or even installed) simultaneously in the same “universe.” Whether you use jails, VMs, containers, or whatever you should not count on “I didn’t change anything between these two versions that would corrupt the other instance” to help you.
What? Deploying new software is technical debt?
In a real sense, yes. Once it's deployed, it incurs costs, like a debt.
Reminds me of:
> My point today is that, if we wish to count lines of code, we should not regard them as “lines produced” but as “lines spent”: the current conventional wisdom is so foolish as to book that count on the wrong side of the ledger.
-- Edsger W. Dijkstra
I make heavy use of both package managers and containers at my job; they solve different problems. Just because you don't have the problem that containers solve doesn't mean they aren't there yet.
As others have noted, something like this was expected once IBM purchased RedHat. Looking forward, I’m excited for the future of Rocky Linux.
Rumor is RH was planning this from before the merger. Supposedly they wanted to do it before the C8 release.
It was handled in a very hamfisted way. Permitting entities to rebase from C7 -> C8 and then pulling the plug has caused tremendous ill will.
> Rumor is RH was planning this from before merger.
I fully believe this. Like I noted in another comment https://news.ycombinator.com/item?id=25358847, they have done this exact similar thing in the past with the JBoss application server community edition.
A common criticism of CentOS is that it's too stable. I don't see a mention anywhere of LTS channel of CentOS Stream, but wouldn't the next RHEL release effectively mean that current CentOS Stream channel becomes the LTS alternative, if RedHat commits to providing security backports?
I think IBM/Red Hat know someone will fork and carry on with CentOS under a different name. They just won't be paying for it.
The server response on clicking the link lacks a "Content-Type" header. I just thought I'd mention that in case the admin would be here and interested.
It's causing a download extension in my browser to show a download dialog on clicking the link. I don't think it's a bug in the extension, but rather a required behavior because of the limitations of extensions. The extension has no sure way to know what my browser's treatment of the response will be after it inspects the body content to guess the type, so the safer thing to do in its case is to show the dialog.
I don't think Content-Type is required by HTTP, but it's pretty rare for servers to not include it. Might be good to add it if only in interest of the Robustness Principle[1].
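A quick way to check from the client side, if anyone's curious (the URL is just a placeholder):

curl -sI https://example.com/some-page | grep -i '^content-type'

On nginx, for instance, the header normally comes from the mime.types mapping plus the default_type directive, if I remember right.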
I'm still relatively new to Linux. My question is why wouldn't Fedora Server be considered an alternative for the CentOS diaspora? It seems a better fit than Debian to me.
Previous product I worked on required a supported RHEL/CentOS environment on which to install our product. Even with the six years or so of support, our customers would piss and moan every time we'd tell them that RHEL/CentOS 5 was hitting EOL and they'd need to upgrade their servers to CentOS 6 or 7 to stay supported. Most of them wouldn't even entertain the idea of "upgrading": for them, business as usual was holding on to the existing OS as long as possible, then buying an entirely new server with the latest-greatest CentOS release freshly installed on it, and doing a data migration.
I can't even imagine the amount of headache we would have gotten if they needed to upgrade every two years.
When you update often it turns out to be less work, because more gets automated and changes are smaller. This is the CI/CD proposition anyway.
Except when the update brings some kind of massive change. RHEL 6->7, for example, was when systemd became the norm, and so right around the time our customers started upgrading, I suddenly had the hot potato dropped into my lap of needing to convert 20 years' worth of our software's init.d scripts into systemd services.
Debian and Ubuntu handled those side by side for a while; I'm surprised RH didn't.
CentOS releases were supported for something like 8 years, while Fedora releases a new OS every ~6 months. It does support up to 2 older versions with updates, but that means potentially breaking major upgrades every 6-18 months, compared to once every 8 years.
Debian, OTOH, has a much longer release cycle, and more of a reputation for moving like molasses, which for better or worse mirrors CentOS a bit more closely.
You want your server OS to be a mountain. It shouldn't move from under you as you build.
Fedora is a river, it never stops moving.
For people for whom this doesn't make sense: it's the timeline of the stack on top that drives the OS stability requirements.
If you work in heavily regulated industries, systems migrations and software development can be order-of-years.
Having a mandatory OS upgrade mid-development/deployment is not desirable.
It's a terrible way to develop, but when changes need to be documented in excruciating detail and signed off on by legal... sometimes it's just the way things are.
Why would it be a better fit, because of the package management?
Package managers are irrelevant. The relevant part is the quality, methodology of testing, existing written policy and technical requirements for the software. That you at the end deliver it in a deb, rpm or what have you, amounts to little.
Debian Stable (with its quality, Debian policy, testing, and stability of package major versions) is closer to CentOS. Fedora would be, as Debian Unstable, too fast moving for CentOS usecases.
> With containers, developers can develop, test, deploy, and rollback with blazing fast velocity.
Wow, that has not been my experience with containers.
A lot of this depends on your CI/CD environment. In ours, every single push to every branch on the remote repo ends up testing and building a full container environment that is ready for deployment in the cloud, with no differentiation between production and development in terms of completeness.
It's fast enough that some (obviously not most) developers don't even set up their local Docker environment, and just use the CI/CD deployments to test their work. With the advantage being that if it looks good, a single deploy moves all of the production environment to it.
They want to implement that sort of thing, but it just doesn't work for very large projects.
Can you elaborate?
Even the basic ability to just spin a container with all your stuff and start developing is an important advance. (especially for cases like python2/3, java5/8 etc)
Coupled with the remote containers feature of visual studio, working with containers is now a game changer.
Maybe for very small projects. But the scale of things I work on doesn't lend itself to that sort of thing.
I have noticed more companies adopting Amazon Linux when a few years back they would have used CentOS. I wonder how much this affected the decision.
Wish we would see more adoption of FreeBSD and NixOS in the future.
For now I like my Debian :-)
Without being a free version of RHEL, nobody would have bothered with CentOS. There was White Box Linux back then. Maybe this would give another chance to that forgotten project.
We are creating the next RHEL based Enterprise Linux - same as CentOS used to be. Targeted first release is January 31, 2021.
Looking for both volunteers and people who want to be paid for their efforts. Join us to keep Enterprise Linux free (as in beer and freedom) for the foreseeable future.
The only place I run into RHEL or CentOS is maintaining old PHP apps where the web host is running someone's control panel... Oftentimes it's like stepping into a time machine, because it's pretty obvious that things haven't been updated in quite some time. I'm not sure that a 10-year promise of stability is one we want kept (security patches, etc...).
The whole reason for the distro family to exist is for security patches.
No mention of Fedora? Is it not feasible that those currently using CentOS, and don't wish to use the supported Red Hat, will gradually migrate to the CoreOS, IoT or Server version of Fedora?
I would think that would be easier than migrating to a different package management system.
People use CentOS because it's stable - you can run one version for years, and get security and bug fixes, with no feature changes.
This change to CentOS means it won't have that stability, so some will need alternatives.
Fedora changes even faster than the new CentOS - a new version every six months! Each version is maintained for just over a year, and the maintenance includes new features.
Honestly, I won't be sad if this means appliance and VM image providers (for on-prem / private cloud application hosting) switch to Debian en masse. I only use CentOS where Debian is not an option.
If you are operating a billion-dollar computer paying IBM/Redhat is probably a good idea.
If you are operating a billion-dollar computer, not using IBM/Redhat is probably a good idea.
At that scale, hiring your own quality support and running open systems is a drop in the bucket.
If you want the full IBM/Redhat experience, then you can even afford to hire 5+ layers of middle management and PMs between you and your engineers.
> At that scale, hiring your own quality support and running open systems is a drop in the bucket.
"support" doesn't mean reading man pages, it means diagnosing and fixing some intermittent bug in Intel's 10G NIC driver.
Also, and this is why you pay Red Hat specifically: Having easy access to the developer that compiled a package, made a particular design decision for the OS, etc.
No offence, I've been in Linux business for 15+ years but I'm struggling to comprehend your comment. Apologies in advance, I'm not a native speaker.
Linux has come so far in the past 20 years, that not paying an annual license fee is now shameful.
Assuming you mean "billion dollar company", I think they always did, but they did it for the support.
I'm still confused about this. Why is everyone calling it the death of CentOS?
Too bad Rocky Linux isn't called DollarLinux
Looks at cluster of over 5000 machines running CentOS.
Doesn't look dead to me.