Red Hat Enterprise Linux 8 released

redhat.com

289 points by lubomir 7 years ago · 149 comments

chomp 7 years ago

Wrote this comment a while ago for anyone wondering about this:

Just installed it in a VM, changes that jumped out at me:

• No Python (that you should develop against) installed out of the box. There's a /usr/libexec/platform-python (3.6) that yum (dnf) runs against, and then python2/python3 packages you can optionally install if you want to run python scripts.

• Kernel 4.18

• No more ntpd, chrony only

• /etc/sysconfig/network-scripts is a ghost town, save for a lonely ifcfg file for my network adapter. No more /etc/init.d/network, so /etc/init.d is finally cleaned out. It looks like static routes still go in route-<adapter> and you ifdown/ifup to pull those in (it calls nmcli).

• Pretty colors when running dmesg!
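
A rough sketch of that static-route layout, for anyone curious (adapter name and addresses are hypothetical, and the file is written to the current directory here so the sketch is safe to run; on a real box it lives in /etc/sysconfig/network-scripts):

```shell
# Hypothetical static route file for eth0; on RHEL this would be
# /etc/sysconfig/network-scripts/route-eth0
cat > route-eth0 <<'EOF'
10.10.0.0/16 via 192.168.1.1 dev eth0
EOF
cat route-eth0
# on a real system you'd then run: ifdown eth0 && ifup eth0
# (which dispatches to nmcli on RHEL 8)
```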

  • avar 7 years ago

    > Pretty colors when running dmesg!

    Neat, but a great isolated example of the ancient software people who use RHEL have to deal with. RHEL 7 has dmesg from util-linux 2.23; the "colors by default" feature[1] first shipped in 2.24[2], released on October 21st, 2013, which is around the time[3] the first beta of RHEL 7 came out.

    1. https://github.com/karelzak/util-linux/commit/9bc2b51a06dc9c...

    2. https://github.com/karelzak/util-linux/releases/tag/v2.24

    3. https://en.wikipedia.org/wiki/Red_Hat_Enterprise_Linux#RHEL_...

    • lima 7 years ago

      Most things on RHEL 7 aren't ancient - there's plenty of backports, even major ones.

      Some examples:

      - OpenSSL rebase to 1.0.2k (for HTTP/2 support).

      - overlayfs2 kernel support.

      - Kernel eBPF instrumentation.

      - Introduction of podman and friends.

      - Ansible is kept up-to-date.

      - GCC 7 and Python 3.6 via Software Collections.

      This includes extensive testing. I have non-production systems on Fedora which run mainline kernels and have seen my fair share of performance regressions and crashes.

      I'm assuming there was no notable customer demand for colorful dmesg output.

      • X-Istence 7 years ago

        Some of the backports aren't nearly as performant as using a newer kernel, though.

        For example, eBPF was backported (and is also in CentOS), but when running a syscall-heavy workload in a Docker container on the older kernel, about 50% of the CPU time was spent in the kernel filter.

        I ended up moving our entire CI/CD platform to Ubuntu 18.04; the performance issues went away and my workloads now run at full speed without slowdowns.

        RHEL 8 comes with the 4.18 series of the Linux kernel, which is already EOL upstream. That's a shame, and once again it will fall behind quickly :/

        • CameronNemo 7 years ago

          Seriously, why did they not bump it to 4.19? Do they hate bicycles? Do they make money from the fact that upstream LTS kernels have short shelf lives compared to RHEL's own LTS kernels?

          • rwmj 7 years ago

            The kernel version was finalized some time before this release when 4.18 was current. Red Hat expends a ton of effort on long term maintenance and huge backports of new features to the kernel, so while I don't want to speak for the kernel team, I don't think the upstream stable kernels bring very much to the table.

            Plus (my personal view) what goes into the upstream stable kernel is fairly random based on just mailing list NACKs, whereas what goes into the RH kernel has to pass a massive range of automated tests on a wide variety of real hardware.

            • snuxoll 7 years ago

              Red Hat also certifies a whitelist of symbols that will remain identical in all future releases for the life cycle. Stuff like that takes time, so just grabbing the newest LTS kernel the moment they cut a release isn't feasible.

        • paulcarroty 7 years ago

          > That's a shame and once again it will fall behind quickly

          RH uses a kernel with TONS of patches, so the version isn't critical here.

          • pas 7 years ago

            Their frankenkernel is very hit and miss, backports or not. Docker had to disable a few features because at first they seemed to work, but then turned out to be buggy on RHEL.

            • Spivak 7 years ago

              But why wouldn't you use Red Hat's build of Docker? I mean if you're already paying them...

            • paulcarroty 7 years ago

              Can't remember any real problems with docker on RHEL/Fedora.

              • X-Istence 7 years ago

                I mentioned my syscall-filtering issue with eBPF taking up massive amounts of CPU time...

      • VintageCool 7 years ago

        Colorful dmesg output was already available on Cent7, just not the default.

    • CoolGuySteve 7 years ago

      I've always felt that RHEL really excels at that old-school corporate Unix feel of having to deal with stodgy tools that are either really old and/or lack basic ease-of-use features.

      Reminds me of the time I wrote a script that called 'hostname -x' on SunOS instead of Solaris and it changed the hostname to '-x' and broke X11. RHEL is the nostalgia Linux.

      But seriously, has anyone ever empirically verified that the Debian Stable/RHEL model of shipping a bunch of really old packages and then layering years of patches over top actually generates more stable, more secure code?

      My intuition after a couple decades of software dev is that bugs will fester longer in the old version and the patches themselves will start having bugs as the top of tree diverges more and more from the shipped package over time.

      • avar 7 years ago

        The main source of stability for RHEL isn't that any one arbitrary version of a package they ship is better than another one, or that their patches on top don't suck. It's that they ship a long-term "stable" (as in "doesn't change much", not "sucks less") set of software for production use.

        Thus, if you install some random vendor's shitty software you can rest assured that the version of libcurl and 50 other libraries they depend on is something they themselves have tested on RHEL.

        The same goes for hardware that you buy. When you buy e.g. Dell rack-mounted servers you can safely assume that the open source driver version maintained by the vendor shipped as part of the RHEL kernel is something that's seen extensive production use, unlike the latest upstream kernel, or whatever "in-between" Debian et al are shipping.

        Am I recommending you use RHEL? No, it's not the right answer for everything, and I certainly have my share of RHEL scars, including a couple of times where a mundane bug in my program turned out to be a kernel bug (one in RHEL's own shitty patches, another "known" bug with their ancient kernel).

        But this is the reason to use it, and why some major commercial vendors say "we support Linux, as any distro you want as long as it's on this list of RHEL versions". They just want to deal with those kernel/library versions, not any arbitrary combination out there in the wild.

      • _red 7 years ago

        >layering years of patches over top actually generates more stable, more secure code

        Well, I think your definition of 'stable' is different from what RHEL/Debian customers mean. Stable isn't seen as "doesn't have bugs"; it's "works predictably". Which is a subtle but meaningful difference.

      • secabeen 7 years ago

        > But seriously, has anyone ever empirically verified that the Debian Stable/RHEL model of shipping a bunch of really old packages and then layering years of patches over top actually generates more stable, more secure code?

        Debian has released a new stable version every 2 years for the last 14 years. RHEL/CentOS are the only ones on a 3-5 year cycle.

        • CameronNemo 7 years ago

          And yet they wait months between freezing the distribution and releasing, because a few troublesome packages have issues.

          Someone needs to thaw Debian out.

          • rincebrain 7 years ago

            No?

            The fact that there's a freeze to allow for shaking out troublesome issues in a few packages (and possibly discover ones you didn't already find in older ones) without much risk of others newly breaking is a feature, not a bug.

            Debian testing/unstable, backports and third-party repos exist if people really want the latest anyone's packaged, or the latest version of one specific thing on their otherwise stable system.

            You may disagree with the philosophy, but every part of that behavior is working as intended.

      • linuxftw 7 years ago

        Stable doesn't necessarily mean 'doesn't crash'; what it means is that the API/ABI interface is stable.

        EG: let's say libfoo.so.1 implements DO_FOO; libfoo.so.2 implements DO_FOO2, but not DO_FOO. In this case, anything you need that links to libfoo.so.1 and needs DO_FOO would need to be patched, recompiled, and shipped out to all your customers. For the distribution provider, this is not really a huge deal. But RHEL is merely the platform. The value-add is that 3rd parties can write software and compile against libs and know they're not going to break arbitrarily.

        Similarly, if you've ever written a kernel driver, you'd know that kernel function names and signatures can change from release to release. The same example above applies to kernel code as well. So compiled binary drivers would have to be patched, recompiled, and shipped out. If you're writing a driver for a network card, would you prefer having to ship your (non-bug) driver updates every few months, or every few years?
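
        A rough sketch of the soname convention that makes this work (this is not Red Hat's actual packaging, and all the library names are made up): each ABI-incompatible major version gets its own soname, so both can be installed side by side and nothing linked against the old one breaks.

        ```shell
        # Each incompatible ABI gets its own soname; the dynamic linker
        # resolves each binary to the major version it was linked against.
        mkdir -p /tmp/abi-demo && cd /tmp/abi-demo
        touch libfoo.so.1.0.0 libfoo.so.2.0.0
        ln -sf libfoo.so.1.0.0 libfoo.so.1   # old ABI: exports DO_FOO
        ln -sf libfoo.so.2.0.0 libfoo.so.2   # new ABI: exports DO_FOO2, drops DO_FOO
        ls libfoo.so.1 libfoo.so.2           # both coexist, so nothing breaks arbitrarily
        ```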

      • geofft 7 years ago

        It doesn't generate more secure code—as you say, patches themselves may have bugs, and way fewer people are looking at the patched branches. Active development happens on HEAD, and dodgy code is often rewritten before anyone goes actually looking for bugs (security or otherwise). Many years ago I helped with a paper on how the practice of applying only "important" security bug fixes doesn't work: https://arxiv.org/abs/0904.4058

        But the goal of the long-term-stable approach isn't security or stability per se: it's striking a tradeoff between operator work and risks to security and stability. You could, of course, snapshot Fedora (or Debian testing, or Arch, or whatever) from 2013 and keep running it. Nobody is stopping you, and it'll still run on new machines. And then you have to do zero work to keep your system up-to-date, but you'll likely have tons of security and stability bugs. On the other extreme, you could run Fedora rawhide (or Debian unstable, or current Arch, or whatever) and update nightly, which would mean you get security fixes as fast as possible (they're almost always developed on HEAD and backported to release branches), and you get performance and stability fixes that people haven't deemed worth backporting, but you also risk API-incompatible changes that break the actual applications you care about. You'll need to set up really good CI to make sure you have coverage of everything in your application, and it's not just a matter of automation: you'll need a well-staffed team to respond quickly every time that CI goes red, figure out what changed, and update your applications to match. (And, of course, you have the risk of security issues in new code that hasn't been subject to public scrutiny yet—the inverse problem of security issues in old code that's no longer subject to public scrutiny.)

        The goal of a long-term stable distro is to be in the middle of those two, to give you something that changes rarely (stability in the sense of "no surprises," not "doesn't crash in prod") but often enough that you get major, identified security fixes and particularly safe performance (and stability-as-in-"no longer crashes in prod") fixes.

        And yes, part of the goal of a long-term stable distro is that it provides you measurable security and stability over unmeasurable but potentially greater security and stability. They don't fix every CVE, but they do fix the flashy ones. You can look at it cynically and say, this is the distro for people who want to tell their boss "Yes, we patched Heartbleed and Shellshock" but don't inherently care about security. But on the other hand, flashy vulnerabilities are more likely to be exploited, so it's not a particularly bad tradeoff.

        • mistrial9 7 years ago

          ..booting a machine with Ubuntu 14.04 as this is written: it is NOT as easy as 'snapshot and run it forever', because the OS people are trying to HELP you, by FORCE, onto a current version. Plus, so much of these machines' success was networking; they are network-based and rely on the network to operate more things than anyone casually realizes..

          It is a GOOD thing to run old versions, on purpose, by your personal choice. It is NOT GOOD to have help by force, and in the US legal system at least, many individual rights are based on this assumption, even with some inevitable negative outcomes. Please note that in many parts of the world, and in many kinds of organization, this trade-off is NOT made, and quite a few fundamental technical decisions are going to be made along the lines of 'do it, there is no choice'.

        • lima 7 years ago

          As you say, it goes both ways. Many kernel vulnerabilities are found and fixed within weeks to months of introducing them, with LTS distros totally unaffected.

          And then you have bugs being fixed on master (sometimes silently), and the backport maintainers fail to backport them.

  • rwmj 7 years ago

    So I hope I can answer some of these [disclaimer: I work for Red Hat]:

    Python: This is about the module system. Modules let you install different versions of parts of the stack. For example, different Python, different Apache version, different QEMU. These will move much faster than base RHEL because they're now decoupled. You can install one version of each module from a choice of several versions available at any one time -- it's not parallel install (for that there is still Software Collections). The reason for not having parallel install is basically because people use containers or VMs so they don't really need it, and parallel install brings a lot of complexity.

    For Python we tried to remove all the Python dependencies from the base image. We didn't quite manage it because of dnf (although that is in the works, with at least the base of dnf 3 being rewritten in C++). So we need a reliable System Python which isn't in a module (else dnf would break if you installed modular Python 2.7). Basically, don't use System Python unless you're writing base system utilities; instead, "yum install python3" should pull in the right module.
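
    A sketch of that split, assuming a RHEL 8 box (the interpreter paths are the real ones, the script names are hypothetical): base-OS tooling pins the internal interpreter via its shebang, while applications use whatever python3 you explicitly installed.

    ```shell
    # Base-OS utility: pins the internal interpreter (not for user code)
    printf '#!/usr/libexec/platform-python\nprint("system tool")\n' > my-system-tool
    # Application script: uses the python3 you explicitly installed
    printf '#!/usr/bin/python3\nprint("app")\n' > my-app
    head -n1 my-system-tool my-app   # compare the two shebangs
    ```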

    Kernel: As usual the version number isn't that interesting, as a lot of work will be done through backports.

    ntpd: Can't say I'm very happy about this myself :-(

    Network scripts: It's NetworkManager all the way. Again, mixed feelings about this, but I can't say I loved network scripts either.

    • beagle3 7 years ago

      I have experienced only joy switching from ntpd (and worse, openntpd) to chrony.

      Why aren't you happy with the ntpd->chrony move?

    • noinsight 7 years ago

      > For Python we tried to remove all the Python dependencies from the base image

      Do you know why? I think it would be cool to not have any interpreted languages in the base image and FreeBSD manages to do that but I don't consider it that critical. For me it would be more interesting to not have Perl at all than Python...

      I guess the situation with Python 2.7 on RHEL 7 was/is that painful?

      • rwmj 7 years ago

        There's been a huge effort to get the base RHEL image size right down, so obviously getting rid of Python would help there. As for why we need to reduce the size of the base image, the answer is - as always - because containers.

        • Crontab 7 years ago

          I agree with this. My personal opinion is that advanced scripting languages, outside of shells, shouldn't be installed by default.

          (Of course, this usually gets killed pretty quickly, as dependency hell quickly brings in advanced scripting languages.)

          • int_19h 7 years ago

            I'm curious where the line for "advanced" lies, although in principle I'd agree that Python is certainly past it.

            But shouldn't interpreters be good for reducing the overall runtime code size in principle, if enough system tools run on them? High-level bytecode can be very compact.

            Or better yet, compile natively, but to threaded code, and share the stdlib behind it.

    • merb 7 years ago

      what about ansible? how does it fit into the "only system installed python"?

      • rwmj 7 years ago

        Ansible the client, or the target? Ansible doesn't need anything installed on the target (except sshd). The client, which presumably would be installed explicitly on far fewer machines, would bring in whatever Python it needs. I don't know if it uses System Python or a module, however, since I don't have it installed on RHEL 8 right now.

        • slrz 7 years ago

          Ansible does require that some flavour of Python is available on the target hosts. Without a Python interpreter, you're basically restricted to using the raw module (which, of course, you may use to bootstrap Python by invoking the package manager).

          Still, I guess most people don't bother with that and just assume the presence of Python (at least on Linux, the bootstrap-using-raw approach is already required on FreeBSD and others).

          https://docs.ansible.com/ansible/latest/modules/raw_module.h...
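
          A minimal playbook sketch of that bootstrap-via-raw approach (the host group is hypothetical, and dnf is assumed as the package manager; fact gathering has to stay off until Python exists):

          ```yaml
          - hosts: fresh_rhel8_hosts   # hypothetical group of Python-less targets
            gather_facts: false        # fact gathering itself needs Python
            tasks:
              - name: Bootstrap Python over plain SSH with the raw module
                raw: dnf install -y python3
              - name: Ordinary modules work from here on
                ping:
          ```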

          • shoo 7 years ago

            > Ansible does require that some flavour of Python is available on the target hosts.

            Support for managing windows hosts with ansible is implemented by replacing the use of SSH & python with winrm & PowerShell respectively.

  • Qerub 7 years ago

    • No Docker. But we got https://podman.io/ instead.

  • sparkling 7 years ago

    It makes a lot of sense that OS stuff points to a separate Python interpreter, don't you think? I like this approach.

    • SloopJon 7 years ago

      Red Hat Developer blog post on the topic:

      https://developers.redhat.com/blog/2019/05/07/what-no-python...

      This is an instance of what they're calling application streams, as explained in another post:

      https://developers.redhat.com/blog/2018/11/15/rhel8-introduc...

    • lima 7 years ago

      Exactly, and with App Streams you won't be stuck on an old Python version! This is an awesome feature.

      You already have that on CentOS/RHEL 7 with Software Collections, and App Streams make it a first-class citizen.

    • ckuhl 7 years ago

      Oh it definitely does. I remember seriously messing up a Linux Mint installation a while back because I upgraded all of the 3rd party dependencies that were preinstalled (because hey, newer is better, right?).

    • tannhaeuser 7 years ago

      Yes and no. It absolutely makes sense to ensure that admin scripts used by RHEL run in a predictable, tested environment; even more so since Python has dropped backward compatibility. OTOH, not even being able to rely on Python's presence is exactly the kind of thing that makes Python unsuitable as the shell replacement it is being promoted as.

      • CameronNemo 7 years ago

        Installing a python interpreter is one package. If you just want to write a quick script using the standard library, it is cheap as all hell.

      • Skunkleton 7 years ago

        Is Python being promoted as a wholesale shell replacement? There certainly are plenty of overlapping use cases, but shell is a better fit for many of these tasks. Until something like IPython can replace the interactive shell, I don't see Python replacing shell scripts entirely.

    • Diederich 7 years ago

      Aye; this should have been done a long time ago. I wanted to say 'since the beginning' (whenever that was), but maybe there were good reasons 20 years ago that I can't recall.

      It probably made sense when disk space was a lot more constrained.

      • toyg 7 years ago

        It just wasn't the practice back in the day. Does redhat ship a "platform-perl"?

    • vbezhenar 7 years ago

      Do you think there should be a separate platform-sh interpreter for system shell scripts, and so on? That seems kind of strange to me. Probably the difference is that the shell is a "complete" program, while Python is not.

      • auscompgeek 7 years ago

        The difference with system shell scripts is that you can't accidentally clobber libraries that the system tools require to work - shell scripts don't really have any notion of installable libraries.

        • owl57 7 years ago

          It's not like you can't clobber PATH with ease and in diverse ways… It's just less easy to unclobber: huge hacks like Nix are born in this quest.
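
          The kind of PATH clobbering meant here can be shown in a couple of lines (the directory and the shadowed command are arbitrary choices):

          ```shell
          # Shadow a system command by prepending a directory to PATH
          mkdir -p /tmp/demo-bin
          printf '#!/bin/sh\necho fake-uname\n' > /tmp/demo-bin/uname
          chmod +x /tmp/demo-bin/uname
          PATH=/tmp/demo-bin:$PATH uname   # the impostor wins over /usr/bin/uname
          ```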

          • pfranz 7 years ago

            > huge hacks like Nix are born in this quest.

            Would you mind elaborating on this? Maybe I'm using Linux wrong, but Nix seems like a huge step forward fixing a lot of my frustrations. Admittedly, I haven't used it in production.

            One huge benefit of language-specific package managers is having multiple versions of packages on the same system, so you can choose which you'd like for each project (without changing the OS). I feel like 10 years ago I heard a lot of grumbling that languages like Ruby or Python should just use apt/rpm, but I haven't seen any OS package manager put much effort into this use case (and that's ignoring mac/win support). The closest I've seen is something like Red Hat's Software Collections.

            Personally, I feel strongly that any software that's critical to your company should be decoupled from the OS. This conviction was born of painful and much-delayed OS updates, and following it makes things much easier long term.

            My personal use-case is that different projects (working on multiple in tandem) need different stacks of versions. Also giving the ability to swap versions on the fly. Here's one package manager specifically designed for it https://github.com/nerdvegas/rez

            Swapping versions in Linux is pretty heavyweight out of the box (with rpm/apt): download, remove the old version's files, write the new version's files. Only one is installed at a time, and a/b comparing libraries is a pain. For the things I need, I build into folders like libjpeg-turbo/2.0.2 and nasm/2.14.02 and set the ./configure flags to point to these... basically a more ad hoc approach to what Nix does.

            Where am I going wrong?
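
            That versioned-folder layout can be sketched with a `current` symlink for cheap version swapping (paths and version numbers here are made up, and /tmp stands in for wherever you install):

            ```shell
            # Side-by-side installs, one directory per version; builds point at
            # a specific one, e.g. ./configure --prefix=/tmp/opt/libjpeg-turbo/2.0.2
            mkdir -p /tmp/opt/libjpeg-turbo/2.0.1 /tmp/opt/libjpeg-turbo/2.0.2
            ln -sfn /tmp/opt/libjpeg-turbo/2.0.2 /tmp/opt/libjpeg-turbo/current  # atomic swap
            readlink /tmp/opt/libjpeg-turbo/current
            ```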

      • alexlarsson 7 years ago

        There isn't a history of people wanting to update to a newer bash and causing issues, though.

        Generally sticking with whatever bash your distro comes with is fine, whereas the services you deploy often depend on a particular version of python.

      • derekp7 7 years ago

        There absolutely is a platform sh interpreter. That is why when ksh came out, it was called ksh instead of sh, so that /bin/sh would still function as expected (same with csh, bash, zsh, etc).

        And many of these will emulate /bin/sh behavior when called as such.

      • mbrumlow 7 years ago

        Ubuntu has dash, a minimal sh implementation, as its /bin/sh.

        For the system I think it is a good idea to limit library interactions with things related to keeping the system running or booting.

  • aorth 7 years ago

    > • No more ntpd, chrony only

    Not surprising. I've preferred Chrony to ntpd on systems without systemd-timesyncd (like CentOS 6 and 7) for at least two years, ever since I read this Core Infrastructure article about Chrony:

    https://www.coreinfrastructure.org/blogs/securing-network-ti...

  • eikenberry 7 years ago

    Anyone have any thoughts on why chrony vs openntpd?

    Back when the ntpd security issues became a thing, I evaluated chrony and openntpd as replacements and went with openntpd. It seemed simpler, used fewer system resources, and had the OpenBSD team's reputation behind it.

    • jlgaddis 7 years ago

      For me, it comes down to the type of hosts I'm dealing with and how accurate I'd prefer their time to be. Years ago, I ran the reference implementation everywhere... but not anymore.

      OpenNTPD's goals are to be "good enough" and provide "reasonable accuracy". On an OpenBSD laptop and several "play" VMs (running OpenBSD), it was indeed "good enough". For individual desktops or laptops and the random "standalone" machine, OpenNTPD is simpler and "just works" (I like that it can "verify" the time using HTTPS hosts of my choosing).

      Nowadays, only my stratum 1 NTP servers still run the reference implementation. Everything else -- especially hosts which I may need to correlate events based on timestamps -- runs chrony.

      A comparison of the three implementations [0] is available on chrony's website. From a quick glance, I don't see anything blatantly incorrect or "biased". The comparison was discussed here on HN ~18 months ago [1].

      Basically, if accuracy to the second is good enough, OpenNTPD is fine. If you want more precision than that, go with chrony. It'll be MUCH more accurate and it really isn't any "harder" than OpenNTPD. You'll probably want to stick with ntpd if you're using reference clocks, although chrony supports a subset of them. If you're a nerd that wants the absolutely most accurate time you can get, Google "PTP 1588" as well.

      [0]: https://chrony.tuxfamily.org/comparison.html

      [1]: https://news.ycombinator.com/item?id=15324386

    • beagle3 7 years ago

      YMMV, but in my experience, if for whatever reason your clock is wrong by an hour in one direction (either ahead or behind, I don't remember which), openntpd will take ages to slew it back, whereas chrony (and ntpd) do the right thing.
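
      That behavior is configurable in chrony via makestep. A minimal chrony.conf sketch (pool name and thresholds are just examples, and it's written to /tmp here so the sketch is safe to run):

      ```shell
      cat > /tmp/chrony.conf <<'EOF'
      pool 2.pool.ntp.org iburst
      # step the clock if the offset exceeds 1s, but only during the first
      # 3 updates; after that, corrections are slewed gradually
      makestep 1.0 3
      rtcsync
      EOF
      grep '^makestep' /tmp/chrony.conf
      ```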

  • Tor3 7 years ago

    > Pretty colors when running dmesg!

    My most disliked feature. The colors in everything always clash with both my background color (the best one for my eyes) and my vision in general. The first thing I do on any new system is figure out how to turn off the colors. Otherwise I can't see any of the output.

    • dharmab 7 years ago

      The color choices are client-side. You can use something like base16-shell to customize it.

    • Bjartr 7 years ago

      Wouldn't it use the colors your terminal is configured to use?

      • Tor3 7 years ago

        My terminal isn't configured to use any colors. It's just xterm. The only colors configured are background and foreground, and the colors various tools insist on as defaults always clash with them (and with my vision too). There are no color environment variables enabled, or anything else indicating that colors should be used, but even so, colors are coming out of various command-line tools lately.

        • Bjartr 7 years ago

          If you run the script described in this post[1] you can display the colors that your terminal is configured to use. Just because you've overridden foreground and background doesn't mean you've altered the other colors xterm uses by default.

          [1] https://bbs.archlinux.org/viewtopic.php?id=51818&p=1

        • rashkov 7 years ago

          I think that if you configure the rest of the colors, then the commands will use those. I’ve set my urxvt term to use the solarized theme and I don’t have any problems with viewing colorized output. I’d have to test it, but I’m reasonably sure of this.

  • paulcarroty 7 years ago

    > No Python (that you should develop against) installed out of the box.

    Great news; I don't like having Python by default. All basic Linux services work great without it.

w-m 7 years ago

Here are the actual release notes (which don't seem to be linked anywhere from this marketing page):

https://access.redhat.com/documentation/en-us/red_hat_enterp...

  • CiPHPerCoder 7 years ago

    Good news: They ship PHP 7.2

    Bad news: ...without ext/sodium

    That's a frankly irresponsible decision for Red Hat to make.

    • Conan_Kudo 7 years ago

      > That's a frankly irresponsible decision for Red Hat to make.

      You say that without knowing anything at all about the situation? If you're a Red Hat customer, you could file a support ticket to get it pulled back in.

      Historically speaking, Red Hat is rather conservative about the number of crypto libraries they pull into their system because of the requirement to validate the system for certifications. But if there are legitimate requirements to have it included and managed by the base system, then usually they'll work to fix this if they are informed that it's needed.

      Again, if no one has officially requested it, then why would they pull it in?

      It can also help to file bugs on RHEL 8 in the Red Hat Bugzilla: https://bugzilla.redhat.com/enter_bug.cgi?product=Red%20Hat%...

    • wbl 7 years ago

      Red Hat has a long history of harming cryptography.

      • Conan_Kudo 7 years ago

        That's not fair. If you want to blame something for that, blame software patents. Some stuff used to be a huge minefield because of that.

    • beefhash 7 years ago

      Is it perhaps in a separate package, at least?

apaprocki 7 years ago

One thing to keep in mind if you build a lot of C++ -- this is the first RHEL version to use the C++11 ABI. Be prepared!

lelf 7 years ago

In case someone from RH is reading this: “Get the Study” links to https://www.redhat.com/en/page-not-found

_Understated_ 7 years ago

I have no appreciable Linux skills so forgive my naïvety with this question:

In the promo vid on their site, there are a couple of people gaming. Is this alluding to the fact that you can game on RHEL or that it powers the backend of games?

Just curious...

  • asark 7 years ago

    1) There actually are quite a few games available that run natively on Linux these days. Usually not AAA titles but lots of indie games. I've got (checks) about 550 games on Steam, largely through various bundle sales, and something like 30% of them run natively on Linux.

    2) Steam now bundles Wine, and lots of games are tested and semi-officially supported with it; that bumps the playable fraction to more like 60-70%. You can enable it for all games with a settings checkbox, too, and more often than not it works.

  • aetimmes 7 years ago

    Probably a little bit of both.

    You can game on RHEL but it wouldn't be my first choice of distro for it - IMO, Ubuntu and Fedora are both better-suited for that task.

    • burk96 7 years ago

      For gaming purposes, I also have to recommend Manjaro, purely for how well it handles installing graphics drivers. I wouldn't necessarily recommend it for someone's first Linux install, but once you know the basics in case something breaks, it provides a better gaming experience out of the box.

      • int_19h 7 years ago

        Mint has been doing that for a while.

        And I believe Ubuntu has finally started doing it as well in the most recent release?

        • burk96 7 years ago

          I wouldn't be surprised if Mint and Ubuntu are both in a better state in that regard since I last used them. Last time I used Mint on my gaming rig, the bundled Mint drivers had something funky with them. I don't remember what it was but I do remember I had to reinstall them. This was around Ubuntu's 15.04 I believe?

    • _Understated_ 7 years ago

      Why is that? Is it driver-related? Or is it that RHEL is more for stability rather than speed?

      • eitland 7 years ago

        RHEL comes with a price tag (although I think they now have free developer licenses.)

        Also RHEL development moves sloooowly. This is a feature and one of the main reasons to go with RHEL instead of not only unsupported distros but also supported-but-faster-moving distros, kind of like Windows LTSB (I know too little about both to compare them, but enough to know that in certain organizations the promise that it will stay the way it is and by default only receive security updates is a huge feature).

        • lmns 7 years ago

          They do have their Software Collections with new major releases for nodejs, python etc., though. It is only the base system that moves slowly.

    • snvzz 7 years ago

      For gaming, Arch.

      It's rolling release, including the whole stack that's supporting games. And they have a wrapper package that will install steam and its dependencies.

  • officeplant 7 years ago

    IIRC there has been some talk that it might be Red Hat powering Google Stadia.

foobarbazetc 7 years ago

Woooooooohooooooooo!

We basically packaged our own RHEL8 on top of 7 and I’m glad we don’t have to do that for 95% of the packages anymore.

acdha 7 years ago

They don't appear to have updated their official Docker registry yet but it should hopefully be available soon for anyone who needs to test things:

https://access.redhat.com/containers/?tab=images#/registry.a...

Following the pattern of https://access.redhat.com/containers/?tab=images#/registry.a... and https://access.redhat.com/containers/?tab=images#/registry.a...

cwt137 7 years ago

I only saw one beta. That's crazy if they released after doing only one beta.

digitalsushi 7 years ago

anyone have any insights on how Oracle does their intake of this to create OEL? it's always been a bit of a mystery to me.

  • pnutjam 7 years ago

    Download CentOS

    grep -rli 'centos' * | xargs -I{} sed -i 's/centos/Oracle Unbreakable Linux/gi' {}

    done

    • devops4free 7 years ago

      Why would you waste cycles invoking grep and xargs, and memory piping data back and forth, when pure sed can do it? ;)

      • pnutjam 7 years ago

        Oracle has some pretty beefy hardware. They can afford the cycles. ;)
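To keep the joke honest: GNU sed really can do the in-place rewrite on its own, no grep/xargs pipeline required. A tongue-in-cheek sketch (the file and its contents are made up for the demo; a real "rebrand" would target repo and release files):

```shell
# Pure sed, no grep/xargs: -i edits files in place, and the
# 'gi' flags make the substitution global and case-insensitive.
tmpdir=$(mktemp -d)
echo 'Welcome to CentOS 8' > "$tmpdir/release"

sed -i 's/centos/Oracle Unbreakable Linux/gi' "$tmpdir"/*

cat "$tmpdir/release"
# -> Welcome to Oracle Unbreakable Linux 8
```

Note the `I`/`i` case-insensitivity flag on the `s` command is a GNU sed extension, so this won't fly on a stock BSD sed.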

    • pjmlp 7 years ago

      It is a little bit more than that.

      • digitalsushi 7 years ago

        I know there's quite a bit of testing involved to verify that their Unbreakable Enterprise Kernel and Ksplice stuff are compatible. I suspect there's also the Spacewalk integration, which is fairly different from Red Hat's Satellite.

        • pnutjam 7 years ago

          I didn't realize, until recently, that satellite 6 is no longer based on spacewalk.

      • rurban 7 years ago

        DTrace esp.

ParadisoShlee 7 years ago

I sure love that "web console".

  • pnutjam 7 years ago

    looks like a toned down version of webmin.

    • hadrien01 7 years ago
      • pnutjam 7 years ago

        Yeah, I know it's cockpit. I'm not sure what it brings to the table. It's already possible to lock down webmin pretty heavily if I want to trust a windows admin to do linux.

    • wazoox 7 years ago

      It's "cockpit"; I regularly give it a try, looking for a better structured, more elegant webmin replacement; alas, cockpit has like 5% of webmin features.

      • nightfly 7 years ago

        But does it at least do that 5% well?

        • wazoox 7 years ago

          It's pretty, but not that functional. It tends to naively map underlying functions to buttons, without much thought given to actual UI or UX. It's a better, cleaner base than Webmin, but it remains inferior in every other respect.

robbyt 7 years ago

With the whole IBM thing going on, I bet CentOS 8 is going to take longer than usual to be released.

  • baijum 7 years ago

    The Red Hat Universal Base Image would probably be good enough for development instead of waiting for CentOS; https://www.redhat.com/en/blog/introducing-red-hat-universal...

    • lima 7 years ago

      Nice. I'm currently building an operator and this comes in handy.

      Quick question: how do I differentiate between freely available and subscription-only containers on the Red Hat Registry?

      • Conan_Kudo 7 years ago

        If they are in the ubi namespace, they are freely available. The Container Catalog will also tell you whether you can pull without a login when you look at the details of a container.
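For illustration, a minimal Containerfile built on one of the freely pullable UBI images might look like the sketch below (the ubi8 names reflect Red Hat's published registry paths; the installed package is just an example):

```dockerfile
# ubi8 images on registry.access.redhat.com can be pulled without a
# subscription; ubi-minimal ships microdnf instead of the full dnf stack.
FROM registry.access.redhat.com/ubi8/ubi-minimal

RUN microdnf install -y python3 && microdnf clean all
```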

    • vamega 7 years ago

      Can these containers be run on hosts that are not RHEL? It seems like it's allowed, but I'm not completely sure I read that right.

      I'd also like to know which packages are available in RHEL 8 but not in UBI containers. I can't find any information on what subset of the RHEL package universe UBI containers get; if you're aware of any, I'd love to be pointed to it.

  • lima 7 years ago

    If anything, CentOS 8 will be faster since they won't have to deal with a big migration this time.

    IBM has zero incentive to interfere with CentOS, it's the best advertising for RHEL they can get.

    • vbezhenar 7 years ago

      The best advertising would be releasing RHEL 8 for free for personal usage. I wonder how many workstation licenses ($300 per year) they're actually selling.

      • CUViper 7 years ago
        • vbezhenar 7 years ago

          I would want to use it on at least 3 computers, and I don't think the developer license allows that. Registering 3 different accounts would probably be an abuse of the system. Also, I don't really do any development for RHEL; I just use it for my personal computing needs.

          • fizgig 7 years ago

            I have a dev license and I can register 16 systems. Your mileage may vary, but it never hurts to try.

            • awill 7 years ago

              I have a single server in my house. Mostly used to back up all my different devices and to run Plex. I run CentOS today. I am not clear on the restrictions and if I would be allowed to use the free RHEL.

              With the hassle (subscriptions, restrictions, etc.) it isn't worth it.

              I do wish RHEL would allow it for usage that doesn't make money, like personal servers.

            • foobarbazetc 7 years ago

              Do you get updates on the dev license?

  • rwmj 7 years ago

    I don't think so - I know the CentOS folk and they are working very hard on a release. There is no interference from IBM, partly because Red Hat hasn't been acquired yet, and partly because why would they kill a cash machine that's proven to work so well? Despite some nonsense you read online IBM are not stupid.

    • B1FF_PSUVM 7 years ago

      > why would they kill a cash machine that's proven to work so well?

      You haven't been around a merger & acquisition process, I take it. It's usually like the scorpion and frog parable:

      A scorpion asks a frog to carry it across a river. The frog hesitates, afraid of being stung by the scorpion, but the scorpion argues that if it did that, they would both drown. The frog considers this argument sensible and agrees to transport the scorpion. The scorpion climbs onto the frog's back and the frog begins to swim, but midway across the river, the scorpion stings the frog, dooming them both. The dying frog asks the scorpion why it stung the frog, to which the scorpion replies "I couldn't help it. It's in my nature."

      (from https://en.wikipedia.org/wiki/The_Scorpion_and_the_Frog )

  • adontz 7 years ago

    And what is the CentOS 8 ETA? I found nothing on the CentOS website :-( Am I missing something?

  • unixhero 7 years ago

    What is IBM doing with this?

  • dralley 7 years ago

    Unlikely tbh.

ilovecaching 7 years ago

With Shadowman gone and Red Hat now a branch of IBM, my enthusiasm for this RHEL release is really dampened.
