CentOS 7 released on x86_64
(lists.centos.org)
Woo. Have been using the RHEL 7 Amazon AMI images, and it's nice not to worry about shell scripts / custom supervisord stuff for your web services anymore.
My node app is deployed with a single `myapp.service` file thanks to systemd:
[Service]
ExecStart=/usr/local/bin/node --harmony /var/www/myapp/server.js
Restart=always
User=nobody
Group=nobody
Environment=PATH=/usr/bin:/usr/local/bin
Environment=NODE_ENV=production
[Install]
WantedBy=multi-user.target
Then: iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3000
(since I run my app on a low port so it can run as nobody, and 3000 isn't visible to the outside)
cp myapp.service /etc/systemd/system
systemctl enable myapp.service
systemctl start myapp.service
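If you edit the unit file later, systemd needs a daemon-reload to pick up the change; checking on the service is also one-liner territory (small sketch, unit name as above):
systemctl daemon-reload            # re-read unit files after changes
systemctl status myapp.service     # is it running, plus the last few log lines
journalctl -u myapp.service -f     # follow the app's journal output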
This has all the stuff you expect:
- Not restarting repeatedly if the app is bouncing up and down.
- I can see how it goes with `journalctl`, which reads journald messages, and those messages come from a source called `myapp` rather than the ancient syslog facilities (local0, uucp, lpd) which everyone just ignored in favour of grepping.
You could potentially register your application with firewalld to make the iptables part even more elegant - then the port would only be open when the service was running.
That said, I'm somewhat on the fence about firewalld in a server context - the zones are really designed around mobile computing use-cases, and I'm not a fan of XML as a configuration language.
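For reference, a rough sketch of the firewalld equivalent of the REDIRECT rule above (assuming the default zone and the same ports):
firewall-cmd --permanent --add-forward-port=port=80:proto=tcp:toport=3000
firewall-cmd --reload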
I gave CentOS 7 a spin last night (been waiting for the release!).
I installed it on VirtualBox (using a Windows host pc). I must say, my first impressions are that this is a very nice release.
They implemented the same new installer from Fedora, which I really like. It makes for 2 clicks (Language and Keyboard) and then it's installing in the background while you can setup your passwords and accounts. This makes for a very fast install.
The system just seems to feel "faster". I know that is completely subjective, but mind you, I did not have the VirtualBox Guest Tools installed on the VM, and it still felt "native". I'm not sure what about it specifically feels faster, just everything I guess... especially booting.
eth0 was set to "ONBOOT=no" by default, which tripped me up for a half second (since all previous releases came with all adapters up). A quick config change and a restart of the network service, and it was up and running. I was surprised there were already updated packages available, including a kernel update.
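In case it saves someone else the half second, roughly this gets it going (the ifcfg file name depends on how CentOS 7 named your adapter, so eth0 here is an assumption):
sed -i 's/^ONBOOT=no/ONBOOT=yes/' /etc/sysconfig/network-scripts/ifcfg-eth0   # adjust interface name
systemctl restart network.service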
VirtualBox did not want to install its Guest Additions/Tools on the VM, due to some change with "numa"; from my understanding it's memory-allocation related (could be wrong). In any event, I got the "struct mm_struct has no member named numa_next_reset" error.
Some digging around and I located this patch: https://www.virtualbox.org/ticket/12638
Applied it to the VirtualBox source once the installer unpacked itself to the /opt directory. After the patch, the VirtualBox tools compiled OK and installed.
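Roughly the sequence, from memory - the exact paths and version numbers under /opt depend on your Guest Additions release, so treat this as a sketch:
cd /opt/VBoxGuestAdditions-*/src/vboxguest-*    # wherever the installer unpacked the sources
patch -p1 < ~/numa-fix.patch                    # the patch from the ticket above (strip level may differ)
/etc/init.d/vboxadd setup                       # rebuild and reinstall the kernel modules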
I have a lot more playing around to do this evening... but so far, happy to say, this is shaping up to be a great release.
How do you do "configtest" and "graceful" with systemd/systemctl
(trick question, you cannot, use old init.d scripts)
>trick question, you cannot
and 'apachectl configtest' or 'nginx -t' iirc

[Unit]
Description=The Apache HTTP Server
After=remote-fs.target nss-lookup.target

[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/httpd
ExecStart=/usr/sbin/httpd $OPTIONS -DFOREGROUND
ExecReload=/usr/sbin/httpd $OPTIONS -k graceful
ExecStop=/usr/sbin/httpd $OPTIONS -k graceful-stop
KillSignal=SIGCONT
PrivateTmp=true

[Install]
WantedBy=multi-user.target

Why would I do a `configtest` with a thing that starts or stops a service in the first place? That's what `apachectl configtest` or whatever is for.
I wouldn't expect to go to Windows' list of services, right click one and run arbitrary commands either.
How do you do "configtest" and "graceful" with old init.d scripts?
# /etc/init.d/mysqld configtest
Usage: /etc/init.d/mysqld {start|stop|status|restart|condrestart|try-restart|reload|force-reload}
# /etc/init.d/mysqld graceful
Usage: /etc/init.d/mysqld {start|stop|status|restart|condrestart|try-restart|reload|force-reload}

configtest and graceful are "commands" used by other scripts such as the httpd (Apache) and nginx scripts, which can take alternate signals to do other actions.
But even in your example, condrestart and try-restart aren't supported by pure systemd units - however, if you were using the init.d version you could do the easy-to-remember/type `service mysqld condrestart`.
Fortunately with 7.0 you can keep using the old init.d scripts and custom commands; hopefully that will never go away in any of the later 7.x versions (I don't see why it would).
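For what it's worth, the configtest-then-graceful workflow is still doable under systemd by combining the daemon's own config checker with systemctl - a sketch using the stock httpd unit quoted above:
apachectl configtest && systemctl reload httpd.service
# 'reload' runs the unit's ExecReload=, which for httpd is "httpd -k graceful"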
I'm not a systemd expert, but for configtest, you could add a 'Config' parameter and patch systemd to record the hash of each config file at start time. From then on, all .services, ever, that supplied their Config would be able to do a configtest and condrestart. DRY.
What's 'graceful'? A particular signal?
that looks so much cleaner than init.d scripts. I have heard some people refer to systemd as not very linux-esque - can someone comment on what that might mean?
Systemd isn't just replacing the init process and service manager; it encompasses many other boot- and session-related features. As an example, Gnome now uses parts of systemd for session management (replacing ConsoleKit and others). While there are workarounds, this means no one can really use any other init system besides systemd, or things won't work correctly.
This is what many people view as un-Unix-like: a big dependency that can't be replaced in the chain. The systemd developers also develop udev (which actually builds out of the same source tree as systemd), and there have been a few arguments about whether certain responsibilities are those of the kernel or of udev/userspace (http://lwn.net/Articles/518942/, http://lwn.net/Articles/593676/). Systemd also touts its cgroup usage, though they view that they should be the only user of cgroups, since they are in a spot to "manage things properly" (which is partially true). (Some services have also started relying on systemd's cgroups to clean up orphan processes, which is bad for other init systems that don't do this.)
So no one really hates how systemd works, just how it's managed and what it aims to do. If it were a simple init system and service manager, there would probably have been no arguments.
>though they view that they should be the only user of cgroups
And they're absolutely correct. There can only be one cgroups manager on the system. cgmanager isn't finished, either.
>As an example, Gnome now uses parts of systemd for session management (replacing ConsoleKit and others). While there are workarounds, this means no one can really use any other init system besides systemd, or things won't work correctly.
Gnome requires logind, which is the successor to consolekit. Gnome has repeatedly stated that they're willing to work with the BSDs et al with workarounds. You can either use the old logind, which doesn't require systemd as PID1 (due to cgroups management changes) or you can develop your own replacement to logind, one that doesn't require systemd as PID1.
People criticize these decisions, but they have no suggestions on how to fix it and, for the most part, they have no inkling of why there's a requirement.
1) Lennart Poettering
2) MAH FREEDOMS!
There are technical reasons people don't like systemd that are valid, but usually the die-hard anti-systemd people boil down to hating Lennart Poettering and complaining about how the US Constitution must somewhere say they never have to use systemd.
There are some valid concerns (big scope, bloat, feature creep, etc.), but they really haven't come to fruition, although they are still valid concerns.
At the end of the day, without much effort or trouble of switching to systemd from OpenRC, my system boots faster, which is great.
Does "years of fucking with PulseAudio problems" count as its own item, or is that a subset of item 1?
I've never had issues with PulseAudio personally.
...said no one ever.
The very first thing I used to do on any new Linux install (until most distros stopped including it) was to uninstall Pulse and all of its dependencies.
I still uninstall it, even though I know it's improved a bit. I just can't get to the point of trusting it, or see the point of yet another layer of latency and bugs between programs generating audio and the sound card. They should have spent all that effort making a nice interface to dmix or something, rather than bashing ALSA for being "Linux only" and then turning around and creating systemd, which is Linux-only by design. I mean, we get it. You're the next Miguel de Icaza. The savior of Linux, coming to save us from the terror of text files, open standards, and clean dependencies. People must be forgetting the hell that was CORBA and Linux circa 2001 or so. You still can't install many apps without installing half of GNOME.
I'll say it too.
There is a lot of unfairness around PulseAudio. Sure, it had its share of bugs, BUT so did ALSA. PulseAudio was pushing the envelope, so crappy ALSA drivers showed their limits. Crappy sound chips too. But PA took most of the blame although it was not always its fault. It could have been handled differently, maybe.
ALSA is a piece of CRAP so it's easy to compare with.
OSSv4 was the last sane thing in the linux audio department. :(
What sound card(s)?
Pulse audio on various old Thinkpads (X200s, X60, X61s) seems fine (Debian Wheezy and CentOS 6/pre-release 7) with built in sound card and a cheapo USB microphone as 'input'.
I did.
It depends what sound hardware and software you use, what the memory layout of the PulseAudio daemon happens to end up like on your system, your tolerance for audio glitches, and how long you go between reboots. The code quality's awful but if you're lucky you can miss out on the worst of the issues.
I can at least vouch for 1) drivers (arguably this is an alsa thing, but pulseaudio had a way of exposing problems), and 2) your tolerance not just for glitches, but also latency (in the form of buffers) and overall sound quality.
I suspect, based on how few people seem to be upset about pulse audio (as opposed to what one might expect) that some subset of popular sound hw worked rather well. It's just not a subset I ever owned.
The issue with drivers is a bit interesting. I wonder what pulseaudio does that alsa doesn't.
Haven't had issues across multiple systems, even back when it first came out.
I still use ALSA on my desktop system due to weird issues that only happen with PulseAudio.
I still don't trust Lennart, even though systemd looks nice from what I've seen.
Did you report the issues?
There are already several similar issues in their bug tracker with no resolution.
These all seem similar to what I've been experiencing:
https://bugs.freedesktop.org/show_bug.cgi?id=46350
https://bugs.freedesktop.org/show_bug.cgi?id=46296
This is what happened the last time I decided to try PA again:
Neither, funnily enough. I've never quite understood the hate, but to be fair, if the complaints I've seen were widespread problems, that would make sense.
s/pulseaudio/linux/
does that suck too?
>...an abhorrent and violent slap in the face to the Unix philosophy...
I can't imagine why this didn't get more traction.
>Oh, an embedded HTTP server is loaded to read them. QR codes are served, as well.
And outright falsehoods. systemd-journal-gatewayd is entirely optional. People keep harping on a packaging mistake when it was first pushed to Fedora for testing, but repeating it so many times doesn't make it true.
>In fact, udev merged with systemd a long time ago
systemd relies on udev and dbus, but udev doesn't pull in systemd as a hard dependency: another falsehood I've seen parroted by those in support of the eudev fork.
I'd be very interested to see a qualified crypto expert weigh in on the "sealing" that journald uses. This is one of two indicators of a troubling level of arrogance from the developers. The crypto method used by journald to verify that messages haven't been tampered with is called Forward Secure Sealing [0]. It's based on an invention by the brother of Lennart, the lead developer, and for a long time after the first release even the whitepaper describing it in detail was "coming soon". The code he finally produced is [1] - but rather light on documentation. I'm still unaware of any proper analysis of this, and using your brother's own crypto methods while ignoring all the questions this has raised does not come across well - and appears to seriously violate the "don't roll your own crypto system" rule.
The second indicator is the attitude to bugs, of which [2] is a good example - several of the developers appear to be extremely defensive towards any suggestion of defects in their software, and simply close bugs blaming the users, other software, anything else.
I'd be hopeful that RedHat manage to rein this behavior in, but that didn't seem to work for Ulrich Drepper when he was employed by RedHat to work on glibc, and I'm not sure if it's going to work here.
That said - I'm not in the "systemd is awful" camp - I do think there's a whole bunch of things it does really well, and that a lot of the hate is really quite reactionary - but the thing that frustrates me is that between the haters and the supporters, there are important questions that are getting lost in the noise.
[0]: http://lwn.net/Articles/512895/
[1]: https://github.com/mezcalero/fsprg
[2]: https://bugs.freedesktop.org/show_bug.cgi?id=76935
I just can't agree with your [2] as a problem. The actual problem (an assertion failure in systemd) was fixed, several alternative workarounds were provided in case the user can't or doesn't want to upgrade systemd immediately, and the functionality changes about when and where and how to log were being discussed on the mailing list, as they should be, and the reporters were directed there politely, even after violent, vitriolic attacks. After discussion concluded on the mailing list, systemd was changed to direct debug logs away from kmsg as soon as journald is available.
I can't find anything to complain about from the systemd team on that bug report. I'd just dismiss it as varying personal standards of politeness, but the complaints on that bug report are themselves far far worse, with vitriolic abuse and death threats, so there's got to be something else going on here.
There's also this: https://bugs.freedesktop.org/show_bug.cgi?id=73729
Their attitudes towards journal log corruption have been rather apathetic, as well, though I cannot find the particular bug report at the moment.
That discussion escalated quickly... What I got from the thread is that systemd will only work with glibc, intentionally? If that's the case then it's kind of sad. Since systemd will become the standard, glibc will be the only option.
Relevant work in the area is Log Hash Chaining as described in RFC 5848, which at least has been through some peer review.
I don't know why they chose to ignore that, let alone what their design is really supposed to guard against. Their design allows an attacker a window of 15 minutes where they can rewrite the log at will.
So the short of it is: Keep using remote logging. Authenticate that. Don't rely on journald.
(I too have had Drepper vibes about the whole situation for quite some time. But a new init standard was long overdue and if distros can finally rally around systemd it might be worth it.)
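If you do stick with remote logging, a minimal sketch of one way to wire it up (assuming rsyslog on both ends; the hostname is a placeholder):

# /etc/systemd/journald.conf - hand messages to the local syslog daemon
[Journal]
ForwardToSyslog=yes

# /etc/rsyslog.d/remote.conf - ship everything to a central box over TCP
*.* @@loghost.example.com:514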
I like Linus's comment[0] about a workaround for the issue in your 3rd link.
[0]: http://lkml.iu.edu/hypermail/linux/kernel/1404.0/01331.html
From your [2]: Like for the kernel, there are options to fine-grain control systemd's logging behaviour; just do not use the generic term "debug", which is a convenience shortcut for the kernel AND the Base OS.
all I got from [2] is "if you would like to have productive discussion, move to the mailing list. if not, prepare to be banned."
Optional, but enabled by default. Most distros seem to ship with defaults and only customize flags related to library paths and FHS details (besides whatever patching they may apply). The fact that it's even there is distressing enough, really.
No one is saying udev pulls in systemd as a dependency. Where is that said?
>Optional, but enabled by default
Where is it enabled by default? In what distro does gatewayd get pulled in by default?
>No one is saying udev pulls in systemd as a dependency.
By the eudev hobbyists, whose justification for their fork was largely "we don't want to force people to use systemd" and "no, we didn't contact upstream with fixes before we forked" in their presentation.
Why do people use words like "violent" to describe the decision to change process launchers? Maybe if they'd toned down the rhetoric just a tiny bit that campaign might've worked better.
Linux did kind of get the short end of the stick with System V initialization. BSD rc scripts are also written in shell, but much cleaner (particularly when you make use of rcorder(8) dependencies).
systemd's declarative unit file syntax is easier to reason about, but comes at the expense of having to memorize a ton of options and being fundamentally dependent on the toolbox provided to you by systemd, since you can't code your way out of unconventional corner cases as easily.
The unit file syntax isn't the reason people complain, though.
Of course your systemd startup can start a sh script with ease. I think systemd is going to make my life so much easier. I'm more on the dev side of devops, but I still need to deploy correctly restarting apps that depend on complex systems working, i.e. NFS mounts being available for data, etc.
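e.g. a sketch of a unit that won't start until an NFS mount is actually there (the unit name, binary, and mount path are all made up):

[Unit]
Description=App that needs data on an NFS mount
# don't start until /srv/data (an NFS mount from fstab) is actually mounted
RequiresMountsFor=/srv/data
After=remote-fs.target

[Service]
ExecStart=/usr/local/bin/myapp
Restart=on-failure

[Install]
WantedBy=multi-user.target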
Slackware has always used BSD init scripts and it's one of the oldest distros. You used to be able to choose any init system you wanted but it seems that systemd is becoming entwined with "Linux" in some nasty ways. I wonder how long Slackware (or any other distro) will be able to avoid using it since software like Gnome is starting to require it.
FWIW, systemd can also call scripts under /etc/init.d if that is really what floats your boat. It is fully backwards compatible, you just lose a lot of the flexibility of systemd when doing that. The redis-server package in RHEL7 (from EPEL) installs an init script and I know that systemctl starts it.
It looks like your node deployment is very clean. For your application, are you simply using npm and a package.json file?
Yes. I commit node_modules though.
Er 'low port' should be 'high port', ahem.
I'm always scared of CentOS/Fedora. Whenever I update something, the packages are very old. That leads to using third-party repos, and without any kind of documentation the next sysadmin will be in trouble. I once installed Gearman on CentOS 5.9 and it was a nightmare: the original third-party repo didn't have Gearman, and I had to use another repo which complained about php-common conflicts. Remi and Webtatic did help in the end.
It's hard to keep CentOS up to date with the latest software, IMHO.
I'm not a sysadmin, but I think the idea behind APT and YUM is the same, so why is YUM so much more trouble to use?
The issue is not the package manager, the issue is the repositories you're enabling. See https://iuscommunity.org/pages/TheSafeRepoInitiative.html for details.
Fedora is the RedHat "testbed" and has regular updates, similar to the Debian/Ubuntu universes.
One of the core features of CentOS/RedHat/Scientific Linux/etc is their longterm support and package freeze (excluding critical security updates). If you are reliant on a different version of software from the default repositories (both newer or older) then you are encouraged to build and package them yourself. Additionally in the RedHat-derivative universe there is the EPEL package repository which contains a wider range of software and is kept more readily updated in addition to being maintained by the Fedora Project (essentially RedHat).
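For what it's worth, on CentOS enabling EPEL is usually just the following (sketch; the epel-release package is carried in the CentOS Extras repo, while on RHEL you'd grab the release RPM from the Fedora mirrors instead):
yum install epel-release
yum repolist enabled     # confirm the epel repo shows up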
Besides using third party repos, you can also use Red Hat Software Collections for the official Red Hat offering of more recent packages (requires RHN, not available for CentOS).
https://access.redhat.com/documentation/en-US/Red_Hat_Softwa...
As another approach to mitigating dependency hell, you might consider using Docker containers for your services.
Software Collections are available for CentOS.
Very nice, thank you.
Huh? Fedora is probably the closest you get to a bleeding edge distro without going rolling release.
Only if you upgrade regularly. Support is very short and you'll get stuck without updates.
Perhaps it's what you're familiar with and the intended use case. I personally can't stand the way APT won't let you override certain behavior. RHEL / CentOS have always favored stability over being bleeding-edge, and in my experience EPEL provides enough newer software to make the system usable for most cases, although if you're wanting the latest frameworks, etc., I can see that being incompatible with your needs. I have very rarely had any problems with broken dependencies in Red Hat repositories except for the occasional 15-minute hiccup. SUSE seems to have more problems, but it has still never been enough of a problem to make me question using it in production.
I think the third-party frameworks issue can be solved with LXC / Docker. If app x requires library y and supporting utility z, this can all be put into a container without having to update the OS's versions of y and z.
Although personally I've had more instances of running into an incompatibility with the older CentOS 5 kernel than with any of the libraries in the distribution - which LXC wouldn't be able to help with.
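A minimal sketch of that idea - the image tag, package names, and binary name here are all placeholders:

# Dockerfile: bundle app x with its own library y and utility z, independent of the host OS
FROM centos:centos7
RUN yum install -y library-y utility-z && yum clean all
COPY app-x /usr/local/bin/app-x
CMD ["/usr/local/bin/app-x"]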
"CentOS is an Enterprise-class Linux Distribution derived from sources freely provided to the public by Red Hat." ?
It used to be from a "Prominent North American Enterprise Linux Vendor". Something changed at some point.
They got official support is what changed.
http://www.redhat.com/about/news/press-archive/2014/1/red-ha...
Back in January there was a "[blank] is joining [blank]!" type of announcement.
http://lists.centos.org/pipermail/centos-announce/2014-Janua...
Redhat employs some of the CentOS people now, although their build process still seems to be pretty separate.
I must say that compared with the CentOS 6 release, CentOS 7 is a breath of fresh air. I'll assume for the moment that this is because of Red Hat's involvement in the process - specifically, the availability of build scripts and the openness of communication throughout the process.
The delay in releasing CentOS 6 and the delays in updates had me worried about the future of CentOS.
The CentOS team completely re-did their build system at the start of CentOS 6, and that was the cause of the delay. This build system still has nothing in common with Red Hat's build system, and it worked pretty well for CentOS 7!
Now there are 2 kernel boot parameters required to get old-style "eth#" interface names: net.ifnames=0 and biosdevname=0.
CentOS 6 only required biosdevname=0.
Thank you for this! I was going crazy trying to figure out how to get "eth0" back. Works perfect.
yum install net-tools is also a must
I think I've spent the first dozen hours with centos 7.0 just trying to get it to behave more like 6.5
Having to remember to run a script every time I edit grub configuration is annoying as hell too.
I mean that is so natural and easy to remember, lol - I had to make "update-grub" (like debian) an alias for it:
grub2-mkconfig -o /boot/grub2/grub.cfg
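For anyone following along, the whole dance for the old interface names ends up being (a sketch; the kernel args are the ones from the comment above):
# 1. add the args to GRUB_CMDLINE_LINUX in /etc/default/grub:
#      GRUB_CMDLINE_LINUX="... net.ifnames=0 biosdevname=0"
# 2. regenerate the config (the step that's easy to forget):
grub2-mkconfig -o /boot/grub2/grub.cfg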
512MB is the minimum memory requirement for CentOS-7
Well. Dangit. Grumble grumble...
There are quite a few choices of lightweight Linux distro. This isn't aimed at that market
Oh, I know, but I've been happily using CentOS 6 on my 128MB box, and was looking forward to CentOS 7.
It looks like the listed minimum for RHEL 6 was 1gb[1]. I don't think 128 was officially supported even then, so if you're happy with how it works now, you may still be happy with 7.
1) http://www.slac.stanford.edu/comp/unix/linux/install_RHEL6.h...
The installer requires 406mb of ram to work: http://wiki.centos.org/Manuals/ReleaseNotes/CentOS7#head-281...
Would it be possible to assign swap to a removable media device and then bootstrap the installer to run in that environment on a device with 128mb?
Probably. You'll have to hack the install procedure to enable swapping before it runs, though.
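Probably something along these lines from a shell inside the installer environment (completely untested sketch; /dev/sdb1 is a made-up device name for the removable media):
mkswap /dev/sdb1     # prepare swap on the removable media
swapon /dev/sdb1     # enable it before anaconda runs out of memory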
If you're serious about trying this, the fastest path is to install CentOS in a VM on a suitable machine that isn't going to spend a day in the installer swapping like crazy, and copy the image to a physical drive which you eventually stuff into the ancient low memory box.
> I've been happily using CentOS 6 on my 128MB box
I used to do that too, but then I figured there's a constructive use I could give to my old desktops when any such system reaches the age of replacement - just max it out on RAM and declare it a "home server". Now I can run a Minecraft node for my kids on it - with 2GB of Java heap, no less.
I mean, my Raspberry Pi has more than 128 MB of RAM.
Lots of RAM - like chicken soup for your server.
Oh, RAM sticks are cheap, esp. for old machines. This is true. But I use a VPS, and the 128MB instances are cheap enough to be worth taking ten minutes to shut off junk services (which improves security anyway).
It might be worth trying to install it on a larger instance, save the image, and re-launch it into a 128 MB instance. It might work that way.
When I'm building tiny Debian images, I use debootstrap, which doesn't seem to use much RAM at all.
There used to be a Fedora equivalent called febootstrap, but it looks like it's mutated a bit since then: http://people.redhat.com/~rjones/supermin/
You may be able to work around it that way.
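For comparison, a typical debootstrap run is a one-liner (the suite and mirror here are just examples):
debootstrap --arch=amd64 wheezy /mnt/target http://ftp.debian.org/debian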
Out of interest, what machine are you using CentOS 6 on? I find 128mb kinda tight - although I've recently found a VPS provider who's managed to get Debian Jessie working in 96!
Once upon a time I created "rinse", which is an RPM-using debootstrap-like tool.
I've handed that off now, but the project is still very much alive:
BuyVM, $15/year
The good news is BuyVM just announced they are preparing a CentOS 7 image, so the installer RAM limit may not matter.
You are right that 128mb is a bit tight; I have problems with some services that just gobble memory. Apache+PHP are terrible in that regard.
Granted, it is an enterprise-level distro with iSCSI, storage drivers, XFS as default, and other relatively heavy things included by default. Running on smaller machines was never a goal; if it ever worked, it was probably accidental.
I don't even know if they have an ARM port (maybe an ARM 64-bit one for those new hip low-wattage ARM microservers).
I can learn to adjust to systemd
But wow I cannot stand grub2, what a mess.
Agreed. Config files 100 times bigger and more complicated. Why? Go to its home page and find a link that says "differences between GRUB Legacy and GRUB", which simply goes to the GRUB 0.97 manual, which says absolutely nothing about the differences between the 2 versions. They really care about their users, that's for sure.
On Debian you usually stick to changing options in /etc/default/grub and don't touch anything in /etc/grub.d.
grub2 is pretty much the textbook definition of "overdesigned." All it has to do is transfer control to the OS... that's it. But somehow it grew code to parse filesystems, set up graphical user interfaces, load modules, and do half a dozen things that it really has no reason to do. Half the time this hairball won't even boot after you change something, because you forgot to run the correct script to refresh the other scripts, or you moved something on the disk.
Just install elilo and enjoy having a system that actually works.
Aye, and now grub2 requires a 1MB partition to boot a GPT labelled disk on a BIOS (non-EFI) system. Well, looks like we're out of primary partitions. Adios swap.
GPT labelled disks don't have a limit on the number of primary partitions
Nice! Didn't know that.
Is there something notable about this beyond just a new version?
RHEL 7 was released on June 10[1], but many people (myself included) don't have a RedHat subscription and run CentOS, the 'community' edition of RedHat.
Up until today, CentOS 6.5 was the latest release... so I can start using CentOS 7 now and enjoy the benefits that RedHat introduced in RHEL 7. You can see in this Wikipedia chart[2] that the delay before CentOS releases a community version of RHEL has improved a lot in recent years.
Also, iirc, this is the first major CentOS release with the CentOS project more or less under RedHat's wing[3].
[1] http://www.redhat.com/about/news/press-archive/2014/6/red-ha...
[2] http://en.wikipedia.org/wiki/CentOS#Versioning_and_releases
[3] http://www.infoworld.com/t/linux/red-hat-stamps-its-influenc...
CentOS/RHEL 7 uses systemd. That's a notable change. Also, LXC and Docker should work out of the box. And XFS is now the default instead of ext4.
Release notes: http://wiki.centos.org/Manuals/ReleaseNotes/CentOS7
For a description of new features see:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterp...
You can go to https://git.centos.org which is hosting all the sources used to build the system. Of course for RPM packages that is only metadata, not the actual upstream software source.
Just out of curiosity, since it sounds like you might know: on ubuntu I can do something like sudo apt-get source packagename and the system will just download the associated source code if a source repo is setup in the system's sources list. Does RHEL/CentOS have a similar capability?
Yes it does. "yumdownloader --source package_name" will download the source RPM. yumdownloader is part of the yum-utils package.
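For example (the package name is arbitrary):
yum install yum-utils
yumdownloader --source httpd
rpm -qpl httpd-*.src.rpm    # list what's inside the source RPM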
I've never used RHEL. It doesn't seem prohibitively expensive. What advantages are there over Centos, other than support?
The support goes both directions. It is a way to get the organization you work for to indirectly donate to a variety of free software projects, and your company also gets the contractual safety net that they want.
Yes. If you have RHEL installed on hundreds (or thousands) of HP/Dell servers at a Fortune 1000 company, you'll have someone to call if the kernel on some production machine keeps dumping.
I can see a bootstrapped company using CentOS, or a company running on angel/seed money. Once a company gets Series A funding though, you have to start wondering why they wouldn't upgrade to Red Hat. The message from the company basically is they'd prefer the sysadmin to spend their nights and weekends figuring out problems, instead of making a small payment for support service. This is the type of position you want to run, not walk from.
On a job interview, a good question for a sysadmin to ask an interviewer when they say "do you have any questions to ask me?", is, "Will I support any machines, operating systems or applications that are not under a vendor support contract?" Inevitably there will be one or two legacy machines or applications, but if you get a laundry list in response, run.
I've been on both sides of the fence with regards to vendor support. In an extreme case at one company I worked with, when I'd report a slight irregularity or outage to my boss, the first question he'd ask is what is the vendor support ticket number. After all, the company was paying for the support, so it was a very high failing if an employee spent any energy in trying to solve an issue.
The problem with this, of course, was that I don't want to open a vendor ticket until I have something to give them to work on. A random program crashed? First I'd blame the programmer or some other issue, not a kernel bug. But that particular manager just wanted to pin everything on the vendors no matter what.
As an aside, since I've been supporting Linux systems for the last 1.5 decades, I've only had to call in OS vendor support twice -- once was to verify what I already knew (clock skew due to a bad hardware timer chip), and another time to satisfy an app developer manager who wouldn't take responsibility for his team's code.
Actually, I should add something -- with Red Hat support, you also get access to a whole history of past case logs at access.redhat.com -- these are discoverable through a Google search, but to see the full solution you have to be logged in. And this has actually kept me from needing to call Red Hat on a number of occasions, so I guess this can count as using Red Hat's support indirectly. And their knowledge base is very well written, esp. when explaining the root cause of specific known past issues.
"The message from the company basically is they'd prefer the sysadmin to spend their nights and weekends figuring out problems, instead of making a small payment for support service."
So... if there's a friday night problem, you'd prefer staff to just be hanging, chillin' at home, and then when pressed, say "we created a ticket with redhat, nothing more to do!" ?
Critical breakages should keep people working until stuff is fixed. If you want to also pay a bit extra and involve external support people, that's fine, but it's not a magic bullet that just 'fixes' everything. You've now got to account for time to manage working with external support staff, making sure you can get them in to the affected systems, etc.
That's a false comparison.
If a machine is kernel dumping, and Red Hat is trying to diagnose and fix the machines, nobody said your sysadmins will automatically just go home.
Instead, they can use their time more productively: making a secondary/workaround, or figuring out ways to re-architect your solution around the failure points.
CentOS is a repackaged RHEL, with all the Red Hat IP (logos, artwork, etc.) stripped out. So it's almost the same distribution.
As such, the only advantage RHEL probably offers besides support is slightly faster updates.
And actually being able to call Red Hat can be very useful once in a while when you are having a really bad day.
Given that the OP said "other than support", I'll assume you mean they're actually nice and supportive when you call to complain about your day :)
Thanks! I somehow completely read over that part :) But my single experience with RedHat support was a real pleasure.
They'll prioritize fixing your bugs with their product management team, but that is basically support. You pay for RHEL for support, that is all it ever was for.
The code is 100% open source.
Nicer default wallpaper
The main benefit is support. Both in package updates and longevity.
Also, you may want things like their software collections (which have had a hard time making it into CentOS).
Cost can be a little much, especially if you don't need support other than package updates.
Also, Workstation is the minimum if you are going to do any development, as Desktop doesn't come with dev tools as far as I remember.
I have loaded CentOS 7 onto a laptop and I can't get into it. It's asking for a localhost login and password, and when I put them in it just repeats.
Post the link to the ISO you installed from.
Will I still be able to run commands like service mysqld start, etc?
Can't get into CentOS 7.
Am I missing something? This was submitted two hours ago, yet the images (CentOS-6.5-x86_64-bin-DVD1.iso) are the same ones I had downloaded on 6/20/2014. The release notes list the new image names, but they don't appear to have been pushed out to the mirrors.
Check here: http://isoredirect.centos.org/centos/7/isos/x86_64/
There's quite a few mirrors with images. However, they do recommend torrenting the images:
http://lists.centos.org/pipermail/centos-announce/2014-July/...