Kubernetes on Bare Metal (joshrendek.com)
Interesting, I always thought "bare metal" was a synonym for an actual hardware server, but the author mixes the two:
> Bare metal for this conversation means a regular VM/VPS provider or a regular private provider like Proxmox with no special services - or actual hardware.
Is this common usage nowadays?
The article itself is of course really nice; it shows that the "Kubernetes is hard to set up" theme is not always right.
I agree with you. Bare metal to me always implies, well, actual physical servers made out of metal :) A better title might have been "Kubernetes Without a Cloud" or "Kubernetes on-premise", since what the author is trying to recreate is all the ancillary services that are there for the taking when you run in a cloud provider: load balancing, storage, cert management, firewall rules, all fronted by easy-to-consume APIs. Those are the actual hard part of setting up a functional k8s cluster.
But he talks about VPSes, and that's not on-premise. This article mangles its definitions.
Calling VMs/VPS bare metal is unusual from what I see.
There is a sort of "in between" use of the term for a VPS that doesn't share the underlying real server with any other VPS instances.
Like these: https://www.scaleway.com/baremetal-cloud-servers/
That's pushing it a little for me. Certainly calling a regular VPS bare metal is silly.
The page you link claims that there is no hypervisor running, so it's not a VM, but "true" Bare Metal.
I wonder if it's a single container, running your OS on top of theirs then. I don't see how their API would work sans both a VM and containers. Unless maybe they are supplying a fixed set of tweaked distributions.
I don't know their specific setup, but bare metal providers generally either netboot your image, or netboot something that copies your image to local disk and then reboots into it. That can be automated quite far; the biggest issue is having a hardware platform that can't be permanently compromised by the running OS (e.g. on a typical x86 system, the OS could reflash the BIOS or the firmware of other components).
I thought bare metal meant dedicated hardware in the cloud. E.g. Vultr (https://www.vultr.com/pricing/baremetal/) distinguishes VPS from bare metal by whether you get access to an entire physical server without neighbours. Not entirely sure if this is standard nomenclature though.
When I read bare metal, I thought we were talking about some sort of no-kernel situation, or at least parts of the stack living in-kernel (which would have surprised me), certainly not running in userland, much less in a VM, and even much less on a VPS.
The title is ridiculous.
In modern "cloud" terminology bare metal means "no virtualisation" AFAIK.
I think he means "native", as in native to the host (regardless if it's virtualized or not).
I also was confused about the title - expected to see an article about spinning up a cluster in Packet.net
I used bare metal to cover both bare VPS providers and regular hardware -- basically anything without 'cloudy' options (i.e. VPCs, ELBs, etc.)
I think "self-hosted" would be the right term.
Bare metal is a valid title for this article because his instructions wouldn't change if it actually was bare metal.
I don't disagree with your statement but I would have expected an article with "bare metal" in the title to cover some of the unique issues associated with bare metal. Like measuring, planning, and managing capacity.
In my (perhaps unique) experience, measuring, planning, and managing capacity is more of a problem when using VPSes. Bare metal has so much more capacity per node than small VPSes that the minimum 3-node cluster will handle a few hundred small sites before hitting any sort of limit. So why bother measuring. :)
In my experience, adding more VMs when capacity is needed is trivial to automate on a platform like AWS. With real physical machines it takes advanced planning and actual physical work to deploy more machines. Even with a managed hosting provider there is work to do to automate the provisioning.
Adding a new bare metal machine or a new VM to the setup in the original article is exactly the same amount of work, automated or not.
How so? If I need a new VM I just click (or make an API call) and it's almost instantly ready to go. If I want a new physical server I have to order it, get it delivered to my cage, and then have someone rack and wire it before I can even start provisioning. There is a lot more involved in the second case.
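To make that concrete, the "API call" case is roughly a one-liner. This is only a sketch; the AMI ID, key name, and subnet here are placeholders you'd swap for your own resources:

    # launch one small instance; the IDs below are placeholders, not real resources
    aws ec2 run-instances \
      --image-id ami-xxxxxxxx \
      --instance-type t3.small \
      --count 1 \
      --key-name my-key \
      --subnet-id subnet-xxxxxxxx

There is no physical-server equivalent of that command until the machine is already racked, wired, and wired into some provisioning system.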
Slightly off topic, but I feel like calling an internet server running an OS "bare-metal" is a disgrace to what "bare-metal" originally and historically referred to, i.e. computer hardware without any OS. Maybe IaaS providers are running out of creative names that they are now polluting other techs' namespace?
To me, bare metal has meant running on an OS that’s directly on the hardware, instead of a virtual machine. However, I can see your point about what it could mean (i.e. the software itself is compiled to directly run on the hardware with no OS), but I’m struggling to think of a time when anything worked like that. You have to go really, really far back in history to apply that definition to general purpose computing systems. I’m left thinking that it would only really apply to embedded systems and hardware controllers, and many of those now even have some kind of micro-os that runs on them.
Not virtualized doesn't mean bare metal. I think the term would make more sense to you if you worked through "Linux from Scratch." There are things between embedded and running a full Linux OS.
What would those things be? Maybe it's a continuum but concepts are not continuous. Surely there are various degrees of OS but if it's anything that manages the "bare" hardware it's something like an OS. Arguably you could also call Xen Dom0 an operating system.
The younger you are, the higher up you operate in the pile of layers that modern computing has become, with little to no understanding of what's beneath.
That's a little unfair. The article shows options to run on "real" bare-metal hardware, but the same tools will work on cloud instances as well.
While real hardware has metal, that's not what bare metal means.
From your profile I see that you are interested in embedded systems, so that probably explains why you are upset. After almost two decades of virtualization I think "bare metal" has become a common term for server computing without VM. I am old but had no problems understanding the term as it was meant. Meanings of words vary with context and change over time.
I used Kubespray on Container Linux with Calico. Maybe I got lucky but I had it working perfectly almost first try. I think I needed to handle like one error in the entire Ansible playbook.
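Roughly, the flow I followed was the standard Kubespray one. This is a from-memory sketch: the node IPs are placeholders, and exact paths, inventory file names, and flags can differ between Kubespray releases, so check the repo's README for your version:

    git clone https://github.com/kubernetes-sigs/kubespray.git
    cd kubespray
    pip install -r requirements.txt            # Ansible and other dependencies
    cp -rfp inventory/sample inventory/mycluster
    # generate an inventory from a list of node IPs (placeholders here)
    declare -a IPS=(10.0.0.11 10.0.0.12 10.0.0.13)
    CONFIG_FILE=inventory/mycluster/hosts.ini python3 contrib/inventory_builder/inventory.py ${IPS[@]}
    # Calico is the default kube_network_plugin in the sample group_vars
    ansible-playbook -i inventory/mycluster/hosts.ini --become --become-user=root cluster.yml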
Agreed, I've set up about 6 different clusters using kubespray and never had any issues. Kubeadm is nice for dev clusters but there are still a bunch of hoops you have to jump through to set up an HA cluster using kubeadm.
Thanks for this recommendation! I just checked out the repo and this looks really interesting; I'll have to try it out.
What's the best approach for on-premise k8s clusters?