helm.md


Let's talk about Helm

I understand that Helm is supposed to be this helpful package manager for Kubernetes, kind of like what apt-get is to Ubuntu. But let me be clear: the analogy ends at a superficial level. Helm is far from a perfect tool, and it comes with issues that make it a serious challenge for Kubernetes management.

Let's start with the basic problem: complexity. It's ironic, because Helm is supposed to simplify deploying applications on Kubernetes. More often than not, though, Helm ends up complicating things. Helm charts are undeniably complex to create and manage. You might argue that it gets easier once you master it, but shouldn't a tool whose whole point is simplification be intuitive enough not to demand a steep learning curve in the first place?
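To make the complexity concrete, here is a minimal sketch of the kind of templated Deployment you find in a typical chart; the chart name, helper templates, and values are invented for illustration:

```yaml
# templates/deployment.yaml (hypothetical chart excerpt)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "myapp.fullname" . }}
  labels:
    {{- include "myapp.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          {{- with .Values.resources }}
          resources:
            {{- toYaml . | nindent 12 }}
          {{- end }}
```

To know what will actually be applied, you have to mentally execute the template engine against values.yaml, the helpers in _helpers.tpl, and whatever overrides were passed at install time. That is a lot of indirection for what ends up as one Deployment.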

Then there's debugging, which is unnecessarily tedious with Helm. When a deployment fails, Helm doesn't make it easy to figure out what went wrong; the error messages are often cryptic and unhelpful. Worse, a failed install or upgrade can leave resources dangling in your cluster.

Helm also raises a lot of questions about security. In Helm v2, Tiller was a major concern: it was the in-cluster component of Helm, with access to manage resources in all namespaces, which made it a dangerous attack vector. Helm v3 removed Tiller, but concerns remain about how Helm interacts with Kubernetes and the permissions it needs.
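For context, the quick-start setup that most Helm v2 tutorials copied bound Tiller's service account to cluster-admin, roughly like this (a sketch of the common pattern, not any specific setup):

```yaml
# ServiceAccount for Tiller plus the cluster-admin binding
# that Helm v2 quick-starts commonly used
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
```

A single always-on, in-cluster component with cluster-admin is exactly the kind of attack surface you don't want to carry around.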

And let's not forget release upgrades. Helm doesn't provide a clear path for scenarios where you want to roll out changes gradually, monitor for issues, and potentially roll back. It's mostly an all-or-nothing upgrade.

The community-supported Helm charts are another horrendous pain in the ass! The concept is admirable in theory: a community-driven initiative to develop and maintain ready-to-use Helm charts. In practice? It's another story. The quality and maintenance of these charts are inconsistent, to say the least. Some charts are managed and updated meticulously; others are practically abandoned, leaving a patchwork landscape of reliability and trustworthiness.

Many community charts don't keep pace with the rapid evolution of Kubernetes and the applications they're meant to deploy. They quickly fall out of date and, as a result, can cause a great deal of frustration when they don't work as expected. They're supposed to be a convenience, but they can be anything but.

There's also a significant problem with community charts regarding customization. Charts are often created with a specific use case in mind and may not cover all possible configurations. If your use case differs even slightly, you may have to fork the chart and maintain your own version. And you'd better be prepared to keep it updated, because that's a responsibility you've just taken on.

Then there's the question of packaging and distribution. Helm charts are packaged as tgz archives and distributed via a chart repository, which is essentially an HTTP server hosting the archives alongside an index. This makes them difficult to inspect before installing, and it means you have to set up and maintain that repository if you want to distribute your own charts.
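For reference, a chart repository is nothing more than an HTTP server serving the .tgz archives plus an index.yaml along these lines (entries invented for illustration):

```yaml
# index.yaml served at the root of the chart repository
apiVersion: v1
entries:
  myapp:
    - name: myapp
      version: 1.2.3
      appVersion: "2.0.1"
      created: "2023-05-01T12:00:00Z"
      digest: 6b86b273ff34fce19d6b804eff5a3f57...   # sha256 of the archive (truncated here)
      urls:
        - https://charts.example.com/myapp-1.2.3.tgz
generated: "2023-05-01T12:00:00Z"
```

Since the payload is an opaque archive, you can't skim the rendered manifests in a pull request; you have to fetch and unpack the chart to see what it will actually do.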

The sheer number of community-supported charts is overwhelming. It's challenging to sift through them and find a chart that suits your needs. And once you've found a potential match, there's the challenge of verifying the chart. Is it reliable? Is it secure? Has it been adequately maintained? These are crucial questions, and the answers aren't always clear.

Helm has found favor among a wide variety of users, some of whom don't have an extensive background in Kubernetes or cloud-native infrastructure. Its promise of simplicity, along with an abundant collection of shitty community charts, is very appealing to those just starting their journey or looking for a quick hack to deploy applications without getting into the complexities of Kubernetes.

However, this 'black box' convenience also creates a knowledge gap. Without a deep understanding of Kubernetes' inner workings, users may find themselves at a disadvantage when something goes wrong or when their needs extend beyond the capabilities of an existing Helm chart. In essence, they become reliant on the abstraction layer provided by Helm, which is not inherently wrong, but it becomes a limiting factor when deeper customization or troubleshooting is required.

This isn't to say that all Helm users are less technically savvy; indeed, many highly skilled engineers use Helm successfully. But it does point to a trend where Helm, with its simplifying abstraction layer, attracts users who want to deploy applications on Kubernetes without necessarily understanding Kubernetes deeply. While this approach can work in some scenarios, it often leads to challenges when things do not work as expected or when more complex Kubernetes configurations are required.

A better way forward (possibly)

Let's shift our focus to Kustomize and GitOps, and discuss why this combination is a better solution for managing Kubernetes deployments than Helm.

Kustomize is built into kubectl (kubectl apply -k), so you work with the same language, constructs, and tooling you already use for Kubernetes. You don't need to learn a templating language or a new paradigm, unlike with Helm charts. That simplifies things and reduces the potential for mistakes and misunderstandings.

Kustomize takes a different approach to managing environments. Instead of parameterizing a set of templates, Kustomize applies transformations to a base set of Kubernetes manifests. This makes it much easier to reason about what the final applied configuration will look like, unlike Helm where you can sometimes be left guessing.
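As a minimal sketch (directory and resource names are illustrative), a base plus a production overlay looks like this:

```yaml
# base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
---
# overlays/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
namespace: production
patches:
  # strategic merge patch: bump replicas for production only
  - patch: |-
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: myapp
      spec:
        replicas: 5
```

Everything here is plain Kubernetes YAML, and running kustomize build overlays/production (or kubectl apply -k) shows exactly what will hit the cluster.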

Kustomize fits perfectly into the GitOps paradigm. With GitOps, you manage your infrastructure and deployments through Git. Your desired state is described declaratively using Kubernetes manifests, and changes are automatically applied to your cluster. This fits Kustomize like a glove, as its whole model is built around declarative configuration.
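To illustrate, here is roughly what that looks like with Flux as the GitOps operator (the repository URL, paths, and names are placeholders, and other operators such as Argo CD work similarly):

```yaml
# Flux watches a Git repository...
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: platform
  namespace: flux-system
spec:
  interval: 1m
  url: https://example.com/our-org/platform.git
  ref:
    branch: main
---
# ...and continuously reconciles a Kustomize overlay from it
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: myapp-production
  namespace: flux-system
spec:
  interval: 10m
  path: ./overlays/production
  prune: true
  sourceRef:
    kind: GitRepository
    name: platform
```

Once this is in the cluster, a merge to main is the deployment; nothing gets applied by hand.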

GitOps provides a clear and auditable history of all changes. Every modification to your system is stored in Git, providing a comprehensive audit trail. Moreover, the code review process in Git can be used to enforce quality control, ensuring that only approved changes are applied.

GitOps and Kustomize can provide increased reliability and faster recovery. Since your entire system state is stored in Git, you can quickly restore or recreate your clusters. Also, because changes are applied automatically, you can respond more quickly to incidents.

Kustomize supports custom resource definitions (CRDs) out of the box. This is a significant benefit over Helm, which often struggles with CRDs: Helm v3, for instance, installs whatever sits in a chart's crds/ directory once and never upgrades or deletes it. Kustomize was built to work natively with Kubernetes API objects, including custom resources.
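Because Kustomize treats custom resources like any other Kubernetes object, they can sit next to built-ins and be patched the same way; a sketch using a hypothetical Certificate kind:

```yaml
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - certificates-crd.yaml    # the CustomResourceDefinition itself
  - myapp-certificate.yaml   # an instance of the custom resource
patches:
  # a JSON patch applied to the custom resource,
  # exactly as you would patch a built-in kind
  - target:
      kind: Certificate      # hypothetical custom kind
      name: myapp-tls
    patch: |-
      - op: replace
        path: /spec/duration
        value: 2160h
```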

Kustomize and GitOps provide a more intuitive, flexible, and robust system for managing Kubernetes deployments than Helm does. Together they remove a lot of the complexity and uncertainty Helm introduces, and they leverage the power of Git to provide a version-controlled, auditable, and automated system.

Back to the basics (possibly)

Using plain old Kubernetes manifest files, managed directly in Git. It's a return to the basics, but for good reason. Perhaps I'm just old, but this tried-and-true method leaves little room for mistakes once good patterns have been established.

First off, working with raw Kubernetes manifest files means you're working directly with the Kubernetes API. There's no abstraction layer to confuse things or to maintain. If you understand the Kubernetes API, you understand your deployment. You're not fighting with a chart's quirks, and there's no middleman obscuring what's happening under the hood. In the event of an error, you know exactly what's wrong without having to parse through any cryptic error messages.
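To be concrete, a raw manifest is exactly what the API server sees, with no rendering step in between; a minimal example with illustrative names:

```yaml
# manifests/myapp/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:1.4.2
          ports:
            - containerPort: 8080
```

When this fails, the error points at a line in this file, not at a template three layers of indirection away.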

Directly using Kubernetes manifests also gives you more control and flexibility. You're not confined to a pre-defined chart or structure. You can arrange and manage your manifests however you like, tailoring them to the specific needs of your project.

Keeping these manifest files in Git brings the benefits of version control. Every change is tracked, providing an audit trail and making it easier to find and fix problems. It also means you can leverage all the tools and workflows Git provides: branches, pull requests, and continuous integration/continuous deployment (CI/CD) pipelines.
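As one hypothetical example of such a pipeline (GitHub Actions syntax; the kubeconform validator and the paths are assumptions, not a prescription), every pull request can schema-check the manifests before they merge:

```yaml
# .github/workflows/validate-manifests.yaml (illustrative)
name: validate-manifests
on:
  pull_request:
    paths:
      - "manifests/**"
jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # kubeconform validates manifests against Kubernetes JSON schemas offline
      - name: Schema-check all manifests
        run: |
          go install github.com/yannh/kubeconform/cmd/kubeconform@latest
          ~/go/bin/kubeconform -strict -summary manifests/
```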

Another advantage of Git is the code review process. With Helm charts, it's often difficult to review changes because you're not just reviewing the underlying Kubernetes resources, but also how the templating language is being used. With raw manifests in Git, reviews are more straightforward, which can lead to higher code quality and fewer mistakes.

Moreover, managing Kubernetes resources directly in Git is the foundational principle of GitOps. Your Git repository becomes the single source of truth for your infrastructure. Any changes to the repository are automatically applied to your infrastructure, providing a declarative and automated approach to deployment. This also makes it easier to recover from issues since you can roll back to any previous state stored in Git.

While using raw Kubernetes manifests with Git might seem like a step back in terms of complexity, it provides more control, transparency, and flexibility. It allows you to leverage all the powerful features of Git and fits perfectly with the GitOps paradigm. As with any tool or approach, it's not a one-size-fits-all solution, but for many, it's a more straightforward and effective way to manage Kubernetes deployments.

Conclusion

I understand that we've had our fair share of Helm usage here. It's water under the bridge at this point, a part of our history that has contributed to our learning curve. But let's be clear moving forward: Helm is off the table. No more Helm.

I'm not saying this out of spite or without reason. As much as it has served us in the past, Helm has shown its flaws over time. It's complicated, sometimes obscure, often frustrating, and more often than not, it's a black box of complexities that makes our lives harder. It's a beast that we've been wrestling with, and frankly, it's not worth the struggle.

This may sound harsh, but I view Helm as a malignancy in our tech stack. It's a cancer that has the potential to grow uncontrollably, obscuring our understanding, complicating our workflows, and making us dependent on a tool that more often obstructs than assists.

That's a situation I won't stand for. As someone ultimately responsible for the health and efficiency of our technical environment, I cannot in good conscience let this continue. I won't let Helm be the ticking time bomb in our infrastructure.

Now, let's talk about the future. There are other solutions out there – Kustomize, raw Kubernetes manifests managed in Git, GitOps, or other tools that offer simplicity and transparency. I'm open to discussing these alternatives or others and finding what fits our needs best. What matters is that we take a route that does not involve Helm.

Damon