When pkl was announced, I immediately had a project in mind. I had been brainstorming ways to handle a Kubernetes templating task, and I realized pkl might be a suitable hammer.
The problem:
My company has a B2B product that currently uses a single-tenant environment setup where each customer gets their own API, database, subdomain, etc. If this were a multi-tenant setup, we would need just one server and its supporting resources; instead, we have to provision a full set for every single customer, and managing that many resources in K8s can quickly become untenable.
The solution:
Write one template, and make it easy to generate and apply changes to all environments. Build any supporting tooling (maybe bash scripts?) to simplify the process and make it easy to run. The right solution should output each tenant's resources into a separate folder so that we can easily add or remove tenants as necessary, and also see what changes a template edit would make to an individual tenant. While it's somewhat manual, and there are a million enhancements to build out long-term, diff, apply, and delete are the core functions I need in the short term.
kubectl diff -f /tenant-b
kubectl apply -f /tenant-b
kubectl delete -f /tenant-b
pkl
What is pkl? Apple describes pkl as "A configuration as code language with rich validation and tooling." In short, you can write configuration (as code) files using pkl and export them to various formats, like YAML. While it may not be the best solution to my problem, it is a solution, and I'm interested in trying out new tools and languages, so let's get to it!
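To make that concrete, here's a tiny sketch of my own (hello.pkl and its values are made up for illustration, not taken from pkl's docs): plain pkl properties rendered as YAML, using the same YamlRenderer we'll lean on later.
// hello.pkl: a minimal, made-up example of pkl-to-YAML rendering
greeting = "hello from pkl"
ports = List(443, 4443)

output {
  // a module's output value defaults to the module itself;
  // the renderer controls the format it is written in
  renderer = new YamlRenderer {}
}
Running pkl eval hello.pkl should print those two properties back as a small YAML document.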
Disclaimer: This is definitely not the best solution, it’s just my first functional version. I’d love to hear other solutions or improvements you would make.
Round 1 — The bare minimum
To get started, I just wanted to write some pkl that would let me specify a tenant and generate some useful YAML. I figured I'd start by generating a service or an ingress definition and go from there. I read the tenant name from an environment variable and print out a simple ClusterIP service. I did some trial and error, and relied heavily on the pkl language reference. I also specified that the output should use a YamlRenderer so that it always comes out in a format I can use with K8s.
tenant_name = read("env:TENANT")

service {
  apiVersion = "v1"
  kind = "Service"
  metadata {
    name = "singletenant-api-" + tenant_name
  }
  spec {
    ports = new Listing {
      new {
        ["port"] = 4443
        ["name"] = "https"
        ["targetPort"] = 443
      }
    }
    selector {
      ["app"] = "singletenant-api-" + tenant_name
    }
    type = "ClusterIP"
  }
}

output {
  value = service
  renderer = new YamlRenderer {}
}
This works great for one service, and now a little bash-fu can generate a YAML file for each of our tenants.
for tenant in $(cat tenants.txt); do
  TENANT=$tenant pkl eval round1.pkl > ./output1/$tenant.yaml;
done
It would be easy enough to do this for a bunch of different resources if you wanted to duplicate it…
for tenant in $(cat tenants.txt); do
  mkdir ./output1/$tenant
  TENANT=$tenant pkl eval service.pkl > ./output1/$tenant/service.yaml;
  TENANT=$tenant pkl eval depl.pkl > ./output1/$tenant/depl.yaml;
  ...
done
This works in theory, but it's really not great. I'd rather not need to use bash to make this work. Let's take a stab at loops for the next iteration.
Round 2 — Looping
At this point I started relying pretty heavily on the language reference, particularly the sections on generators and module output.
My round 2 used a static list of tenants defined in the file, looped over it to generate a service per tenant, and then looped again to write each service out to its own folder.
tenant_names = List("tenant1", "tenant2")

serviceObjects {
  for (_name in tenant_names) {
    [_name] {
      apiVersion = "v1"
      kind = "Service"
      metadata {
        name = "singletenant-api-" + _name
      }
      spec {
        ports = new Listing {
          new {
            ["port"] = 4443
            ["name"] = "https"
            ["targetPort"] = 443
          }
        }
        selector {
          ["app"] = "singletenant-api-" + _name
        }
        type = "ClusterIP"
      }
    }
  }
}

output {
  files {
    for (_name in tenant_names) {
      ["./" + _name + "/service.yaml"] {
        value = serviceObjects[_name]
        renderer = new YamlRenderer {}
      }
    }
  }
}
Now we can specify a list of tenants in the pkl file, and with a single pkl command generate all of the YAML files.
$ pkl eval -m output2 round2.pkl
output2/tenant1/service.yaml
output2/tenant2/service.yaml
$ cat output2/tenant1/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: singletenant-api-tenant1
spec:
  ports:
  - port: 4443
    name: https
    targetPort: 443
  selector:
    app: singletenant-api-tenant1
  type: ClusterIP
Round 3 — pkl-k8s
It was around this time that I found apple/pkl-k8s and apple/pkl-k8s-examples. Apple has published pkl data types for K8s objects, along with example code that uses them. I imported the K8sResource and Service templates and used those to generate a new service. One of the awesome things about this is that, theoretically, our YAML output should always conform to the K8s spec. Plus, there's no need to spend time trying to figure out the format; we can look at the source code or the docs to see exactly what types and fields are expected!
K8s Service Object Spec — These are the docs generated from the source code. Or if you prefer, go find the service source itself and take a look at how everything works.
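One setup note before the code: you'll see two commented-out @k8s imports below. I believe those shorter paths work if you declare pkl-k8s as a dependency in a PklProject file and run pkl project resolve first; a sketch of what that file might look like (the "k8s" dependency name is my choice):
// PklProject (sketch): declares pkl-k8s so modules can import "@k8s/..."
amends "pkl:Project"

dependencies {
  ["k8s"] { uri = "package://pkg.pkl-lang.org/pkl-k8s/k8s@1.0.1" }
}
For this post I'll stick with the full package:// URIs so each file stands alone.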
// import "@k8s/K8sResource.pkl"
// import "@k8s/api/core/v1/Service.pkl"import "package://pkg.pkl-lang.org/pkl-k8s/k8s@1.0.1#/K8sResource.pkl"
import "package://pkg.pkl-lang.org/pkl-k8s/k8s@1.0.1#/api/core/v1/Service.pkl"
// Use -p tenant=NAME to specify prop
tenant_name = read("prop:tenant")
resources: Listing<K8sResource> = new {
new Service {
metadata {
name = "singletenant-api-" + tenant_name
}
spec {
ports {
new {
port = 4443
targetPort = 443
name = "https"
}
}
selector {
["app"] = "singletenant-api-" + tenant_name
}
type = "ClusterIP"
}
}
}
output {
value = resources
renderer = (K8sResource.output.renderer as YamlRenderer) {
isStream = true
}
}
% pkl eval round3.pkl -p tenant=tenant1
apiVersion: v1
kind: Service
metadata:
  name: singletenant-api-tenant1
spec:
  ports:
  - port: 4443
    name: https
    targetPort: 443
  type: ClusterIP
  selector:
    app: singletenant-api-tenant1
You may have noticed I swapped the tenant_name value to read from a prop instead of an environment variable, which makes the bash loop generation feel a little more reliable. (The isStream = true setting tells the YamlRenderer to emit a stream of YAML documents separated by ---, which is handy when a Listing holds multiple resources.) There are a few other resources that pkl can read from.
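As a quick illustration, here are the schemes used in this post plus the nullable variant (a sketch; OPTIONAL_VAR is a made-up name):
from_env  = read("env:TENANT")        // environment variable, as in round 1
from_prop = read("prop:tenant")       // external property, set with -p tenant=NAME
maybe     = read?("env:OPTIONAL_VAR") // read? evaluates to null instead of failing when missing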
for t in $(cat tenants.txt); do
  pkl eval round3.pkl -p tenant=$t > output3/$t.yaml;
done
Round 4 — Looping over K8sResources
I spent a long time trying to figure out how to loop over a typed Listing that matches K8sResource, but eventually I found a reasonable way to accomplish it. I'm adding the tenant name to the annotations of my service and using that to determine the file path of the output file. This lets me pass in a list of tenants and output one directory per tenant with all of the necessary YAML files.
import "package://pkg.pkl-lang.org/pkl-k8s/k8s@1.0.1#/K8sResource.pkl"
import "package://pkg.pkl-lang.org/pkl-k8s/k8s@1.0.1#/api/core/v1/Service.pkl"tenant_names = List("tenant1", "tenant2")
services: Listing<K8sResource> = new {
for (_name in tenant_names) {
new Service {
metadata {
name = "singletenant-api-" + _name
annotations {
["generated-by"] = "pkl"
["name"] = _name
}
}
spec {
ports {
new {
port = 4443
targetPort = 443
name = "https"
}
}
selector {
["app"] = "singletenant-api-" + _name
}
type = "ClusterIP"
}
}
}
}
output {
files {
for (s in services) {
["./" + s.metadata.annotations["name"] + "/service.yaml"] {
value = s
renderer = new YamlRenderer {}
}
}
}
}
$ pkl eval round4.pkl -m output4
output4/tenant1/service.yaml
output4/tenant2/service.yaml
Putting a bow on it… for now
Now that we've got a reasonable handle on how this works, we need to add a couple more pieces: for my use case, an ingress resource and a deployment. I need to make sure they all get written to the appropriate places, and test the flow to see if the essential pieces (apply, diff, delete) work like I want.
Below is the code to generate everything. The deployment and ingress objects were significantly more complicated, but using the docs and the source code itself I was able to step through what data types were expected and figure out how to pass the right objects and types. I also went ahead and included a mapping from environment variable names to their secrets so that I can easily inject K8s secrets into environment variables within the deployments.
import "package://pkg.pkl-lang.org/pkl-k8s/k8s@1.0.1#/K8sResource.pkl"
import "package://pkg.pkl-lang.org/pkl-k8s/k8s@1.0.1#/api/apps/v1/Deployment.pkl"
import "package://pkg.pkl-lang.org/pkl-k8s/k8s@1.0.1#/api/core/v1/Service.pkl"
import "package://pkg.pkl-lang.org/pkl-k8s/k8s@1.0.1#/api/networking/v1/Ingress.pkl"
import "package://pkg.pkl-lang.org/pkl-k8s/k8s@1.0.1#/api/core/v1/EnvVar.pkl"/* Specify this with the -m ./output
path_prefix = "./output/"
*/
tenants = List(
"tenant-a",
"tenant-b",
"tenant-c",
"tenant-d"
)
image_tag = "1.0.0"
pod {
requests {
memory = "600Mi"
cpu = "500m"
}
}
varAndSecret = Map(
"DB_USER", "single-tenant-api",
"DB_PASS", "single-tenant-api",
"DNS_SERVER", "shared-config"
)
Deployments: Listing<K8sResource> = new {
  for (_name in tenants) {
    new Deployment {
      metadata {
        name = "single-tenant-api-" + _name
        annotations {
          ["generated-by"] = "pkl"
          ["tenant-name"] = _name
        }
      }
      spec {
        replicas = 2
        selector {
          matchLabels {
            ["app"] = "single-tenant-api-" + _name
          }
        }
        strategy {
          rollingUpdate {
            maxSurge = 1
            maxUnavailable = 1
          }
          type = "RollingUpdate"
        }
        template {
          metadata {
            creationTimestamp = null
            labels {
              ["app"] = "single-tenant-api-" + _name
            }
            name = "single-tenant-api-" + _name
          }
          spec {
            containers {
              new {
                image = "index.docker.io/account/api:" + image_tag
                name = "single-tenant-api-" + _name
                imagePullPolicy = "Always"
                resources {
                  requests {
                    ["cpu"] = pod.requests.cpu
                    ["memory"] = pod.requests.memory
                  }
                  limits {
                    ["memory"] = pod.requests.memory
                  }
                }
                terminationMessagePath = "/dev/termination-log"
                terminationMessagePolicy = "File"
                envFrom {
                  new {
                    configMapRef {
                      name = "single-tenant-api-configmap"
                    }
                  }
                }
                env = new Listing<EnvVar> {
                  new {
                    name = "TENANT"
                    value = _name
                  }
                  for (_var, _secret in varAndSecret) {
                    new {
                      name = _var
                      valueFrom {
                        secretKeyRef {
                          name = _secret
                          key = _var
                        }
                      }
                    }
                  }
                }
              }
            }
            imagePullSecrets {
              new {
                name = "imagePullKey"
              }
            }
            restartPolicy = "Always"
            schedulerName = "default-scheduler"
            securityContext {}
            terminationGracePeriodSeconds = 30
          }
        }
      }
    }
  }
}
Services: Listing<K8sResource> = new {
  for (_name in tenants) {
    new Service {
      metadata {
        name = "single-tenant-api-" + _name
        annotations {
          ["generated-by"] = "pkl"
          ["tenant-name"] = _name
        }
      }
      spec {
        ports {
          new {
            port = 4443
            targetPort = 443
            name = "https"
          }
        }
        selector {
          ["app"] = "single-tenant-api-" + _name
        }
        type = "ClusterIP"
      }
    }
  }
}
Ingresses: Listing<K8sResource> = new {
  for (_name in tenants) {
    new Ingress {
      metadata {
        name = "single-tenant-api-" + _name
        annotations {
          ["cert-manager.io/issuer"] = "letsencrypt"
          ["generated-by"] = "pkl"
          ["tenant-name"] = _name
        }
      }
      spec = new {
        ingressClassName = "nginx"
        tls = new Listing<Ingress.IngressTLS> {
          new {
            secretName = _name + "-api-singletenant-com"
            hosts = new Listing<String> {
              _name + ".api.singletenant.com"
            }
          }
        }
        rules = new Listing<Ingress.IngressRule> {
          new {
            host = _name + ".api.singletenant.com"
            http = new Ingress.HTTPIngressRuleValue {
              paths = new Listing<Ingress.HTTPIngressPath> {
                new {
                  path = "/"
                  pathType = "Prefix"
                  backend = new Ingress.IngressBackend {
                    service = new Ingress.IngressServiceBackend {
                      port = new Ingress.ServiceBackendPort {
                        number = 4443
                      }
                      name = "single-tenant-api-" + _name
                    }
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}
// Customize the YamlRenderer configuration here
CustomRenderer = new YamlRenderer {}

output {
  files {
    for (deployment in Deployments) {
      ["./" + deployment.metadata.annotations["tenant-name"] + "/deployment.yaml"] {
        value = deployment
        renderer = CustomRenderer
      }
    }
    // Generate Service yaml files
    for (service in Services) {
      ["./" + service.metadata.annotations["tenant-name"] + "/service.yaml"] {
        value = service
        renderer = CustomRenderer
      }
    }
    for (ingress in Ingresses) {
      ["./" + ingress.metadata.annotations["tenant-name"] + "/ingress.yaml"] {
        value = ingress
        renderer = CustomRenderer
      }
    }
  }
}
Having access to the docs was invaluable here. Figuring out the syntax for creating new objects within the Ingress took a while, but I have a much better understanding of how to approach it now.
Take a look at the Ingress definition in the pkl-k8s source and its generated docs, and trace through the types to get a better understanding. If you want further customizations and aren't sure what is acceptable, search the docs for an object and look at all of its available properties.
Now let’s verify that apply, diff, and delete are working as expected. Here’s some sample output of my final pkl code in action.
% pkl eval round5.pkl -m round5
round5/tenant-a/deployment.yaml
round5/tenant-b/deployment.yaml
round5/tenant-c/deployment.yaml
round5/tenant-a/service.yaml
round5/tenant-b/service.yaml
round5/tenant-c/service.yaml
round5/tenant-a/ingress.yaml
round5/tenant-b/ingress.yaml
round5/tenant-c/ingress.yaml

% kubectl apply -f round5/tenant-b
deployment.apps/single-tenant-api-tenant-b created
ingress.networking.k8s.io/single-tenant-api-tenant-b created
service/single-tenant-api-tenant-b created
Now my team pushed a new release and bumped the image version to 1.0.1. I’ll update the image_tag in my code and use diff to see what happens.
% pkl eval round5.pkl -m round5
round5/tenant-a/deployment.yaml
round5/tenant-b/deployment.yaml
round5/tenant-c/deployment.yaml
round5/tenant-a/service.yaml
round5/tenant-b/service.yaml
round5/tenant-c/service.yaml
round5/tenant-a/ingress.yaml
round5/tenant-b/ingress.yaml
round5/tenant-c/ingress.yaml

% kubectl diff -f round5/tenant-b
        envFrom:
        - configMapRef:
            name: single-tenant-api-configmap
-       image: index.docker.io/account/api:1.0.0
+       image: index.docker.io/account/api:1.0.1
        imagePullPolicy: Always
        name: single-tenant-api-tenant-b
        resources:
And of course, whenever you need to clean up one set of infrastructure you can do it like so…
% kubectl delete -f round5/tenant-b
deployment.apps "single-tenant-api-tenant-b" deleted
ingress.networking.k8s.io "single-tenant-api-tenant-b" deleted
service "single-tenant-api-tenant-b" deletedFinal Thoughts
There are a couple more changes I'll be making internally, and some other things I'd like to figure out, but this is "good enough" for now. First off, I plan to pull the list of tenants dynamically from a file instead of hardcoding it. For now, I'll do that in bash using curl, but eventually I hope to find a clean way to do it within pkl.
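That said, pkl can read local files as resources, so something like the following might already do the trick. A rough, untested sketch, assuming tenants.txt sits next to the module and holds one tenant name per line:
// untested: load tenant names from a local file instead of hardcoding the List
tenants = read("tenants.txt").text.trim().split("\n")
If that pans out, the bash side only needs to refresh the file: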
$ curl https://some-api.com/api/tenants -o ./tenants.txt
$ pkl eval project.pkl
I'm also interested in automating the deployment of new clients using GitHub Actions. At the moment, I haven't seen any GitHub Actions for pkl from Apple or anyone else. I'm curious to see what tools pop up in the coming weeks, and for anyone looking for an impactful new project, you might want to take a shot at it.
You can find all of the code used in this blog in my GitHub repository.
I’d love to hear from you about how you are solving similar problems. Does pkl seem like a good idea? Do you have a better solution using something else?