Part 1: Multi-tenant Thanos with Jsonnet
Table of Contents
This guide has multiple parts; you're reading Part 1. I split it up to keep the blog post from getting way too huge to manage, and to not tire the poor reader. The rest of the parts will be added below as they come.
It all started with the “what if’s”
- what if I don’t need Helm to generate a shit ton of manifests and rack my brain trying to understand whatever the fuck the author of the Chart meant in _helpers.tpl?
- what if Kustomize is simply not enough to keep my code DRY, because I have to apply too much brain gymnastics to structure my code according to Kustomize’s requirements?
- what if THERE’S MORE THAN THAT?
Well, there was actually more: jsonnet
Check any basic tutorial on YouTube or any blog post on the net: nearly everyone keeps saying YAML is the language of data. Well, that’s exactly what jsonnet builds upon: generating data, be it YAML or JSON.
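To make that concrete, here’s a tiny made-up example (the file name and all field values are invented for illustration): jsonnet is a superset of JSON that adds variables, functions and imports, and it always evaluates down to plain JSON.

```jsonnet
// hello.jsonnet — a hypothetical example; render it with: jsonnet hello.jsonnet
local tenant = 'first-tenant';  // a variable keeps the data DRY

{
  apiVersion: 'v1',
  kind: 'ConfigMap',
  metadata: {
    name: tenant + '-config',       // string concatenation
    labels: { tenant: tenant },     // the local shadows the field name here
  },
  data: {
    greeting: 'hello from %s' % tenant,  // printf-style formatting
  },
}
```

The output is plain JSON, which is itself valid YAML; converting it to idiomatic Kubernetes YAML is covered below.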
Toolchain #
Like all carpenters out there, you’re gonna need some basic tooling to get this story working
Setting up your IDE (NeoVIM) #
DISCLAIMER(!) although jsonnet language support is obnoxiously crap at the current point in time, you’re not going to get 100% flexibility in any IDE other than NeoVIM. Just wanted to put this out there so that we smash all dreams from the beginning and stay real.
- install jsonnet CLI
- install jsonnet-bundler
- install jsonnet-language-server
- configure it in your NeoVIM CoC, you can see my Nix Flake for reference
Generating the YAML manifests #
By default, the jsonnet -J vendor -m manifests example.jsonnet command will dump JSON files in the ./manifests dir. The -J vendor flag instructs jsonnet where to find the dependencies we’ve pulled from the net.
Therefore, we’re going to need gojsontoyaml to convert the JSON files to Kubernetes YAML manifests, as seen in the build.sh script that everyone shoves down our throats as the default way to go.
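To illustrate, here’s a minimal sketch of what such a jsonnet entrypoint can look like, loosely following the upstream kube-thanos README example; the concrete values (namespace, version, Secret name) are placeholders, not recommendations, so check the upstream example for the exact fields your vendored version expects.

```jsonnet
// thanos-example.jsonnet — a sketch based on the upstream kube-thanos example;
// every concrete value below is a placeholder
local t = import 'kube-thanos/thanos.libsonnet';  // resolved via -J vendor

local commonConfig = {
  namespace: 'thanos',
  version: 'v0.34.0',
  image: 'quay.io/thanos/thanos:v0.34.0',
  objectStorageConfig: {
    name: 'thanos-objectstorage',  // Secret holding the bucket client config
    key: 'thanos.yaml',
  },
};

local s = t.store(commonConfig {
  replicas: 1,
  serviceMonitor: true,
});

// one output file per object: with -m, jsonnet writes manifests/thanos-store-<name>
{ ['thanos-store-' + name]: s[name] for name in std.objectFields(s) }
```

Render it with jsonnet -J vendor -m manifests thanos-example.jsonnet, then convert each emitted JSON file with gojsontoyaml.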
Bash scripts #
Personally, I despise using Bash scripts for this operation of converting JSON to YAML, simply because the above-mentioned shell script is fairly limited. Sure, one can argue it’s just a starting point or an example to get going, BUT you’ll notice the script craps out midway while rendering the kube-prometheus jsonnet files …
That’s why I’ve written a small Go helper program to tackle this operation in a much safer and faster way; see codeberg.org/dminca/tiny-programs/jsonnet-convert
Shell scripts have long been considered the glue code of architecture, but one thing this glue code will never contain is UNIT TESTS. Let’s admit it: nobody who’s written shell scripts has ever written a single unit test for them, because the only supported way (which came fairly late) was bats, and it’s horrendous.
Programming this in Go has many advantages
- everything is contained in that package
- unit tests sit close to the main code
- dependencies are tracked
- dependencies are cryptographically checksummed, so ++Security
Can shell scripts do that? NOPE.
Project structure #
Since we mentioned we’re going to use kluctl to tackle the deployment of all generated manifests, we need to structure the project in a way that doesn’t get messy enough to spiral out of control. Here’s what I propose:
.
├── kube-thanos-jsonnet
│ ├── manifests
│ ├── vendor
│ ├── first-tenant-example.jsonnet
│ ├── jsonnetfile.json
│ ├── jsonnetfile.lock.json
│ ├── kube-prometheus-example.jsonnet
│ └── thanos-example.jsonnet
├── vars
│ ├── common.yml
│ └── prd.yml
├── .gitignore
├── .kluctl.yml
├── deployment.yml
└── README.md
This is the most basic kluctl project structure you could come up with:
# .kluctl.yml
discriminator: "app.kubernetes.io/instance={{ target.name }}"
targets:
- name: prd
context: production-kubernetes-cluster-fqdn
args: {}
- name: stg
context: staging-kubernetes-cluster-fqdn
args: {}
# deployment.yml
vars:
- file: ./vars/common.yml
- file: ./vars/{{ target.name }}.yml
deployments:
- path: kube-thanos-base
git:
url: https://github.com/thanos-io/kube-thanos
ref: 6fedb045db2aeb0a4a880f77bfdfb5d4580f51f9
path: jsonnet
Use vars/common.yml and vars/prd.yml to define environment-specific variables for Kluctl to use.
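As a sketch, such a vars file can hold any values you later reference from templated manifests via Kluctl’s {{ ... }} templating; every key below is invented purely for illustration:

```yaml
# vars/prd.yml — hypothetical example values, not a recommendation
cluster_name: production-kubernetes-cluster-fqdn
thanos:
  replicas: 3
  retention: 90d
```

Keys shared by all targets belong in vars/common.yml, while per-target overrides live in the file matching the target name (prd.yml, stg.yml).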
Wrap-up #
What did we achieve in the first part:
- we’ve set up our development environment to make it easier to develop in Jsonnet
- we’ve set up a basic directory structure for working with kluctl in the next parts
Stay tuned: in the next part we’re going to adjust the project structure and make room for our tenants, or even plan the whole tenant setup together. Think about questions like:
- how many teams are there?
- do SREs need access to ALL the metrics?
- and many others