Cloud Native Rejekts EU 2025

31 March 2025 London, United Kingdom

Kubenix: Declare Your K8s Workloads Fully Declarative

slides.pdf

youtube.com

Abstract

Kubenix allows the generation of Kubernetes manifests by leveraging Nix modules. On top of OpenAPI, Kubenix exposes the core Kubernetes API for the functional language Nix. This enables a fully declarative description of Kubernetes workloads with the best reproducibility, thus making YAML templating obsolete.

Kubenix’s Helm wrapper provides access to the large ecosystem of the de-facto package manager for Kubernetes while preserving Nix’s qualities. With the ability to build reproducible OCI container images with Nix, Kubenix both simplifies and improves the definition of Kubernetes workloads.

After briefly introducing Nix itself, this talk will showcase Kubenix with practical use cases ranging from simple Kubernetes manifests to complex application stacks.

Let’s make our Kubernetes workloads both declarative and reproducible!

Transcript

Intro

Hello everyone,

today I want to present to you a tool called Kubenix, which allows you to declare your Kubernetes workloads in a fully declarative and reproducible way.

echo $(whoami)

The slides of this presentation can be found online by scanning this QR code.

Let me briefly introduce myself: My name is Arik Grahl and I work as a Senior Software Engineer at SysEleven. If you want to reach out, feel free to contact me via any of those channels.

Templating of YAML with Helm

In order to overcome limitations of flexibility and modularity of YAML, Helm templates those structured documents.

However, this only mitigates some of its shortcomings while introducing other issues, which becomes evident in this example.

There are an awful lot of control structures in this template; it even tries to do some validation during the rendering phase with this required statement.

You need to juggle the indentation of arbitrary snippets, and control structures like this if statement tend to span across hundreds of lines of code.

I think we deserve better.

Introducing Kubenix

Kubenix can actually streamline the definition of complex application stacks.

Kubenix is built on Nix, which is a domain-specific language. It is a purely functional language that is lazily evaluated. As a DSL it is purpose-built, so you wouldn’t develop a full application in it. Nix as a technology claims to enable systems which are reproducible, declarative, and reliable.

Therefore, Kubenix leverages so-called NixOS modules to define arbitrary Kubernetes manifests. In doing so, it enables modularity through inheritance and composability. Kubenix essentially implements the Kubernetes OpenAPI specification, so it brings additional type safety at build time.

Definition of a Web Application with Kubenix (1/4)

Let’s have a look at a small example of a web application.

Since Nix is a functional language, usually every expression is a function. This is how you would define a function, and with let you can bind local variables like app, version, and host to a scope. You can also construct more complex types: here we are inheriting the previously defined variables app and version into the attribute set labels.

To define actual Kubernetes manifests we import the corresponding Kubenix module and can start defining our resources. Our web application consists of a deployment, a service, and an ingress, each named by the variable app, so in this case nginx.
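The structure just described can be sketched as follows. This is an illustrative reconstruction, not the slide verbatim: the concrete values for version and host, and the exact module path kubenix.modules.k8s, are assumptions based on the upstream Kubenix examples.

```nix
{ kubenix, ... }:
let
  app = "nginx";
  version = "1.27.0";      # assumed value for illustration
  host = "example.com";    # assumed value for illustration
  # Reuse the bindings above as Kubernetes labels.
  labels = { inherit app version; };
in {
  imports = [ kubenix.modules.k8s ];

  # One resource of each kind, all named after the app.
  kubernetes.resources = {
    deployments.${app} = { /* ... */ };
    services.${app} = { /* ... */ };
    ingresses.${app} = { /* ... */ };
  };
}
```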

Definition of a Web Application with Kubenix (2/4)

Now, let’s have a closer look at those three resources.

To define a Kubernetes resource, we simply follow the usual structure of those resources; in this case we define the spec field of a deployment. For the matchLabels we can conveniently use the previously defined labels. Those labels can be further reused by inheriting them into the template.metadata field as well. Then we simply define a container image, which reflects the app and the version.
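A sketch of that deployment spec, assuming the app, version, and labels bindings from the earlier function. The attribute-set style for naming the container is how Kubenix typically models named list items; details may differ between Kubenix versions.

```nix
kubernetes.resources.deployments.${app}.spec = {
  # Match pods via the shared labels attribute set.
  selector.matchLabels = labels;
  template = {
    # inherit puts the same labels on the pod template.
    metadata = { inherit labels; };
    # Image tag derived from the app and version bindings.
    spec.containers.${app}.image = "${app}:${version}";
  };
};
```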

Definition of a Web Application with Kubenix (3/4)

The definition of the service is equally easy, by simply defining its spec field.

Again, we are using the labels as a selector, which ensures that this service will always match the pods of the corresponding deployment. The ports are a list, which is also straightforward to define in Nix. We specify the protocol and port and are done with the service.

Note that the port has a different type than the protocol, which, in contrast to Helm, is actually checked.
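The service described above might look like this; again a sketch reusing the earlier bindings, with field names following the Kubernetes API:

```nix
kubernetes.resources.services.${app}.spec = {
  # The shared labels guarantee the service selects the deployment's pods.
  selector = labels;
  ports = [
    {
      protocol = "TCP";  # a string ...
      port = 80;         # ... while the port is an integer, checked at build time
    }
  ];
};
```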

Definition of a Web Application with Kubenix (4/4)

Let’s move on to the last resource of our web application.

For the ingress we can directly define the spec.rules. Here, we are inheriting the previously defined host and define the paths as a list. We want to match all requests here, so we define the root path and use Prefix as the pathType. Now we can select the backend.service by its previously defined name and use the well-known port.number.
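Put together, the ingress rules could read as follows; as before, this is an illustrative sketch that assumes the host and app bindings from the opening example:

```nix
kubernetes.resources.ingresses.${app}.spec.rules = [
  {
    # Bind the rule to the host defined at the top of the file.
    inherit host;
    http.paths = [
      {
        path = "/";            # match all requests
        pathType = "Prefix";
        backend.service = {
          name = app;          # the service defined on the previous slide
          port.number = 80;    # the well-known HTTP port
        };
      }
    ];
  }
];
```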

And in less than 40 lines of code, we have defined a web application consisting of three Kubernetes resources, with minimal boilerplate overhead and additional type safety, ready to be reused elsewhere.

Building and Applying

Now let’s have a look at how we can build and apply this.

Here, we can simply use the standard Nix tooling and build the code, which I have just demonstrated. This gives us a v1/List of those resources, which we can inspect. Or we can move forward and simply apply this to a cluster of our choice.
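In terms of commands, this amounts to something like the following. The flake output name manifests is an assumption; the exact attribute path depends on how the project wires up Kubenix.

```shell
# Build the manifests; the result is a v1/List of the defined resources.
nix build .#manifests

# Inspect the generated list before doing anything with it.
cat result

# Or apply the generated resources directly to the current cluster.
kubectl apply -f result
```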

Consuming a Helm Chart with Kubenix (1/2)

So, wouldn’t it be cool if we somehow could benefit from existing ecosystems dealing with packaging of Kubernetes workloads?

Conveniently, Kubenix also comes with a Helm module, which allows us to consume arbitrary Helm charts. Again, we define a function and specify the version of the Helm chart we want to consume as a local variable. Now, we import the Helm module and can start defining Helm charts right away. Therefore, we fetch our Helm chart, specified by its version and repo URL. We give this chart a name and specify a content-based hash in order to satisfy the reproducibility requirement of Nix. Then we can directly specify any values which are supported by the Helm chart. In this example we are setting the replicaCount to 2.
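A sketch of such a Helm release, assuming the ingress-nginx chart as in the later slides. The helper kubenix.lib.helm.fetch and the chart version are taken from the upstream Kubenix examples and may differ in your setup; the sha256 is a placeholder, and Nix will report the expected value on mismatch.

```nix
{ kubenix, ... }:
let
  version = "4.12.0";  # chart version, assumed for illustration
in {
  imports = [ kubenix.modules.helm ];

  kubernetes.helm.releases.ingress-nginx = {
    chart = kubenix.lib.helm.fetch {
      repo = "https://kubernetes.github.io/ingress-nginx";
      chart = "ingress-nginx";
      inherit version;
      # Content-based hash, required for reproducibility (placeholder).
      sha256 = "sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=";
    };
    # Any value supported by the chart can be set directly.
    values.replicaCount = 2;
  };
}
```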

Consuming a Helm Chart with Kubenix (2/2)

Did you ever have a Helm chart which didn’t support templating a value you needed?

Well, Kubenix has you covered. Again, with the same boilerplate code, we now import both the Kubernetes and the Helm module. We define the Helm chart as before, but now pretend that the replicaCount isn’t supported. We can fix this by simply defining the kubernetes.resources as in our very first example. Here, we enforce the replicas of the ingress-nginx-controller deployment to have the value 2.
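Combining both modules might look like this; a sketch under the same assumptions as the previous snippet, with the Helm release elided:

```nix
{ kubenix, ... }: {
  # Import both the plain-Kubernetes and the Helm module at once.
  imports = [ kubenix.modules.k8s kubenix.modules.helm ];

  # The Helm release, defined exactly as before.
  kubernetes.helm.releases.ingress-nginx = { /* ... */ };

  # Pretend the chart does not template replicaCount and enforce it here,
  # overriding the rendered deployment like any other resource.
  kubernetes.resources.deployments.ingress-nginx-controller.spec.replicas = 2;
}
```

Because the rendered chart and the hand-written resources live in the same module system, the override merges into the chart’s output instead of patching it after the fact.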

This is how we can “fix” arbitrary Helm charts. Since we are already here, we can just as well define our previously demonstrated web application as part of a larger stack, consisting of the deployment, the service, and the ingress.

Further Readings

This brings me to the end of my talk.

Here are again the slides of this presentation and a link to the Kubenix project.

If you’d like to learn more about our product, which uses Nix and Kubenix under the hood, you can find further information here.