ContainerDays Conference 2025

09 September 2025 Hamburg, Germany

Kubenix: Declare Your K8s Workloads Fully Declarative

slides.pdf

youtube.com

Abstract

Kubenix allows the generation of Kubernetes manifests by leveraging Nix modules. On top of OpenAPI, Kubenix exposes the core Kubernetes API for the functional language Nix. This enables a fully declarative description of Kubernetes workloads with the best reproducibility, thus making YAML templating obsolete.

Kubenix’s Helm wrapper provides access to the large ecosystem of the de-facto package manager for Kubernetes while preserving Nix’s qualities. With the ability to build reproducible OCI container images with Nix, Kubenix both simplifies and improves the definition of Kubernetes workloads.

After briefly introducing Nix itself, this talk will showcase Kubenix with practical use cases ranging from simple Kubernetes manifests to complex application stacks.

Let’s make our Kubernetes workloads both declarative and reproducible!

Transcript

Intro

Hello, everyone, and thank you for joining my talk today!

Today I want to showcase a technology called Kubenix, which enables us to define our Kubernetes workloads in a fully reproducible fashion.

If you’d like to follow along or reference these slides later, just scan this QR code or visit the link provided.

echo $(whoami)

Before we get started, let me quickly introduce myself.

My name is Arik Grahl and I am a Senior Software Engineer at SysEleven, a cloud service provider based in Berlin. At SysEleven we operate several data centers in Germany and benefit from our own network infrastructure. At the core is our OpenStack-based cloud offering with a managed Kubernetes on top of it. Within my team, we build the highest abstraction layer for end users, a software supply-chain management system that handles application lifecycle management on top of our infrastructure. My day-to-day business involves developing Kubernetes controllers in Golang and, of course, packaging with Nix.

If anything in this talk sparks your interest, feel free to connect with me via any of the channels listed here. I’m always happy to chat!

YAML

As you know, Kubernetes, at its core, is a distributed operating system. Under the hood, it relies on Golang types and Protobuf for defining objects. But for most end users and operators, what we actually interact with is the API, which speaks JSON and, far more commonly, YAML.

YAML is a great transport format for APIs. However, using YAML as the source of truth for complex infrastructure brings its own set of challenges:

Especially with complex documents, it becomes evident that YAML is very verbose and quite repetitive.

It is not easy to extend a document or organize it into fragments.

The concepts it provides for abstraction and reusability are insufficient, which makes inheritance challenging.

Type safety is weak, so confusing type errors can occur.

And these issues only grow as your systems scale.

Tooling to Generate Structured Documents

Of course, none of this is news to anyone who’s worked with Kubernetes for more than a week.

In response, the ecosystem has sprouted a variety of tools for generating YAML and JSON:

Jsonnet, Kustomize, Helm, Helmfile, you name it.

But a critical observation: most of these tools don’t solve the underlying problems with plain YAML. Instead, they offer mitigation strategies, essentially layering more abstraction and indirection on top, often introducing entirely new classes of issues. It’s a bit like patching leaks in a boat instead of building a better one.

Helm (1/2)

Among these tools, Helm has emerged as the de facto standard for packaging and deploying applications in the Kubernetes ecosystem. Whenever there is a cloud-native application, chances are incredibly high that you will find a Helm chart.

It even offers some rudimentary dependency management by referencing other charts as dependencies.

However, at its core, it’s simply a templating engine built around Golang’s text/template library. While technically sound, this approach essentially means we’re templating a structured language, which is more error-prone than it appears at first.

Helm (2/2)

This has real consequences. Let’s look at a typical Helm template.

You’ll notice heavy use of prefixed conditionals, which can be confusing at first glance.

Even more challenging, templates sometimes enforce configuration validation at render time. That’s mixing validation with presentation, making things harder to debug.

YAML fragments are scattered throughout, and without schema validation, it’s all too easy to make a syntactic mistake that only shows up much later.

Injecting values can require carefully managed indentation, introducing a whole class of errors.

Perhaps worst of all, these if conditions often span entire files, sometimes hundreds of lines, making it far too easy to forget an end or miss a logic branch entirely. Over time, your templates begin to resemble a giant bowl of spaghetti.

Entering Nix

Luckily, we can do better. Let me introduce you to Nix, the basis that powers Kubenix, and a real solution to many of the problems we’ve just seen.

With Nix, we can make systems that are: reproducible, declarative and reliable.

Let’s look at what makes Nix so powerful, and how we can leverage its unique properties to escape the YAML/templating rabbit hole.

What is Nix?

Now, before we go further, let’s clarify terminology, because “Nix” can refer to several things:

First, there’s NixOS: a functional, Linux-based operating system built on top of Nixpkgs. It’s immutable by design, and upgrades are atomic and reversible.

Then there’s Nixpkgs: the package collection for the Nix package manager, written in the Nix language. Nixpkgs is also a large monorepository: with over 100,000 packages, it is one of the largest in open source. It includes all dependencies for a complete and self-contained build chain.

And at the heart of everything: The Nix domain-specific language. It’s a purely functional, lazily evaluated language, designed as the language for the Nix package manager.

Nix DSL: Purely Functional

Since the Nix language is central to Kubenix, let’s explore a few of its features. We start with the ones related to the language’s functional nature.

The first property is that every valid piece of Nix code is an expression that returns a value.

For example, a string literal is an expression that returns a string.

The same goes for a more complex expression: an attribute set, which holds the string from before at the key x. Again, when evaluating this expression, its value is returned.
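
In a nix repl session, these two examples might look like this (a minimal sketch; the concrete string is illustrative):

  nix-repl> "hello"
  "hello"

  nix-repl> { x = "hello"; }
  { x = "hello"; }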

The second property of a functional language is that evaluating a Nix expression will always yield a data structure.

To illustrate this, we define a function f, which receives an argument x and returns it as-is. When calling f with a string, for instance, the function evaluates and returns the respective data structure.

If we now call the same function f with a different argument, for instance the attribute set from the example before, we receive the passed data structure. This also illustrates the concept of parametric polymorphism, as we can pass different types to the same function. Generally, Nix is dynamically but strongly typed, meaning types are checked at evaluation time and strictly enforced afterwards.
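
Sketched in the repl, the identity function from this example might look as follows:

  nix-repl> f = x: x
  nix-repl> f "hello"
  "hello"

  nix-repl> f { x = "hello"; }
  { x = "hello"; }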

The last important property of a functional language is that evaluation does not execute a sequence of operations.

Let’s define a variable x with a certain value and another variable y that depends on it. When evaluating, we get the expected result based on both variables.

However, we can also define y and x in the opposite order. When evaluating y, we get the same value and no error whatsoever. Nix is about describing data and the relationships between data, not about executing steps in sequence.
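
A minimal repl sketch of this order independence (the concrete values are illustrative):

  nix-repl> let x = 1; y = x + 1; in y
  2

  nix-repl> let y = x + 1; x = 1; in y
  2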

Nix DSL: Lazily Evaluated

Another key property: Nix is lazily evaluated. This means nothing is actually computed until it’s needed.

To illustrate this, let’s define an attribute set with two keys: one holding an exception, the other our well-known string. If we evaluate the key x of our attribute set here, Nix throws an exception, as expected.

However, when we evaluate the key y of the exact same attribute set, we successfully get the respective value, without any errors.
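
Sketched in the repl (error output abbreviated):

  nix-repl> { x = throw "oops"; y = "hello"; }.x
  error: oops

  nix-repl> { x = throw "oops"; y = "hello"; }.y
  "hello"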

Lazy evaluation helps avoid unnecessary computation and makes composability much easier.

Nix DSL: Purpose-Built

Last but not least, Nix is purpose-built for system packaging.

This means that a real-world Nix expression in Nixpkgs usually returns a so-called derivation.

This derivation can be encapsulated by a higher abstraction, providing language-specific common builders, like here for Golang.

For instance, if we want to build the commandline tool kubectl as a common Golang example, we would specify the package name, its version and the subpackages we are interested in.

We can then provide the necessary source code from GitHub, identified by an owner, repo and Git revision, guarded by a hash.

This is all it takes to package a binary with Nix, thanks to the rich utility functionality. You are not expected to understand this example in depth; it should just give you a rough idea of how Nix is used in the real world and illustrate its role as a purpose-built language.
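
For illustration, a condensed sketch of what such a package definition might look like; the version is illustrative, the hash is a placeholder, and the real Nixpkgs expression is more elaborate:

  { buildGoModule, fetchFromGitHub }:

  buildGoModule rec {
    pname = "kubectl";
    version = "1.31.0";                  # illustrative version

    src = fetchFromGitHub {
      owner = "kubernetes";
      repo = "kubernetes";
      rev = "v${version}";
      hash = "sha256-...";               # placeholder; pins the exact sources
    };

    vendorHash = null;                   # dependencies are vendored in-tree

    subPackages = [ "cmd/kubectl" ];     # build only the kubectl binary
  }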

Introducing Kubenix

Now that we understand what makes Nix so compelling, let’s zoom in on Kubenix, the bridge between the Nix and Kubernetes ecosystems.

Kubenix leverages the power of NixOS modules to define Kubernetes manifests, giving us advantages that go far beyond what templating tools can offer.

Why is this exciting? For starters:

Boilerplate code is slashed: no more endless copy/paste YAML!

True type safety: You get early feedback if something doesn’t match the expected schema.

Composability and inheritance: Nix’s language features let you abstract, extend, and override configurations with ease.

First-class integration: Kubenix makes it trivial to pull in existing Helm charts, and we can declaratively and reproducibly use container images built with Nix. Keep these strengths in mind as we run through some concrete examples.

Defining a K8s Manifest with Plain Nix (1/2)

Let’s first see what it would look like to write a Kubernetes manifest using “plain” Nix.

We start with a few variables like name and version, which will be used throughout our Nix expression.

The writeText function encapsulates a derivation that outputs a file, and generators.toYAML turns a Nix attribute set into a YAML string, the payload for our pod.yaml file.

We can add static data like apiVersion and kind and construct the metadata by inheriting the previously defined name.

When defining spec.containers, we can then again reuse this pattern and inject the name once more.

With string manipulation, we can then construct the container’s image URI based on the name and version. The ports definition, again, consists of static values. This is already pretty neat: there is no templating of structured documents involved, and we can abstract and reuse common values like the name here. But this is, ultimately, just code that produces YAML; the underlying process is still “build then hope”.
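
Put together, this plain-Nix approach might look roughly like this (name and version are illustrative); building it yields a pod.yaml file in the Nix store:

  { pkgs ? import <nixpkgs> { } }:

  let
    name = "nginx";
    version = "1.27.0";
  in
  pkgs.writeText "pod.yaml" (pkgs.lib.generators.toYAML { } {
    apiVersion = "v1";
    kind = "Pod";
    metadata = { inherit name; };               # reuse the name variable
    spec.containers = [
      {
        inherit name;
        image = "${name}:${version}";           # image URI via string interpolation
        ports = [ { containerPort = 80; } ];    # static port definition
      }
    ];
  })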

Defining a K8s Manifest with Plain Nix (2/2)

And here’s the catch: There’s no type safety.

Let’s say you make a mistake and accidentally set containerPort to the string “EIGHTY” rather than the integer 80. Nix is perfectly happy to generate this YAML file, just like helm template would.

As you can see from the error message, this will break only once it is passed to a downstream component like the Kubernetes API server.

The OpenAPI Specification: A Standard to Describe HTTP APIs

This is where the OpenAPI specification comes in. The OpenAPI specification is a standard to describe HTTP APIs.

As you can see in this excerpt, Kubernetes, like many modern APIs, publishes a comprehensive OpenAPI specification that defines, down to every property and field, what valid manifests look like, including types, descriptions, and which fields are required.

In this specification you can also find the definitions of nested values like the ContainerPort. Here we can see, for instance, that this field must be an integer and not a string.

Wouldn’t it be great if we could use this machine-readable contract to catch errors before they ever reach the cluster?

Kubenix Implements the K8s OpenAPI Specification (1/2)

That’s exactly what Kubenix does. By implementing Kubernetes’ OpenAPI specification, Kubenix enables Nix to validate your manifest definitions at build time.

The first benefit is that it significantly reduces boilerplate code for us. Let’s revisit our Pod example, but this time using Kubenix.

Again, we start with the variables name and version.

Now, we import Kubenix’s Kubernetes module, which holds the complete Kubernetes API.

Then we can conveniently start defining the Pod’s container. Kubenix takes care of setting the Pod’s metadata.name to the defined ${name} here and also sets the container’s name attribute accordingly.

Just like in the example before, we can use string manipulation to craft the container’s image URI.

A definition of a port named http and the respective containerPort would look like this.

Since the OpenAPI specification dictates certain standards, we can easily get rid of a lot of boilerplate code without relying on templating. I find it pretty impressive how coherently we can define the previous example in just these few lines of code.
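
As a sketch, the complete Kubenix module for this Pod might look like this (name and version illustrative; the exact module syntax can differ slightly between Kubenix versions):

  # This module would be passed to Kubenix's evalModules.
  { kubenix, ... }:

  let
    name = "nginx";
    version = "1.27.0";
  in
  {
    imports = [ kubenix.modules.k8s ];    # the Kubernetes API as a module

    # Kubenix derives metadata.name and the container's name from the keys.
    kubernetes.resources.pods.${name}.spec.containers.${name} = {
      image = "${name}:${version}";       # image URI via string interpolation
      ports = [
        {
          name = "http";
          containerPort = 80;
        }
      ];
    };
  }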

Kubenix Implements the K8s OpenAPI Specification (2/2)

But here’s the real win: type safety at build time.

If you make the same mistake as before, assigning “EIGHTY” to the containerPort, Kubenix immediately tells you: “Hey, this should be a signed integer, not a string!” You catch the error instantly, right when you write the code, not at deploy time. The same goes for other common pitfalls: Unknown field names? Invalid values? Missing required keys? All flagged up front.

Defining a Web Application with Kubenix (1/4)

So far we’ve only looked at toy examples such as a simple Pod. But what about real applications? Let’s see how Kubenix scales up to a full web-application stack.

Suppose we want to deploy Nginx. We set up our parameters: name, version, and port. We can now construct our labels based on these variables, similar to the first pure-Nix example before.

Again, we import Kubenix’s Kubernetes module and can now define our resources.

These consist of a Deployment, a Service, and an HTTPRoute. Everything is cleanly parameterized by our variables: no repetition, and it’s easy to extend or override later. Now, let’s drill into each part.
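
As a reference for the drill-down, the module’s skeleton might look roughly like this (values and the exact label set are illustrative; the three spec blocks are filled in below):

  { kubenix, ... }:

  let
    name = "nginx";
    version = "1.27.0";
    port = 80;
    labels = { app = name; };      # assumed label set, built from the variables
  in
  {
    imports = [ kubenix.modules.k8s ];

    kubernetes.resources = {
      deployments.${name}.spec = { /* filled in below */ };
      services.${name}.spec = { /* filled in below */ };
      httpRoutes.${name}.spec = { /* filled in below */ };
    };
  }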

Defining a Web Application with Kubenix (2/4)

First, the Deployment.

Here we can take advantage of the previously defined labels by specifying the matchLabels, and can move directly on to defining the template.

Again, we use the labels and inherit them into the metadata section.

Similar to the previous examples, we define the container’s image URI, and we are done with the Deployment. Pretty simple, right?
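
Sketched as the deployments entry of the skeleton above:

  deployments.${name}.spec = {
    selector.matchLabels = labels;                            # reuse the shared labels
    template = {
      metadata = { inherit labels; };                         # inherit them into the metadata
      spec.containers.${name}.image = "${name}:${version}";   # image URI via interpolation
    };
  };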

Defining a Web Application with Kubenix (3/4)

Next up: the Service.

Once more we use the labels as a selector to make sure that this Service selects the appropriate Pods from the Deployment.

Defining ports is simple: we inherit the port variable and specify that we are serving TCP traffic. Again, the code reads like a direct translation of what we want to accomplish.
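
The services entry, continuing the same sketch:

  services.${name}.spec = {
    selector = labels;             # match the Pods created by the Deployment
    ports = [
      {
        inherit port;              # reuse the port variable
        protocol = "TCP";
      }
    ];
  };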

Defining a Web Application with Kubenix (4/4)

Finally, we set up the HTTPRoute, in other words, how the application is accessed publicly.

This consists of a list of hostnames and rules.

We want to match all paths; therefore, we can use this static rule.

Then we define our backendRefs.

Here, we inherit the previously defined port number and use the app variable to derive the name. This makes sure that we match the Service, which is already coupled to the Deployment. As you can see, Kubenix doesn’t just let you “write YAML with Nix”; it enables you to build, validate, and compose whole applications in a way that’s clean, reliable, and fully reproducible from end to end.
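
And the httpRoutes entry, assuming the Gateway API types are known to Kubenix (e.g. registered as custom types); the hostname is a placeholder:

  httpRoutes.${name}.spec = {
    hostnames = [ "www.example.com" ];   # placeholder hostname
    rules = [
      {
        # Static rule matching all paths:
        matches = [ { path = { type = "PathPrefix"; value = "/"; }; } ];
        backendRefs = [
          { inherit name port; }         # couples the route to the Service
        ];
      }
    ];
  };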

Consuming a Helm Chart with Kubenix (1/2)

Although Kubenix offers a powerful alternative to Helm’s templating model, its creators recognize that Helm itself has grown into much more than just a templating engine. It’s an ecosystem in its own right, with thousands of ready-made charts for Kubernetes workloads. With Kubenix we can integrate them without any friction.

How does this work? First, we simply import the Helm module provided by Kubenix.

Then, in a fully declarative fashion, we specify our Helm release. In this example, we’re installing Cilium as our Container Network Interface.

We start by using Kubenix’s built-in Helm fetcher, which references the chart’s repo, name, and version and pins it by content hash to guarantee reproducibility.

All Helm values can be defined in Nix directly, so customizing your deployment, such as enabling Gateway API support, is as easy as updating a strongly typed config. This approach means you can manage large, complex Helm values without mental overhead. With Helmfile, for instance, you would template these values, effectively adding yet another layer of templating on top.
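
A sketch of this release definition (chart version and hash are placeholders; fetcher and option names may differ slightly between Kubenix versions):

  { kubenix, ... }:
  {
    imports = [ kubenix.modules.helm ];

    kubernetes.helm.releases.cilium = {
      chart = kubenix.lib.helm.fetch {
        repo = "https://helm.cilium.io";
        chart = "cilium";
        version = "1.16.0";          # placeholder version
        sha256 = "sha256-...";       # placeholder; pins the chart contents
      };

      # Plain Nix instead of templated values files:
      values.gatewayAPI.enabled = true;
    };
  }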

Consuming a Helm Chart with Kubenix (2/2)

But Kubenix doesn’t just let you use Helm in isolation, it enables you to mix and match Helm releases with your own custom Kubernetes resources, all in a single, unified configuration.

So, let’s revisit our example from before one last time.

We import both the Kubernetes and Helm modules from Kubenix, then define our Kubernetes resources as before: Deployment, Service, and HTTPRoute.

Just like in the previous example, we then define our Helm releases, for instance our Cilium release, which is defined by a chart and some values.

The integration is seamless: Kubenix merges all your defined resources into a consolidated manifest and even detects colliding resources at build time, preventing accidental overwrites. This means you can add your own custom functionality on top of third-party Helm charts, for example to bridge gaps where a Helm chart doesn’t quite fit your needs. At the same time, we finally get proper dependency management for Kubernetes, as demonstrated with this Gateway API example and Cilium here. Best of all, we are no longer limited by what the chart authors chose to expose via values: we are no longer bound to the chart’s templating and can simply override any subset of the generated resources on the Nix side. In short: Kubenix opens up the entire Helm chart ecosystem to you, on your own terms.

Building and Applying

Let’s take a look at what the workflow and toolchain look like in practice.

Once you’ve defined your Kubernetes workloads, using either pure Nix or Kubenix, it’s as straightforward as running nix build. The build produces a single output: A symlink called result that points into the Nix store. Inside, you’ll find a Kubernetes v1/List object containing all your manifests, neatly assembled. Want to inspect your output? You don’t need any special tools.

For instance, with jq you can quickly visualize what kinds of resources are bundled.

You’ll see a summary of everything defined: Deployments, Services, HTTPRoutes, and more.

And when you’re ready to apply your infrastructure, just hand the output straight to kubectl.

Kubernetes creates all resources in one go. It’s smooth, transparent, and integrates perfectly with familiar workflows.
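
An illustrative session (assuming the expression is exposed as the default build target; the resource summary matches the web-application example from before):

  $ nix build
  $ jq -r '.items[].kind' result | sort | uniq -c
     1 Deployment
     1 HTTPRoute
     1 Service
  $ kubectl apply -f result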

The real power here? With Nix we have one single tool to build everything, from configuration to the rendered manifests.

In fact, Nix even gives us fully reproducible and pure builds: Everything can be tracked, versioned, and rebuilt identically, perfect for debugging and auditing.

These properties make the whole workflow very GitOps-friendly, if you’d like to roll this out for your team.

Since Nix is very powerful, we can add arbitrary custom tooling to the process if needed, without extra burden for developers.

In short: Nix and Kubenix turn the complicated puzzle of Kubernetes configuration management into a single, predictable, and developer-friendly pipeline.

End-To-End Definition of a K8s Workload: Container

But why stop at just managing manifests? Wouldn’t it be even better if we could define and build our container images using the same, unified toolchain? With Nix, we can do exactly that!

Let me give you a quick taste: here’s how you could fully define and containerize Nginx within your stack.

We start by pulling the desired Nginx version directly from Nixpkgs, guaranteeing we always get the exact build we expect.

Then we can define our container by using buildLayeredImage, a fully deterministic and very efficient container image builder.

For the container image, we inherit the name and tag it with our version.

For the Entrypoint, we reference the binary from the corresponding Nixpkgs package and specify some common arguments.

Finally, when defining our Kubernetes Pod, the image field references the previously defined container. We still need to make sure that this image URI is actually served by a registry, but we can now define the entire stack completely in Nix.
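
A sketch of this image definition (the name is illustrative; the Pod’s image field would then interpolate the same name and tag):

  { pkgs ? import <nixpkgs> { } }:

  let
    nginx = pkgs.nginx;                  # the exact Nginx build from Nixpkgs
  in
  pkgs.dockerTools.buildLayeredImage {
    name = "nginx";
    tag = nginx.version;                 # tag the image with the package version
    config.Entrypoint = [
      "${nginx}/bin/nginx"               # binary from the Nixpkgs package
      "-g" "daemon off;"                 # common argument: keep nginx in the foreground
    ];
  }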

End-To-End Definition of a K8s Workload: SBOM and SLSA

Here’s where the end-to-end approach really shines: all this precise composition unlocks deep visibility and security.

Let’s say you want a full Software Bill of Materials for your workload. No problem: Just install sbomnix with a single command, and you can generate a complete SBOM directly from your build output.
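
For instance (the sbomnix project lives at github:tiiuae/sbomnix; the exact invocation may differ between versions):

  $ nix run github:tiiuae/sbomnix#sbomnix -- ./result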

You’ll see everything, down to libraries like glibc, just from the Pod manifest.

You want verifiable supply-chain attestation? With the provenance commandline tool from the same package, it’s equally easy to generate a full SLSA provenance manifest, documenting exactly how and from which dependencies your final artifact was built.
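
Sketched analogously, assuming the tool is exposed as a flake app named provenance:

  $ nix run github:tiiuae/sbomnix#provenance -- ./result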

Again, we see the expected dependencies, like glibc, the nginx binary, or the Nginx container image itself.

End-To-End Definition of a K8s Workload: Dependency Graph

And finally, for those who love a good visual: The sbomnix project also provides a commandline tool to generate beautiful dependency graphs.
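
Hypothetically, the invocation could look like this (the tool is called nixgraph; flags may differ between versions):

  $ nix run github:tiiuae/sbomnix#nixgraph -- ./result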

See, right before your eyes, every relationship: Your Kubernetes manifest points to your custom container image. That image points to exactly the packages that went into it. Shared dependencies, like builder scripts, are clearly mapped, not duplicated, not hidden.

It’s a perfect illustration of the end-to-end, reproducible architecture made possible by Nix and Kubenix.

Conclusion

Let’s recap what we’ve explored:

With Nix we get a very powerful and flexible, purely functional, lazily evaluated language, which is purpose-built for system packaging.

The language drives Nixpkgs, which offers a large and attractive ecosystem of utility functions and a wide range of system packages.

With Kubenix we can significantly reduce our boilerplate code, and it provides true type safety for Kubernetes, thanks to the OpenAPI specification.

Furthermore, through its means of abstraction, it allows us to manage complex stacks.

It integrates well with existing ecosystems, such as Helm on the one hand and OCI containers on the other.

In my opinion, this unified approach represents the package management solution for the “platform to build platforms”.

Questions & Answers

This brings me to the end of my presentation.

If you’d like to go over the slides once more, feel free to scan this QR code.

Here is the GitHub project of Kubenix, if you’d like to learn more about it.

At SysEleven we are building a cloud-native software supply-chain management system on the basis of Nix. If you are interested in this product, feel free to scan this last QR code here.

I am happy to answer some questions.