19 February 2026 Barcelona, Spain
Beyond Docker Builds: Declarative, Reproducible and Secure OCI Containers with Nix
Abstract
The Open Container Initiative (OCI) standardized the foundation of cloud-native infrastructure. However, most build systems lack determinism due to network access during builds, leading to non-reproducible artifacts and complicating software supply chain security (SSCS). While OCI supports layering for storage and cache efficiency, reflecting shared dependencies across artifacts remains complex.
Nix, as a package manager, enables declarative and reproducible builds in hermetic, network-isolated sandboxes, requiring all dependencies to be specified up front for long-term reproducibility.
Dependencies are treated as first-class citizens, making it easy to generate accurate Software Bills of Materials (SBOMs).
With dockerTools in the Nix standard library, these benefits reach the OCI ecosystem.
This talk highlights the advantages of fully declarative, reproducible OCI builds with Nix, offering deep insights and benefits to SSCS.
Let’s not just build containers, let’s declare them reproducibly!
Transcript
Intro
Hello, everyone, and thank you for joining!
Today I want to show how you can build declarative, reproducible and secure OCI containers with Nix.
If you’d like to follow along or reference these slides later, just scan this QR code or visit the link provided.
echo $(whoami)
Before we get started, let me quickly introduce myself.
My name is Arik Grahl and I am a Senior Software Engineer at SysEleven, a cloud service provider based in Berlin. At SysEleven we operate several data centers in Germany and benefit from our own network infrastructure. At the core is our OpenStack-based cloud offering with a managed Kubernetes on top of it. My day-to-day business involves developing Kubernetes controllers in Golang and, of course, packaging with Nix.
If anything in this talk sparks your interest, feel free to connect with me via any of the channels listed here. I’m always happy to chat!
Golang Example Application (1/2)
Before we go into the details of containers and Nix, I want to briefly show you the Golang application that will serve as our example throughout the presentation. It is not necessary to understand it in depth; the code should just illustrate what matters for packaging in terms of dependencies and reproducibility.
The main package contains some basic imports, which are coming from the Go standard library, so these are not important when packaging the application.
Our application will use SQLite and therefore uses this well-known library.
From the Golang point of view, this is an external dependency, which has some implications for packaging on its own.
Furthermore, it also introduces some requirements for the underlying system as it requires C bindings for the SQLite C library.
So we have already found some aspects for packaging, let’s have a look if there is more during runtime. The program starts by setting up a SQLite database, implements a single HTTP handler and finally starts the HTTP server on TCP/8080.
So far there are no further runtime implications for packaging the application.
Golang Example Application (2/2)
Let’s have a look at the HTTP handler. For each request it queries the cache database for a reasonably fresh entry. If no such entry is found, we have a cache miss or a stale read and prepare a request to the upstream service. We perform the request, make sure the response is closed, and read the response completely. Finally, the cache is repopulated with the recent upstream data via an upsert. The request is answered with either the information from the cache or a fresh upstream response.
At first, this might look irrelevant for dependencies and packaging, but you might have noticed that the communication to the upstream service is transport encrypted with TLS. To initiate this HTTPS connection, we require CA certificates to check for the validity of the certificate presented by the remote.
Build an Application with Docker: Reproducibility (1/2)
With the explicit and implicit dependencies identified, we can now containerize our application.
First, we start with the traditional approach of a Dockerfile.
The goal is to produce a valid container in terms of safety and correctness, while being as reproducible as possible.
Our builder image is the official Golang image in the Alpine flavor, which uses musl as its libc implementation. A common misconception is that container tags are immutable, but in fact they can change over time. That’s why we are using a digest, which is a content-based hash and therefore immutable. If the content behind that digest were unavailable, the build would error out rather than silently pull something else, so the first step is reproducible.
Our application has some system requirements. Since it requires C bindings, we need a compiler and development headers for our libc implementation, musl. Furthermore, the SQLite library we are using requires its headers to be present. In this step the build environment downloads contents from the internet in more or less undefined versions. While this is most likely safe and will not break, it is not guaranteed that a coworker building this will end up with the exact same contents. Therefore, this step is unfortunately not even close to being reproducible.
After that we can copy over the static source code files, which is pretty much reproducible.
The final step of the first stage builds the application with enabled C bindings.
The Go toolchain leverages a go.mod file to pin exact version and content hashes of all external libraries it uses.
Therefore, we should get a defined version of our external SQLite library, which is a requirement for a reproducible build.
For demonstration purposes, I haven’t specified a working directory different from the $GOPATH, which leads to the go.mod file being ignored.
This issue is easy to mitigate, but it illustrates that whether a build is reproducible really depends on the details.
In this multi-stage build, our actual runtime container is based on scratch.
Building on an empty base image is reproducible by definition.
We copy over the previously built binary from the builder container, and reproducibility boils down to whether the payload was built reproducibly before.
From the same container we are also copying over the musl library against which our binary was linked. Since it may have been updated while installing the system requirements, reproducibility is not guaranteed.
To enable communication with the upstream server via HTTPS, we also copy over the CA certificates from the original builder container. I would argue that they are copied over unchanged from the builder image, which is pinned to a digest, so this is most likely reproducible.
Finally, we specify the binary as the ENTRYPOINT of our container.
This static configuration is reproducible by definition.
Quite a few steps for a really simple application, but still manageable, I would say. Your view may differ, but given how much effort we put into trying to be reproducible, the outcome is far from optimal.
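For reference, the multi-stage Dockerfile walked through above might look roughly like this. This is a hedged sketch: the digest is a placeholder, and the library paths assume an x86_64 Alpine builder.

```dockerfile
# Builder stage: official Golang image in the Alpine flavor,
# pinned to an immutable content digest (placeholder value shown)
FROM golang:alpine@sha256:0000000000000000000000000000000000000000000000000000000000000000 AS builder

# System requirements: C compiler plus musl and SQLite headers.
# Not reproducible: apk fetches whatever versions Alpine serves today.
RUN apk add --no-cache gcc musl-dev sqlite-dev

COPY . .

# Build with C bindings enabled so the SQLite C library can be linked
RUN CGO_ENABLED=1 go build -o /app .

# Runtime stage: empty base image
FROM scratch
COPY --from=builder /app /app
# musl loader/library the binary is dynamically linked against
COPY --from=builder /lib/ld-musl-x86_64.so.1 /lib/ld-musl-x86_64.so.1
# CA certificates for the HTTPS connection to the upstream service
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt
ENTRYPOINT ["/app"]
```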
Build an Application with Docker: Reproducibility (2/2)
Let’s draw an interim conclusion on reproducibility when building with Docker.
You will end up with an impure artifact for potentially every RUN statement that utilizes the network stack.
Let that sink in for a second, because in theory that covers a lot.
The same applies to a FROM statement that uses solely a tag without a digest, because a tag is not immutable and can change over time.
Then there is the gray area, where “it depends”.
Statements like RUN can always invoke potentially non-reproducible commands, and even COPY can operate on potentially impure data.
In the last category of reproducible statements, I would place a FROM with a content-addressable digest.
The same applies to COPY of static data and to other static payload such as configuration like an ENTRYPOINT.
OCI Format: Reusability of Shared Dependencies (1/2)
Reproducibility is one thing. Let’s now have a look at how well we can reuse potentially shared dependencies when building with Docker. To that end, let’s take a quick excursion into the OCI format.
We take the previously built container image and export it in the OCI format.
I am using podman for this because it natively supports the OCI format.
The first file specifies the oci-layout, which is usually this simple JSON document.
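Per the OCI image layout specification, that file typically contains nothing more than:

```json
{
  "imageLayoutVersion": "1.0.0"
}
```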
The second file is an index file, again in the JSON format. It enumerates all the manifests we are holding in this container image, for example manifests for different CPU architectures.
The rest of the container image consists of so-called blobs, which are files with content addressable sha256 hashes.
Attentive listeners may recognize this blob by its hash.
It is the manifest, which was previously referenced by the index.json file.
This document now specifies a configuration and a list of layers.
Each of these items is a blob identified by its digest.
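For illustration, such a manifest looks roughly like this; the digests and sizes are placeholders, while the media types are the ones defined by the OCI image specification:

```json
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.oci.image.manifest.v1+json",
  "config": {
    "mediaType": "application/vnd.oci.image.config.v1+json",
    "digest": "sha256:<config-digest>",
    "size": 1234
  },
  "layers": [
    {
      "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
      "digest": "sha256:<layer-digest>",
      "size": 56789
    }
  ]
}
```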
We are now continuing with the configuration blob, which is specified in the config section of the manifest.
This last JSON document in this container image holds some metadata such as created, architecture, and os.
Furthermore, under config it specifies the container’s Entrypoint, among other potential options.
The rootfs attribute specifies how the actual payload layers of the container should be constructed.
The next blob is a gzipped tarball containing the application’s built binary in a directory structure. It is followed by a blob containing the musl library, and the last blob is an archive holding the final missing artifact, the CA certificates, in a nested directory structure.
This is the whole anatomy of our container in the OCI format. We can see that on the one hand we have a series of metadata and configuration: The index file, the manifest, and the configuration. On the other hand we have the actual payload in form of binaries, libraries and other files.
OCI Format: Reusability of Shared Dependencies (2/2)
So what does this mean for reusability of shared dependencies?
First of all, if we cannot reproduce builds, there is no reusability, because we cannot guarantee predictable outputs.
But suppose we had perfect reproducibility; we would then strive to reuse as many OCI blobs as possible. Since those blobs are identified by their digests and are therefore immutable, the container runtime usually stores them in a central place. This also means the same blobs can potentially be used across different containers.
As illustrated before, on the one hand we have manifests and configurations. These JSON documents have a relatively small data size and exclusively artifact-specific content. Therefore, we are not interested in the reusability of these blobs.
On the other hand we have the actual container layers. The size of these gzipped tar archives is potentially large. We distinguish between artifact-specific payload like the binary of our example application and shared dependencies such as the musl library.
These are the kinds of blobs we are very much interested in reusing across different containers.
So the container image of our example application is basically constructed of the application binary, the musl library, and the CA certificates.
Suppose we are running another container in our stack, for instance an Nginx server as a reverse proxy in front of our application.
This container consists of the nginx binary and could ideally use the exact same musl library and the exact same CA certificates.
This is not impossible to achieve when building container images with Docker, but there is no explicit tooling to streamline the process. The imperative fashion of a Dockerfile is also not particularly helpful for reusing shared dependencies across different OCI images.
Entering Nix
Luckily, we can do better. Let me introduce you to Nix, a powerful build system, which can address the exact issues we’ve just seen.
With Nix, we can make systems that are: reproducible, declarative and reliable.
Let’s look at what makes Nix so powerful, and how we can leverage its unique properties to get declarative, reproducible and secure OCI containers.
What is Nix?
Now, before we go further, let’s clarify terminology, because “Nix” can refer to several things:
First, there’s NixOS: A functional, Linux-based operating system built on top of Nixpkgs. It’s immutable by design, and upgrades are atomic and reversible.
Then there’s Nix itself, the package manager, and Nixpkgs: the package collection, defined in the Nix language. Nixpkgs is also a large monorepository. With over 120 thousand packages, it is even one of the largest in open source. It includes all dependencies for a complete and self-contained build chain.
And at the heart of everything: The Nix domain-specific language. It’s a purely functional, lazily evaluated and purpose-built language.
Nix DSL: Purpose-Built
Since the property of being purpose-built is of great importance for our use case, I would like to discuss it in more detail here. Nix is the domain specific language for the Nix package manager.
This means, that a real-world Nix expression in Nixpkgs usually returns a so-called derivation, which describes the package payload.
In this example, we are defining an anonymous function with two arguments: pkgs defaulting to the system’s nixpkgs and system defaulting to the current system.
The anonymous function returns a derivation, which describes our package we want to build.
The derivation uses the previously defined system and defines further attributes such as a name and a version.
To construct the package’s payload, we can define a builder, which is a simple shell script in this case.
This script produces a shell one-liner and stores it at the output path that Nix provides in the $out variable.
The builder wraps up by making the produced output executable.
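A minimal sketch of such a bare derivation might look like this; the package name, version, and the one-liner payload are illustrative, not taken from the slides:

```nix
{ pkgs ? import <nixpkgs> { }, system ? builtins.currentSystem }:

derivation {
  inherit system;
  name = "hello";                  # illustrative package name
  version = "0.1.0";               # extra attributes simply become environment variables
  builder = "${pkgs.bash}/bin/bash";
  PATH = "${pkgs.coreutils}/bin";  # chmod comes from coreutils
  args = [ "-c" ''
    # produce a shell one-liner at the output path $out ...
    echo '#!/bin/sh'       >  $out
    echo 'echo Hello, OCI!' >> $out
    # ... and make it executable
    chmod +x $out
  '' ];
}
```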
This is pretty low-level; usually you would not write bare derivations, but use library functions for common languages and frameworks, which provide a higher abstraction over derivations.
First and foremost there is stdenv.mkDerivation, which encapsulates commonly used build tooling and a compiler.
This is usually the basis for C and C++ programs, but the build inputs can also be extended to support builds for Java.
For languages which have a more opinionated way of building software, there are even higher level abstractions such as buildPythonPackage.
There is buildNpmPackage to package JavaScript or TypeScript applications.
With buildDotnetModule you can build for instance C# and F# applications.
buildGoModule is commonly used to package Go applications, while buildRustPackage is the library function to package a Rust project.
There is even support for complete frameworks like the library function buildFlutterApplication.
Packaging the Example Application: Build a Binary (1/2)
Before we build an OCI container with Nix, let’s focus on building a binary for our example application.
Just like in the example before, we are using an anonymous function, which receives the argument pkgs defaulting to the system’s nixpkgs.
With let we define some local variables like the name of our package and its version.
The function now returns a call to buildGoModule, one of the previously introduced library functions, which abstracts away the common attributes for packaging a Golang application.
We are inheriting the previously defined version into the scope and setting the pname to the previously defined name.
As an input src we are passing our current working directory.
Since we have external dependencies beyond the Golang standard library, we need to specify a vendorHash, which represents the hash of the vendored dependencies as pinned by go.sum.
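Assembled, the expression described above might look roughly like this; the package name, version, and the vendorHash value are illustrative placeholders:

```nix
{ pkgs ? import <nixpkgs> { } }:

let
  name = "weather-cache";  # illustrative name for the example application
  version = "0.1.0";
in
pkgs.buildGoModule {
  inherit version;
  pname = name;
  src = ./.;  # the current working directory as the source
  # hash over the vendored dependencies as pinned by go.sum;
  # on a mismatch, Nix prints the expected value
  vendorHash = "sha256-0000000000000000000000000000000000000000000=";
}
```

Assuming the expression is stored in default.nix, building it boils down to nix build -f default.nix.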
Packaging the Example Application: Build a Binary (2/2)
With this Nix expression, we can now move on to actually build the specified package.
We do so by invoking nix build against this Nix expression.
This gives us a symlink pointing to the output of the underlying Nix derivation in the Nix store.
If we follow this symlink, we see that we just built a dynamically linked executable, just as we intended.
Therefore, we can just execute this binary in the background and test our packaged application by sending a request to the endpoint it is intended to listen on.
We see that the application is actually doing what it’s supposed to do, providing us the current weather information.
It is indeed very simple to package applications with Nix in a fully declarative fashion and build them reproducibly.
Packaging the Example Application: Build an OCI Image (1/2)
Building the binary of an application is one thing, but we originally wanted to build a container.
So let’s have a look how we can go from here. This is the Nix expression, which we just used to build the binary.
First, we move the buildGoModule to a local variable, so that we can access it later on.
Since we no longer intend to build a bare binary, our anonymous function doesn’t return the buildGoModule call anymore.
Instead, we are returning dockerTools.buildLayeredImage, one of Nix’s abstractions for OCI containers.
We are inheriting the name into this derivation and specifying the tag with a prefixed version via string interpolation.
In contrast to the Dockerfile before, we are constructing the container image declaratively.
Instead of COPY statements, the contents attribute is simply a list of packages we want to include in the container image.
Here, we are adding the cacert package from Nixpkgs, which bundles the necessary CA certificates for our example application.
We could also add our previously defined bin to this list, but it is more elegant to simply reference it as our container’s Entrypoint.
The string interpolation makes sure that we reference the actual binary file from the package in this configuration, while the Nix builder automatically includes this reference as a dependency.
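Under the same illustrative names and placeholder hash as before, the resulting expression might look like this:

```nix
{ pkgs ? import <nixpkgs> { } }:

let
  name = "weather-cache";  # illustrative name, as before
  version = "0.1.0";
  # the binary build from before, moved into a local variable
  bin = pkgs.buildGoModule {
    inherit version;
    pname = name;
    src = ./.;
    vendorHash = "sha256-0000000000000000000000000000000000000000000=";
  };
in
pkgs.dockerTools.buildLayeredImage {
  inherit name;
  tag = "v${version}";
  # declarative contents instead of COPY statements
  contents = [ pkgs.cacert ];
  # referencing the binary here pulls it in as a dependency automatically
  config.Entrypoint = [ "${bin}/bin/${name}" ];
}
```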
Packaging the Example Application: Build an OCI Image (2/2)
So how can we now turn this Nix expression into an OCI image?
This is actually no different from the previous workflow of building a binary:
We are again just invoking nix build against the Nix expression.
Again, we are getting a result symlink, but now pointing to a gzipped tarball in the Nix store.
This archive represents an OCI image.
From there on, we can stick to the usual tooling, for instance, use docker load to make it available to our container runtime.
This enables us to start a container with docker run.
Just like in the example before, we can check with curl if the now containerized application responds as expected.
And indeed, we are getting the expected weather data.
We evolved our previous example by wrapping the output in an OCI image.
By doing so, we have preserved both the declarative approach and the reproducibility guarantee, thanks to Nix.
Fortunately, the tooling didn’t get more complicated: everything still boils down to nix build, and the output can be a frictionless drop-in replacement in existing workflows.
Revisiting Reusability of Shared Dependencies (1/2)
Now that we have built an OCI container with Nix, let’s revisit the topic of shared dependencies. I mentioned earlier another container, complementary to our example application: Nginx as a reverse proxy. Let’s quickly sketch such a container.
Again, we are using an anonymous function accepting pkgs as an argument and returning a buildLayeredImage.
The name of our container image will be nginx and it will be tagged with the version of nginx present in Nixpkgs.
Similar to the example before, for the Entrypoint, we are specifying the binary of nginx from Nixpkgs and set some common options.
This is all it takes to declare a reproducible container for an existing application coming from Nixpkgs.
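A hedged sketch of that Nginx container; the exact options passed to nginx are illustrative:

```nix
{ pkgs ? import <nixpkgs> { } }:

pkgs.dockerTools.buildLayeredImage {
  name = "nginx";
  # tag the image with the version of nginx present in Nixpkgs
  tag = pkgs.nginx.version;
  config.Entrypoint = [
    "${pkgs.nginx}/bin/nginx"
    "-g" "daemon off;"  # keep nginx in the foreground, as is common in containers
  ];
}
```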
Revisiting Reusability of Shared Dependencies (2/2)
We can now build this Nix expression representing an Nginx container and export the image in the OCI format. When looking at the output, we are no longer interested in the metadata and only look into the blobs. Among them, there is an OCI layer which includes the binary of Nginx. Beside this application-specific layer, we also have a layer containing the musl library.
Let’s now look at the layers of the OCI image of our example application, which we built earlier with Nix. Again, we focus on the blobs only and see that it holds the application’s binary. What’s very interesting now is that it also has an OCI layer containing the musl library, among others. And the best property is that this layer happens to be the exact same layer as in the other container, indicated by the identical hash.
So we can see that each and every dependency translates to an OCI layer. As a result, identical dependencies translate to identical layers and can therefore be shared across different OCI images. This is a big benefit, saving a lot of disk space and network bandwidth and even boosting container startup time. Nix, and even more the self-contained nature of Nixpkgs, ensures sane dependency management, so that we end up with the exact same defined version of the musl library here.
Software Supply Chain: SBOM and SLSA (1/2)
Speaking of dependencies: when looking once more at our example from before, we see that the SBOM is basically our Nix expression. There is no need to scan the output for artifacts and, in particular, no package manager is needed inside or outside of our container to enumerate them. This significantly reduces, if not structurally eliminates, both false positives and false negatives.
Nix also improves the provenance attestation of containers, which are built with it.
As explained for the SBOMs, we get precisely declared inputs, build instructions and fixed-output fetchers like the vendorHash.
Nix builds the container in a sandboxed environment without any network access.
Given the same inputs, we can reproduce the output with Nix, which enables verification and not just attestation.
Another attribute for supply chain attestation is that Nix gives us all transitive dependencies as first-class entities.
Beside runtime dependencies, this also covers build-time dependencies.
Software Supply Chain: SBOM and SLSA (2/2)
How does this look in practice?
There are many options, but I want to quickly present a collection of tools called sbomnix, which we can install like this.
The first tool, named like the collection itself, helps us convert built Nix derivations to SBOMs, here in the SPDX format. We won’t go through this in depth; I just want to illustrate that it includes our well-known dependencies, like the musl library and the CA certificates, among others.
The second tool from this collection, provenance, allows us to compute a SLSA provenance manifest.
We notice that, in addition to the previous SBOM, we get a digest for each dependency, and that build-time dependencies such as the gcc compiler are included as well.
We get all these software supply chain insights basically without any extra effort.
Comparing OCI Image Sizes
Let’s have a look at the bare numbers and compare the OCI image sizes of different building approaches.
We start with the Distroless base image used in a multi-stage build with Docker. The largest fraction of the container image goes to the application. Then there is the layer containing the libc, which for Distroless is glibc. The third part is the CA certificates, which are relatively small. Finally, there is a quite significant amount of other components that are simply unused by our example application. This Distroless-based container image totals almost 16 mebibytes.
Next, we are comparing a multi-stage build with Docker based on Alpine Linux. The size of the layer holding the application is roughly the same, although it is now built against musl, Alpine’s libc implementation. The layer with musl, however, is significantly smaller than the corresponding Distroless layer. The CA certificates have roughly the same size, and the remaining overhead is also smaller than with Distroless. This container image is a bit more than 11 mebibytes.
The next data set reflects the multi-stage build with Docker based on Scratch, exactly as outlined earlier. The application, built against musl, has the exact same size as before. The layer containing the musl library also has roughly the same size, because both originate from Alpine. The same applies to the CA certificates. This approach has no overhead by definition, which makes the container image roughly 8 mebibytes.
When I started this research, I assumed that the Scratch approach would be somewhat the ideal baseline in terms of size. But I think you can already guess where this is going.
So let’s have a look at how the Nix approach performs. Very surprisingly, the application layer is significantly smaller than in all previous approaches, regardless of whether the application was built against musl or glibc. The layer holding the libc is in turn larger than in all previous approaches. The CA certificates are larger too, but this is insignificant relative to the overall size. The overhead of the Nix-based container image comes in second smallest, directly after Scratch, which has none. Thus, this container image totals less than 6 mebibytes.
I find it pretty impressive that the declarative Nix approach has both the best reproducibility and composability on the one hand and the smallest footprint on the other.
Advanced Use Cases: Nix OCI
Before we wrap up, I want to present three advanced use cases. We start with a piece of server software I call Nix OCI, which makes it obsolete to push your OCI artifacts to a registry. When building our well-known Nix expression, we receive an OCI image in the Nix store. This is the server side; on the client side, we want to consume this container, for example in a Pod. The container runtime requests the manifest for the OCI image from the server. Since a corresponding file is present on the server side, it answers successfully, and the client obtains the manifest. To serve it, the server inspects the archive and extracts the manifest layer. The client then requests the same manifest again, now addressed by its digest, and receives the same response. Next, the client requests the configuration blob, and the server responds with it. Afterwards, the client requests the first binary layer, which is delivered by the server, and this continues until eventually all remaining layers are served.
The Nix OCI server is basically a fully-compliant but read-only OCI registry backed solely by a local Nix store. Since it leverages standard OCI protocols, it is very flexible to use.
Advanced Use Cases: Nix-Snapshotter
The second use case is nix-snapshotter, which provides a similar interface for building an image.
It generates a similar OCI image as a tar archive in the Nix store.
When starting a container, we can reference this image with the nix:0 prefix, thanks to the resolvedByNix attribute in the Nix expression.
This gets resolved by an unaltered containerd, which makes use of the nix-snapshotter plugin.
This plugin is aware of the archive’s layers and can provide its contents.
If you can afford installing containerd plugins, nix-snapshotter gives containerd a native understanding of Nix packages, including garbage collection, and makes an OCI registry obsolete.
Advanced Use Cases: Flox Imageless Kubernetes
The last use case I am presenting is Flox Imageless Kubernetes.
Instead of traditional container images, it uses Flox environments.
An example pod could look like this.
Under the hood containerd gets extended by a custom shim, which is indicated by the runtimeClassName in the manifest.
This shim basically performs a flox pull of the environment specified in the respective annotation and prepares the container filesystem.
The wrapped runc then effectively executes a flox activate with the command specified by the pod’s container.
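A hedged sketch of such a pod manifest; the runtime class name, annotation key, and image value here are purely illustrative, not Flox’s documented values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-app
  annotations:
    # hypothetical annotation naming the Flox environment to pull
    flox.example/environment: "myorg/my-environment"
spec:
  # selects the custom containerd shim via a RuntimeClass
  runtimeClassName: flox
  containers:
    - name: app
      # placeholder; the filesystem comes from the Flox environment
      image: "flox"
      # command executed inside the activated Flox environment
      command: ["my-app"]
```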
So the Flox containerd shim abstracts away traditional container images, while you can hold on to the established Kubernetes API.
You can learn more about this approach in Morgan Helton’s upcoming talk at the SCALE conference.
Conclusion (1/3)
Now it’s time to recap what we have learned today:
For Docker, we have seen that Dockerfiles basically consist of a series of imperative steps.
This further detracts from the reproducibility of an already impure build.
We do not find an abstraction of shared dependencies, which would enable us to share them across different container images.
Conclusion (2/3)
Nix, on the other hand, is declarative and reproducible by design. We have learned that Nixpkgs is fully self-contained and provides excellent dependency management. Moreover, it is a large and attractive ecosystem, providing over 120 thousand packages.
With dockerTools, we can transfer all these strengths directly into the OCI ecosystem.
First and foremost, this enables us to define OCI containers declaratively and build them reproducibly.
We are even able to reuse shared dependencies across the boundaries of an OCI image.
For OCI images built with Nix, we can generate precise and reliable SBOMs and attest steps in the build process.
Due to the minimal nature of the OCI images generated with Nix, they are small in size, and we have demonstrated that such a build can even be smaller than those based on Alpine or Scratch.
Conclusion (3/3)
We have seen that the tooling does not need to become complex. In fact, OCI images built with Nix can be a drop-in replacement.
If you want to avoid handing over artifacts to an OCI registry, you can use Nix OCI as an OCI registry serving artifacts based on a local Nix store.
If you control your container runtime, you can leverage nix-snapshotter to bring native understanding of Nix to containerd.
With Flox Imageless Kubernetes you can even run pods directly from reproducible, centrally managed Flox environments.
Questions & Answers
This brings me to the end of my presentation.
If you’d like to go over the slides once more, feel free to scan this QR code.
If you are interested in Nix, you might want to check out the material I have collected on my website.
At SysEleven we are building a cloud-native software supply chain management system based on Nix. If you are interested in this product, feel free to scan this last QR code here.
I am happy to answer some questions.