WebAssembly is ideal for cloud-native apps. A shift from Krustlets to runwasi should simplify managing Wasm nodes in Azure Kubernetes Service.
It's interesting to see how cloud-native runtimes are evolving. Although containers make it simple for applications to bring their own runtimes to clouds, and offer effective isolation from other applications, they don't offer everything we want from a secure application sandbox. Bringing your own userland solves a lot of problems, but the isolation it provides is horizontal, not vertical: Container applications still get access to host resources.
That's why WebAssembly (often shortened to Wasm) has become increasingly important. WebAssembly builds on the familiar JavaScript runtime to provide a sandbox for both server-facing and user-facing code. Binaries written in familiar languages, including the memory-safe and type-safe Go and Rust, can run on Wasm in the browser, or use WASI (the WebAssembly System Interface) to run as native applications that don't need a browser host.
There are some similarities between WASI and Node.js, but the biggest difference is perhaps the most important: You're not limited to working in JavaScript. WASI doesn't give you all the APIs you might expect from a runtime like .NET or Java, but it's evolving fast, giving you a way to run the same code on everything from Raspberry Pi-class devices at the edge to hyperscale clouds, on both x64 and Arm hardware. With only one compiler and one development platform, you can use familiar tools in familiar ways.
WebAssembly in Kubernetes
Wasm and WASI have advantages over working with containers: Applications can be small, start quickly, and run at near-native speeds. The Wasm sandbox is more secure, too, as code must be explicitly granted access to resources outside it.
Each year at the Cloud Native Computing Foundation's KubeCon, the Wasm Day pre-conference gets bigger and bigger, with content that's beginning to cross over into main conference sessions. That's because WebAssembly is seen as a payload for containers, a way of programming sidecar services such as service meshes, and an alternative way to deliver and orchestrate workloads on edge devices. By providing a common runtime for Kubernetes based on its own sandbox, WebAssembly adds an extra layer of isolation and security for your code, much like Hyper-V's secured container environment, which runs containers in their own virtual machines on thin Windows or Linux hosts.
By orchestrating Wasm code through Kubernetes technologies such as Krustlets and WAGI, you can start to use WebAssembly code in your cloud-native environments. Although these experiments run Wasm directly, an alternative approach that runs WASI modules through containerd is now available in Azure Kubernetes Service.
Containerd makes it easier to run WASI
This new approach takes advantage of how Kubernetes' underlying containerd runtime works. When you're using Kubernetes to orchestrate container nodes, containerd would normally use a shim to launch runc and run a container. With this high-level approach, containerd can support other runtimes through their own shims, so alternatives to containers can be controlled via the same APIs.
The container shim API in containerd is simple enough. When you create a container for use with containerd, you specify the runtime you're planning to use by its name and version; this can also be configured using a path to a runtime binary. Containerd then launches the shim as a process whose name carries a containerd-shim- prefix, so you can see which shims are running and control them with standard command-line tools.
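As a rough illustration, containerd's own ctr tool makes that naming convention visible. The nginx image here is just a convenient placeholder, the commands need to run as root (or via sudo), and the ps output is abbreviated:

ctr image pull docker.io/library/nginx:latest
ctr run -d --runtime io.containerd.runc.v2 docker.io/library/nginx:latest demo
ps -ef | grep containerd-shim
# containerd-shim-runc-v2 -namespace default -id demo -address /run/containerd/containerd.sock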
Containerd's adaptive architecture explains why removing Dockershim from Kubernetes was important, as having multiple shim layers would have added complexity. A single self-describing shim process makes it easier to identify the runtimes currently in use, allowing you to update runtimes and libraries as necessary.
Runwasi: a containerd shim for WebAssembly
It's relatively easy to write a shim for containerd, enabling Kubernetes to control a much wider selection of runtimes and runtime environments beyond the familiar container. The runwasi shim used by Azure takes advantage of this, acting as a simple WASI host and using a Rust library to handle integration with containerd or the Kubernetes CRI (Container Runtime Interface) tooling.
Although runwasi is still alpha-quality code, it's an interesting alternative to other ways of running WebAssembly in Kubernetes, as it treats WASI code like any other pod in a node. Runwasi currently offers two different shims, one that runs per pod and one that runs per node. The latter shares a single WASI runtime across all the pods on a node, hosting multiple Wasm sandboxes.
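On a node, those shims follow the same containerd-shim- naming convention described above, so installing one is largely a matter of placing the binary alongside the runc shim and registering it as a runtime. A quick sketch of what that might look like, with the caveat that the exact binary names depend on the runwasi release you build:

ls /usr/local/bin | grep containerd-shim
# containerd-shim-runc-v2
# containerd-shim-wasmtime-v1   (illustrative; check your runwasi build output)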
Microsoft is using runwasi to replace Krustlets in its Azure Kubernetes Service. Although Krustlet support still works, the recommendation is to switch to the new workload management tool by moving WASI workloads to a new Kubernetes nodepool. For now, runwasi is a preview, which means it's an opt-in feature and not recommended for use in production.
Using runwasi for WebAssembly nodes in AKS
The service uses feature flags to control what you're able to use, so you'll need the Azure CLI to enable access. Start by installing the aks-preview extension to the CLI, and then use the az feature register command to enable the WasmNodePoolPreview flag.
az feature register --namespace "Microsoft.ContainerService" --name "WasmNodePoolPreview"
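Registration can take a few minutes to propagate. In rough outline, the surrounding steps follow the usual flow for AKS preview features (which may change as the preview evolves):

az extension add --name aks-preview
az extension update --name aks-preview
# wait until the state reports Registered
az feature show --namespace "Microsoft.ContainerService" --name "WasmNodePoolPreview" --query properties.state
# then refresh the resource provider so the flag takes effect
az provider register --namespace Microsoft.ContainerService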
The service currently supports both the Spin and slight application frameworks. Spin is Fermyon's event-driven microservice framework with Go and Rust tools, and slight (short for SpiderLightning) comes from Microsoft's Deis Labs, with Rust and C support for common cloud-native design patterns and APIs. Both are built on top of the wasmtime WASI runtime from the Bytecode Alliance. Wasmtime support ensures that it's possible to work with tools like Windows Subsystem for Linux to build and test Rust applications on a desktop development PC, ready for AKS's Linux environment.
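As a sketch of that local inner loop (the hello.wasm file name is just a placeholder for whatever your crate builds), a Rust module can be compiled for the wasm32-wasi target and run under wasmtime before it goes anywhere near a cluster:

rustup target add wasm32-wasi
cargo build --release --target wasm32-wasi
wasmtime run target/wasm32-wasi/release/hello.wasm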
Once you've configured AKS to support runwasi, you can add a WASI nodepool to an AKS cluster, connect to it with kubectl, and configure the runtime class for wasmtime and your chosen framework. You can then deploy a workload built for wasm32-wasi and run it. This is still preview code, so you have to do a lot from the command line. As runwasi evolves, expect to see Azure Portal tools and integration with package deployment services, ensuring applications can deploy and run quickly.
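In broad strokes, that sequence looks something like the following. The resource group, cluster, and nodepool names are placeholders, and the workload-runtime flag and spin handler name reflect the current preview documentation, so check them against the release you're using:

az aks nodepool add --resource-group myResourceGroup --cluster-name myAKSCluster --name mywasipool --node-count 1 --workload-runtime WasmWasi
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
kubectl get nodes -o wide

# define a runtime class that maps to the Spin shim on the WASI nodes
kubectl apply -f - <<EOF
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasmtime-spin
handler: spin
EOF

Workloads then opt in by setting runtimeClassName: wasmtime-spin in their pod spec.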
This should be an ideal environment for tools like Bindle, ensuring that appropriate workload versions and artifacts are deployed on appropriate clusters. Code can run on edge Kubernetes and on hyperscale instances like AKS, with the right resources for each instance of the same application.
Previews like this are good for Azure's Kubernetes tools. They let you experiment with new ways of delivering services as well as new runtime options. You get the opportunity to build toolchains and CI/CD pipelines, preparing for when WASI becomes a mature technology ready for enterprise workloads.
It's not purely about the technology. Interesting long-term benefits come with using WASI as an alternative to containers. As cloud providers such as Azure transition to offering dense Arm physical servers, a relatively lightweight runtime environment like WASI can put more nodes on a server, helping reduce the amount of power needed to host an application at scale and keeping compute costs to a minimum. Faster, greener code could help your business meet sustainability goals.