Eons ago, we announced we were working on Fly Kubernetes. It drummed up enough excitement to prove we were heading in the right direction. So we got to work, taking it from barebones “early access” to a beta release. We’ll be onboarding customers to the closed beta over the next few weeks. Email us at [email protected] and we’ll hook you up.
Fly Kubernetes is the “blessed path”™️ to using Kubernetes backed by Fly.io infrastructure. Or, in simpler terms, it is our managed Kubernetes service. We take care of the complexity of operating the Kubernetes control plane, leaving you with the unfettered joy of deploying your Kubernetes workloads. If you love Fly.io and K8s, this product is for you.
What even is a Kubernete?
So how did this all come to be—and what even is a Kubernete?
You can see more fun details in Introducing Fly Kubernetes.
If you wade through all the YAML and CNCF projects, what’s left is an API for declaring workloads and how they should be accessed.
But that’s not what people usually talk / groan about. It’s everything else that comes along with adopting Kubernetes: a container runtime (CRI), networking between workloads (CNI), which leads to DNS (CoreDNS). Then you layer on Prometheus for metrics and whatever the logging daemon du jour is at the time. Now you get to debate which Ingress—strike that—Gateway API to deploy, and if the next thing has anything to do with a Service Mess, then as they like to say where I live, “bless your heart”.
Finally, there’s capacity planning. You’ve got to decide where your Nodes will run, how many you need, and what they’ll look like in order to run your workloads.
When we began thinking about what a Fly Kubernetes Service could look like, we started from first principles, as we do with most everything here. The best way we can describe it is the scene from Iron Man 2 when Tony Stark discovers a new element. As he’s looking at the knowledge left behind by those who came before, he starts to imagine something entirely different and more capable than could have been accomplished previously. That’s what happened to JP, but with K3s and Virtual Kubelet.
OK then, WTF (what’s the FKS)?
We looked at what people need to get started—the API—and then started peeling away all the noise, filling in the gaps to connect everything together. Here’s how this looks currently:
- Containerd/CRI → flyd + Firecracker + our init: our system transmogrifies Docker containers into Firecracker microVMs
- Networking/CNI → Our internal WireGuard mesh connects your pods together
- Pods → Fly Machines VMs
- Secrets → Secrets, only not the base64’d kind
- Services → The Fly Proxy
- CoreDNS → CoreDNS (to be replaced with our custom internal DNS)
- Persistent Volumes → Fly Volumes (coming soon)
Now…not everything is a one-to-one comparison, and we explicitly did not set out to support any and every configuration. We aren’t dealing with resources like NetworkPolicy and init containers, though we’re also not completely ignoring them. By mapping many of the core primitives of Kubernetes to a Fly.io resource, we’re able to focus on continuing to build the primitives that make our cloud better for workloads of all shapes and sizes.
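To make the mapping concrete, here’s a sketch of what that looks like from the user’s side. This manifest is purely illustrative (the names and ports are made up, and kuard’s 8080 is just its usual port): on FKS, the Pod would land as a Fly Machine, and the Service’s routing would be handled by the Fly Proxy rather than kube-proxy.

```yaml
# Illustrative only: a plain Pod that FKS would run as a Fly Machine.
apiVersion: v1
kind: Pod
metadata:
  name: kuard
  labels:
    app: kuard-fks
spec:
  containers:
    - name: kuard
      image: ghcr.io/jipperinbham/kuard-amd64:blue
      ports:
        - containerPort: 8080
---
# A Service over that Pod; per the mapping above, traffic to it
# is routed by the Fly Proxy instead of kube-proxy.
apiVersion: v1
kind: Service
metadata:
  name: kuard
spec:
  selector:
    app: kuard-fks
  ports:
    - port: 80
      targetPort: 8080
```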
A key thing to notice above is that there’s no “Node”.
Virtual Kubelet plays a central role in FKS. It’s magic, really. A Virtual Kubelet acts as if it’s a standard Kubelet running on a Node, eager to run your workloads. However, there’s no Node backing it. It instead behaves like an API, receiving requests from Kubernetes and transforming them into requests to deploy on a cloud compute service. In our case, that’s Fly Machines.
So what we have is Kubernetes calling out to our Virtual Kubelet provider, a small Golang program we run alongside K3s, to create and run your pod. It creates your pod as a Fly Machine, via the Fly Machines API, deploying it to any underlying host within that region. This shifts the burden of managing hardware capacity from you to us. We think that’s a cool trick—thanks, Virtual Kubelet magic!
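The translation step can be sketched in Go, the language the provider is written in. To be clear, this is a toy, not the real FKS provider: the struct fields and JSON shape below are assumptions standing in for the actual Machines API schema, and a real provider would implement the Virtual Kubelet provider interface and call the API over HTTP.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// PodSpec is a stripped-down stand-in for corev1.PodSpec.
type PodSpec struct {
	Name       string
	Image      string
	Env        map[string]string
	GuestCPUs  int
	GuestMemMB int
}

// MachineConfig mirrors the rough shape of a Fly Machines create
// request. Field names here are illustrative, not the exact schema.
type MachineConfig struct {
	Name   string `json:"name"`
	Config struct {
		Image string            `json:"image"`
		Env   map[string]string `json:"env,omitempty"`
		Guest struct {
			CPUs     int `json:"cpus"`
			MemoryMB int `json:"memory_mb"`
		} `json:"guest"`
	} `json:"config"`
}

// toMachine maps a pod spec onto a Machine create payload — the
// core of what a Virtual Kubelet provider does for each pod.
func toMachine(p PodSpec) MachineConfig {
	var m MachineConfig
	m.Name = p.Name
	m.Config.Image = p.Image
	m.Config.Env = p.Env
	m.Config.Guest.CPUs = p.GuestCPUs
	m.Config.Guest.MemoryMB = p.GuestMemMB
	return m
}

func main() {
	pod := PodSpec{
		Name:       "kuard",
		Image:      "ghcr.io/jipperinbham/kuard-amd64:blue",
		GuestCPUs:  1,
		GuestMemMB: 256,
	}
	body, _ := json.MarshalIndent(toMachine(pod), "", "  ")
	fmt.Println(string(body))
}
```

The real provider does this in reverse too, reporting Machine state back to Kubernetes as pod status, which is how the cluster stays convinced the “Node” is healthy.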
Speedrun
You can deploy your workloads (including GPUs) across any of our available regions using the Kubernetes API.
You create a cluster with flyctl:
fly ext k8s create --name hello --org personal --region iad
When a cluster is created, it has the standard default namespace. You can inspect it:
kubectl get ns default --show-labels
NAME STATUS AGE LABELS
default Active 20d fly.io/app=fks-default-7zyjm3ovpdxmd0ep,kubernetes.io/metadata.name=default
The fly.io/app label shows the name of the Fly App that corresponds to your cluster.
It would seem appropriate to deploy the Kubernetes Up And Running demo here, but since your pods are connected over an IPv6 WireGuard mesh, we’re going to use a fork with support for IPv6 DNS.
kubectl run \
--image=ghcr.io/jipperinbham/kuard-amd64:blue \
--labels="app=kuard-fks" \
kuard
And you can see its Machine representation via:
fly machine list --app fks-default-7zyjm3ovpdxmd0ep
ID NAME STATE REGION IMAGE IP ADDRESS VOLUME CREATED LAST UPDATED APP PLATFORM PROCESS GROUP SIZE
1852291c46ded8 kuard started iad jipperinbham/kuard-amd64:blue fdaa:0:48c8:a7b:228:4b6d:6e20:2 2024-03-05T18:54:41Z 2024-03-05T18:54:44Z shared-cpu-1x:256MB