Creating a ClickHouse Cluster on Raspberry Pis

Want a hands-on way to explore Kubernetes and ClickHouse®—without spinning up cloud VMs? In this post, we’ll build a home-lab cluster of Raspberry Pi 5 boards that mimics a high-availability setup. Whether you’re a cloud-native developer looking to broaden your bare-metal and networking skills or simply a tinkerer who loves pushing Pi hardware to its limits, this project offers a fun, cost-effective way to get practical Kubernetes experience in your own home. We’ll cover everything from preparing the Pis and installing K3s to spinning up a replicated ClickHouse cluster managed by the Altinity Operator for ClickHouse.

Why Do This?

Besides the inherent nerdiness of it, there are some practical reasons:

  • I’m approaching this project with a background as a cloud-native app developer. That means I’ve rarely dealt with bare-metal or on-premises networking. Building this cluster is a great way to learn about concepts I typically don’t see when using cloud services.
  • It’s also handy to have a Kubernetes cluster available for experimentation without waiting for new AWS VMs to provision. Tools like vCluster make this even more attractive for tasks such as quickly testing Helm charts.

Overview

  1. The Hardware
  2. Prepping the Pis
  3. Installing K3s
    1. Installing the Control Plane
    2. Connecting to K3s
    3. Installing Cilium
    4. Joining the Worker Nodes
  4. Installing ClickHouse
    1. Installing the Altinity Operator
    2. Creating a ClickHouse Cluster with Helm

The Hardware

For this tutorial, we’re using three Raspberry Pi 5 (8GB) models. You could certainly use any other ARM- or x86-based System-on-Chip (SoC) or mini PC—K3s is widely compatible as a lightweight Kubernetes distribution. We specifically chose Raspberry Pi 5 boards because of the availability of a PCIe port. Combined with a compatible HAT, this lets us use NVMe drives at relatively high speeds—much faster than what’s possible with a MicroSD card.

Bill of Materials:

  • 3× Raspberry Pi 5 (8GB model)
  • 3× Geekworm X1001 PCIe-to-M.2 HAT
  • 3× Official Raspberry Pi 5 Active Coolers
  • 1× 300W USB-C power supply*
  • 1× MicroSD card (8GB–32GB)**
  • Optional: Micro-HDMI to HDMI adapter, monitor, USB keyboard, mouse

*The Raspberry Pi 5 deviates slightly from the standard USB-C spec and may report undervoltage if you use a typical USB-C supply. Even though this cluster will only draw ~30–60W at peak, I used a 300W power supply because it was the only one I found that could deliver more than 15 watts on multiple ports simultaneously. Official Raspberry Pi power supplies are a good choice when reliability is crucial.

**We’ll just use the MicroSD card to flash the OS onto each NVMe drive. Larger cards will take longer to copy the image, so if you can find an 8GB card, that’s ideal.

A Note on Memory

We’re using the 8GB model, but if you have the newer 16GB version, more RAM is always helpful. You may struggle to run a multi-node K3s setup with ClickHouse on 4GB or less, though with some tuning, it can be done.

Step 1: Set Up the Raspberry Pis

First, we’ll set up our Raspberry Pi 5 boards so they can boot from their NVMe drives rather than a standard microSD card. This gives us improved performance and reliability.

  1. Install Raspbian Lite on an SD card
    • Grab the latest Raspbian (Lite) image from the official Raspberry Pi website.
    • Flash it onto a microSD card using your favorite tool (e.g., balenaEtcher, Raspberry Pi Imager, or dd on Linux).
    • The Raspberry Pi Imager is the preferred tool, since it lets you preload the image with your username and password and/or SSH public key.
    • If you use the Raspberry Pi Imager, be sure to also enable SSH under the `Services` tab.
  2. Boot the Pi
    • Insert the flashed SD card into the Pi.
    • Connect the Pi to power and wait for it to boot up.
    • If you’re using a monitor and keyboard, you can log in directly. Otherwise, you can SSH in after identifying the Pi’s IP address on your local network (typically you can do this by checking the client list on your router’s control panel).
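If you don’t have access to your router’s admin panel, a quick subnet scan from another machine also works. A minimal sketch, assuming a 192.168.1.0/24 home network and that nmap is installed (adjust the range to match your LAN):

# Ping-scan the subnet; the Pi should show up by hostname, or by its Raspberry Pi MAC vendor if run with sudo
nmap -sn 192.168.1.0/24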
  3. Copy the disk image to the NVMe
    • We’ll clone the SD card’s filesystem onto the attached NVMe drive using dd.

Be cautious with this command—it’s destructive. Double-check your source (the SD card, typically /dev/mmcblk0) and destination (the NVMe drive, typically /dev/nvme0n1) before running, and clone the whole device rather than a single partition. Here’s the basic form:

sudo dd if=/dev/mmcblk0 of=/dev/nvme0n1 bs=32M
  • This can take a while, depending on the size of your SD card and the performance of your drives. Larger SD cards will take longer, so an ~8GB SD card is best.
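Before powering down, it’s worth sanity-checking that the partitions were actually copied to the NVMe drive:

# The NVMe should now show the same boot and root partitions as the SD card
lsblk /dev/nvme0n1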
  4. Shut down, remove the SD card, and reboot
    • Once the dd operation completes, power down the Pi, remove the SD card, and power the Pi back on.
    • The Pi should now boot from the NVMe drive.
  5. Change the Pi hostname
    • Assign each Pi a unique hostname so you can tell them apart. You can do this via sudo raspi-config or by editing /etc/hostname and /etc/hosts.
    • For example, name them pi1, pi2, and pi3.
  6. Expand the filesystem

After cloning from the SD to the NVMe, you will have unused disk space. Use raspi-config to expand the root file system to use all space available on the NVMe:

sudo raspi-config --expand-rootfs
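After the next reboot, you can confirm that the root filesystem now spans the full drive:

# The root filesystem should report (roughly) the NVMe's full capacity
df -h /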
  7. Enable cgroups

Kubernetes requires certain cgroup settings to be enabled. Edit /boot/firmware/cmdline.txt (or /boot/cmdline.txt, depending on your distribution) and append the following parameters:

cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory
  • Make sure it’s on the same line as the other parameters, just separated by spaces.
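If you prefer to script this step, something like the following appends the parameters to the single-line kernel command line (the path assumes a current Raspberry Pi OS image; adjust it if yours uses /boot/cmdline.txt):

# Append the cgroup parameters to the end of the kernel command line, then verify
sudo sed -i '1 s/$/ cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory/' /boot/firmware/cmdline.txt
cat /boot/firmware/cmdline.txt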
  8. Update the Pi firmware

This brings the firmware up to date; the new cgroup settings themselves take effect after the next reboot:

sudo rpi-update
  9. Reboot the Pi
    • A quick sudo reboot will apply all the changes.
    • Repeat these steps for each Pi in your cluster (pi1, pi2, pi3).

Step 2: Install K3s

Now that our Pis are prepped, we’ll install a minimal Kubernetes distribution: K3s from Rancher [LINK]. We’ll set up one Pi as the control-plane node and the others as worker nodes.

  1. Install the control-plane node on pi1

SSH into pi1 and run:

curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server" sh -s - --flannel-backend none
  • We’re disabling Flannel here (`--flannel-backend none`) because we’ll use the Cilium CNI in a later step.
  2. Set up pi2 and pi3 as worker nodes

On pi1, retrieve the K3S token from:

/var/lib/rancher/k3s/server/node-token
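The token is only readable by root, so print it with sudo:

# Run on pi1: prints the join token used by the worker nodes
sudo cat /var/lib/rancher/k3s/server/node-token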

SSH into pi2 (and similarly pi3) and run:

curl -sfL https://get.k3s.io | K3S_URL=https://pi1:6443 K3S_TOKEN=[YOUR_TOKEN] sh -s -
  • Replace [YOUR_TOKEN] with the actual token you grabbed from pi1. This command tells your worker nodes how to join the pi1 control plane.
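After a minute or so you can verify from pi1 that both workers have registered. Don’t worry if the nodes report NotReady at this point; that clears up once a CNI (Cilium, installed below) is running:

# Run on pi1: should list pi1, pi2, and pi3
sudo k3s kubectl get nodes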
  3. Copy the k3s.yaml and set it as your kubeconfig

On your local machine, do:

scp user@pi1:/etc/rancher/k3s/k3s.yaml ./k3s.yaml

You will need to edit the file and replace `https://127.0.0.1:6443` with `https://pi1:6443`, then set the file as your kubeconfig temporarily (you could also merge it into your default kubeconfig, as sketched below):

export KUBECONFIG=./k3s.yaml
  • Now you can manage the cluster from your local machine using kubectl commands.
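If you’d rather merge the file into your default kubeconfig than export KUBECONFIG every time, here’s a rough sketch (note that K3s names its cluster, user, and context `default`, which may clash with an existing entry, so back up your config first):

# Back up the existing kubeconfig, then merge in the k3s entries
cp ~/.kube/config ~/.kube/config.bak
KUBECONFIG=~/.kube/config:./k3s.yaml kubectl config view --flatten > /tmp/merged-kubeconfig
mv /tmp/merged-kubeconfig ~/.kube/config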
  4. Install Cilium as your CNI

We’re installing Cilium for networking and load-balancing features. We enable L2 announcements so that later we can create LoadBalancer services whose External IPs come from a pool on our home subnet and are announced on the local network via ARP (see the sketch after the Helm command):

helm repo add cilium https://helm.cilium.io/
helm upgrade --install cilium cilium/cilium --version 1.16.5 \
  --namespace kube-system \
  --set l2announcements.enabled=true \
  --set k8sClientRateLimit.qps=30 \
  --set k8sClientRateLimit.burst=60 \
  --set kubeProxyReplacement=true \
  --set k8sServiceHost=pi1 \
  --set k8sServicePort=6443
  • Make sure you have Helm installed and that your helm CLI is pointed at the correct cluster (KUBECONFIG=./k3s.yaml).
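Enabling l2announcements on its own isn’t quite enough: Cilium also needs to know which addresses it may assign to LoadBalancer services and announce on the local network. A minimal sketch using the Cilium 1.16 CRDs; the IP range and interface name are assumptions, so adjust them to match your LAN (and check the Cilium docs for the exact fields in your version):

kubectl apply -f - <<'EOF'
# Addresses Cilium may hand out to LoadBalancer services (example range)
apiVersion: cilium.io/v2alpha1
kind: CiliumLoadBalancerIPPool
metadata:
  name: home-lab-pool
spec:
  blocks:
    - start: 192.168.1.240
      stop: 192.168.1.250
---
# Announce those addresses on the local L2 segment via ARP
apiVersion: cilium.io/v2alpha1
kind: CiliumL2AnnouncementPolicy
metadata:
  name: home-lab-l2
spec:
  loadBalancerIPs: true
  interfaces:
    - eth0
EOF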

Step 3: Install the Altinity Operator

With Kubernetes in place, it’s time to install the Altinity Operator for ClickHouse. This operator makes ClickHouse deployments a breeze.

kubectl apply -f https://raw.githubusercontent.com/Altinity/clickhouse-operator/master/deploy/operator/clickhouse-operator-install-bundle.yaml

That’s it! Once this completes, the operator will be ready to manage ClickHouse instances within your cluster.
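Before moving on, you can check that the operator pod is running (the install bundle places it in the kube-system namespace by default):

# The clickhouse-operator pod should show Running
kubectl get pods -n kube-system | grep clickhouse-operator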

Step 4: Create a Replicated ClickHouse Cluster

We’ll now create a simple replicated ClickHouse setup using Altinity’s Helm chart.

  1. Add the Altinity Helm repo and install

helm repo add altinity https://helm.altinity.com
helm install clickhouse-dev --create-namespace --namespace clickhouse altinity/clickhouse \
  --set keeper.enabled=true \
  --set clickhouse.replicasCount=2
  • We’re enabling ClickHouse Keeper (`keeper.enabled=true`) to handle the coordination tasks that ZooKeeper would normally manage.
  • We set replicasCount=2 for a small replicated setup; adjust it if you want more replicas. By default, the three Keeper replicas each require a unique node. You can also force the ClickHouse instances onto unique nodes by setting `clickhouse.antiAffinity=true`.
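It can take a few minutes for the Keeper and ClickHouse pods to start. Watching the namespace is the simplest way to follow progress, and the operator also reports a status on the ClickHouseInstallation resource (short name `chi`):

# Watch the ClickHouse and Keeper pods come up
kubectl get pods -n clickhouse -w

# Check the status the operator reports for the installation
kubectl get chi -n clickhouse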
  2. Test Queries
    Once the installation completes, you can run a few queries to confirm that your cluster is healthy:
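The easiest way to get a SQL prompt is to exec into one of the ClickHouse pods and use the bundled clickhouse-client. The pod name below is a placeholder; depending on the chart’s defaults you may also need to pass a user and password:

# List the ClickHouse pods the operator created
kubectl get pods -n clickhouse

# Open an interactive client session (replace <clickhouse-pod> with a real pod name)
kubectl exec -n clickhouse -it <clickhouse-pod> -- clickhouse-client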

Check Zookeeper state (via Keeper):

SELECT * FROM system.zookeeper WHERE path = '/'

Create a replicated table:

CREATE TABLE IF NOT EXISTS test_rep ON CLUSTER `{cluster}`
(
  `number` UInt32,
  `created_at` DateTime DEFAULT now()
)
ENGINE = ReplicatedMergeTree
ORDER BY number;

Insert some test data:

INSERT INTO test_rep (number)
SELECT number
FROM system.numbers
LIMIT 10;

Verify replication across nodes:

SELECT hostName(), *
FROM clusterAllReplicas('{cluster}', default.test_rep)
ORDER BY 1 ASC, 2 ASC;
  • You should see the same data from each replica, along with its hostName.
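If you also want to reach ClickHouse from your workstation, port-forwarding is the quickest check. The service name below is illustrative; list the services in the clickhouse namespace to find the real one, and add credentials to the curl call if your chart values require them:

# Find the ClickHouse service created by the chart
kubectl get svc -n clickhouse

# Forward the HTTP interface locally (leave this running), then in another terminal:
kubectl port-forward -n clickhouse svc/<clickhouse-service> 8123:8123
curl 'http://localhost:8123/?query=SELECT%20version()'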

That’s it! You now have a functioning Kubernetes home lab running ClickHouse in a replicated setup. Congratulations on making it this far—enjoy all the benefits of a local high-availability environment, plus the added satisfaction of doing it on Raspberry Pi 5s.