I’ve found a lot of guides on how to get Kubernetes set up on Raspberry Pis (Raspberry Pi 4 specifically), but I haven’t found any recently that go through the process using containerd instead of Docker. I recently worked through this process and thought I’d share my experience.
Before we get started, I wanted to talk about my current setup a little bit as it’s not 100% the latest/stock.
- Firstly, I’m running Ubuntu 20.04 Server on each of my Raspberry Pi nodes. I’m currently holding off on upgrading to 22.04 as when we did that at work, it broke our Kubernetes cluster. I currently don’t want to fight with that at home. Hopefully, the build instructions will work on 22.04 as a fresh install, but YMMV.
- Secondly, I’ve gone through the process of booting my Raspberry Pis from SSD instead of microSD cards — I’ll make a post about how I did that in the future. This shouldn’t change the build procedure, but I wanted to call it out.
Preparation and prerequisites
What you’ll need:
- Raspberry Pis – ideally you’d want at least three. This way you can have one control-plane and two workers.
- Storage – No matter how you’re booting your Pis, you’ll want to make sure you’ve got a decent amount of free space for ephemeral storage (this is the space containers will use as “local” storage unless you’re mounting persistent storage somewhere else, like via NFS). For SD cards, 64GB or 128GB should suffice. For SSDs, the same sizes apply, but they’re so cheap now that I’ve been going with 250GB drives just because.
- Networking – you’ll want these Pis to be able to talk to each other and the internet (you need to get your container images from somewhere).
- (Optional) If you’ve got a PoE switch and PoE hats for your Pis, I’d recommend going this route. It really simplifies the cable management.
Assuming you’ve got all the equipment and the OS installed, you’ll want to go through some prep work.
- Disable swap and remove it from /etc/fstab:
  - From the command line run:
sudo swapoff -a
  - Remove the swap mount from /etc/fstab.
From this:
/dev/sda1 /    ext4 defaults 1 1
/dev/hda1 /usr ext4 defaults 1 1
/dev/sda5 swap swap defaults 0 0
To this:
/dev/sda1 /    ext4 defaults 1 1
/dev/hda1 /usr ext4 defaults 1 1
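  - If you want to double-check that swap is really off: swapon prints nothing when no swap is active, and free should report 0B of swap:
swapon --show
free -h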
- Remove Docker if it’s already present:
sudo apt-get remove docker docker-engine docker.io containerd runc
sudo snap remove docker
- Ensure the Docker repository is added and enabled. This sounds unintuitive: why would we want to make sure the Docker repo is added if we’re removing docker and using containerd? For Ubuntu, containerd is distributed by Docker, not the containerd project. You can read about it here.
  - Install pre-requisites:
sudo apt update && sudo apt install ca-certificates curl gnupg lsb-release
  - Ensure the keyrings directory exists:
sudo mkdir -p /etc/apt/keyrings
  - Add the Docker repo keyring:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
  - Add the Docker repo:
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
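  - (Optional sanity check) If you want to confirm apt can now see the containerd.io package from the Docker repo before installing it, something like this should list a candidate version:
sudo apt update
apt-cache policy containerd.io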
- Install containerd:
sudo apt update && sudo apt install -y containerd.io
- Ensure the containerd config directory exists:
sudo mkdir -p /etc/containerd
- Ensure the containerd config has the default values:
containerd config default | sudo tee /etc/containerd/config.toml
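  - A common gotcha worth knowing about (not strictly part of this walkthrough): if your kubelet ends up using the systemd cgroup driver, which is the kubeadm default on recent versions, containerd should have SystemdCgroup = true under the runc runtime options in that config.toml, otherwise pods can crash-loop. If you hit that, something like this flips the setting:
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd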
- Ensure the required modules are loaded at boot:
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
- If you’re going to try to run things before rebooting, go ahead and modprobe those two modules now:
sudo modprobe overlay
sudo modprobe br_netfilter
- Restart the containerd service:
sudo systemctl restart containerd
- Enable cgroups limit support by appending the required parameters to the kernel command line (this takes effect after a reboot):
sudo sed -i '$ s/$/ cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1 swapaccount=1/' /boot/firmware/cmdline.txt
- Configure iptables to see bridged traffic:
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
- Apply the configuration without requiring a reboot:
sudo sysctl --system
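Before moving on, you can sanity-check the prep work: the two kernel modules should be loaded and the bridge sysctls should both report 1.
lsmod | grep -E 'overlay|br_netfilter'
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables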
Kubernetes time!
Keep in mind that all the example commands here are for Ubuntu 20.04. I’ll leave notes where things need to change for newer versions.
- Add the Kubernetes repo keyring:
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
- Add the Kubernetes repo:
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
  - If you are using a newer version of Ubuntu, check the official docs to see if there’s a new repo. For Ubuntu 18.04 – 22.04 the kubernetes-xenial repo is what you use; after 22.04, there may be new repos. Note that the legacy apt.kubernetes.io repository has been deprecated in favor of the community-owned pkgs.k8s.io repositories, so check the official install docs if these commands stop working.
- Install kubelet, kubeadm, and kubectl:
sudo apt update && sudo apt install -y kubelet kubeadm kubectl
- Mark the packages as held. We don’t want them to automatically update as there’s a specific upgrade procedure you need to follow when upgrading a cluster.
sudo apt-mark hold kubelet kubeadm kubectl
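  - If you want to confirm what actually got installed (the exact versions will depend on when you run this):
kubeadm version -o short
kubelet --version
kubectl version --client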
- Configure containerd as the container runtime:
  - Edit /var/lib/kubelet/kubeadm-flags.env with your editor of choice and add the containerd runtime flags:
KUBELET_KUBEADM_ARGS="--pod-infra-container-image=k8s.gcr.io/pause:3.9 --container-runtime-endpoint=unix:///run/containerd/containerd.sock"
  - The important bit is --container-runtime-endpoint=unix:///run/containerd/containerd.sock
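  - If you want to verify that containerd is actually listening on that socket, crictl (pulled in as a dependency of kubeadm via cri-tools) can talk to it directly:
sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version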
- Ensure IPv4 forwarding is enabled:
  - If cat /proc/sys/net/ipv4/ip_forward does not return 1, run the next command:
# as root
echo 1 > /proc/sys/net/ipv4/ip_forward
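  - Writing to /proc like this only lasts until the next reboot. To make it permanent, you could also append the setting to the sysctl file created earlier:
echo "net.ipv4.ip_forward = 1" | sudo tee -a /etc/sysctl.d/k8s.conf
sudo sysctl --system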
Configuring the control-plane node
At this point all of your nodes should have the required software installed. Now we can start building the cluster. There are essentially two types of nodes: master/control-plane nodes and workers. You’ll see control-plane nodes referred to as master as well; the project has been moving away from that term in favor of control-plane. Control-plane nodes handle management of the cluster, deciding which containers run on which nodes. Typically, control-plane nodes don’t run your application containers, just infrastructure-level ones. They’re busy and important, ya know.
- Generate a token:
TOKEN=$(sudo kubeadm token generate)
- Store this somewhere if you want. I have, but I’ve never run into a reason where I needed it again.
- Initialize the cluster:
sudo kubeadm init --token=${TOKEN} --pod-network-cidr=10.244.0.0/16 --control-plane-endpoint "<your control-plane ip>:<port>" --upload-certs
  - Make sure you update your --control-plane-endpoint to the IP of the control-plane node you’re on.
  - You can configure the --pod-network-cidr to whatever you want, as long as it doesn’t already exist on your network. For most people the default should work.
  - Take note of the join commands it outputs, you’ll use these later.
  - The output should look something like this:
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a Pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  /docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

kubeadm join <control-plane-host>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>
  - For control-planes it should look something like this:
kubeadm join 192.168.2.114:6443 --token zqqoy7.9oi8dpkfmqkop2p5 \
    --discovery-token-ca-cert-hash sha256:71270ea137214422221319c1bdb9ba6d4b76abfa2506753703ed654a90c4982b \
    --control-plane --certificate-key f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07
  - If you don’t see a command for joining a control plane, you can construct your own with the following:
kubeadm init phase upload-certs --upload-certs
kubeadm token create --print-join-command
  - Take the output of each command and construct your join command by taking the output from kubeadm token create --print-join-command and appending --control-plane --certificate-key <key-value>
  - For workers it should look something like this:
kubeadm join 192.168.2.114:6443 --token zqqoy7.9oi8dpkfmqkop2p5 \
    --discovery-token-ca-cert-hash sha256:71270ea137214422221319c1bdb9ba6d4b76abfa2506753703ed654a90c4982b
- Make sure you can run kubectl commands without having to be root or use sudo everywhere:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
  - This will copy the administrative config to your user. This shouldn’t be openly shared.
- Install flannel as the CNI (Container Network Interface):
curl -sSL https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml | kubectl apply -f -
Joining subsequent control-plane nodes to the cluster
Once the main control-plane node has initialized the cluster, you can start joining nodes to it. Depending on the type of node you’re joining, you’ll use different commands. Refer back to the join commands kubeadm init gave you. You’ll run the control-plane join command, as root or with sudo, on each additional control-plane node in your cluster.
Joining the worker nodes to the cluster
On each worker node, run the worker join command kubeadm init gave you as root or with sudo.
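Worker nodes show up with no role by default (you’ll see <none> in the ROLES column below). If you’d like them to display a worker role, you can label them; this is purely cosmetic, and the node name here is just an example:
kubectl label node kube-node3 node-role.kubernetes.io/worker=worker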
Validating your cluster
Now that you’ve joined all your nodes, you can check to make sure they’re all reporting back correctly and are in a Ready state!
kubectl get nodes -o wide
The output should look something like this:
NAME         STATUS   ROLES                  AGE     VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
kube-node1   Ready    control-plane,master   7d20h   v1.27.3   192.168.10.22    <none>        Ubuntu 20.04.6 LTS   5.4.0-1089-raspi    containerd://1.6.12
kube-node2   Ready    control-plane,master   7d20h   v1.27.3   192.168.10.23    <none>        Ubuntu 20.04.6 LTS   5.4.0-1089-raspi    containerd://1.6.12
kube-node3   Ready    <none>                 7d19h   v1.27.3   192.168.11.194   <none>        Ubuntu 20.04.6 LTS   5.4.0-1089-raspi    containerd://1.6.12
kube-node4   Ready    <none>                 7d19h   v1.27.3   192.168.11.187   <none>        Ubuntu 20.04.6 LTS   5.4.0-1089-raspi    containerd://1.6.12
kube-node5   Ready    <none>                 7d19h   v1.27.3   192.168.11.179   <none>        Ubuntu 20.04.6 LTS   5.4.0-1089-raspi    containerd://1.6.12
kube-node6   Ready    <none>                 7d19h   v1.27.3   192.168.10.244   <none>        Ubuntu 20.04.6 LTS   5.4.0-153-generic   containerd://1.6.12
Assuming all goes well, you’re ready to start deploying to your cluster!
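If you want a quick smoke test before putting anything real on the cluster, a throwaway deployment (the name and image here are just examples) will confirm that pods get scheduled onto the workers and that images can be pulled:
kubectl create deployment hello --image=nginx --replicas=2
kubectl get pods -o wide
# clean up when you're done
kubectl delete deployment hello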