From someone who has no idea how to deploy a Local Kubernetes Cluster on Debian 11.

Kubernetes is an amazingly powerful tool… yada yada yada. Chances are you’re here because you’ve already heard how great it is. So, if you’ve had the same idea as me and want to set up a local k8s cluster to see what all the fuss is about, here’s how I did it.
Prerequisites:
- At least 2 fresh Debian 11 installs (I’m using 1 master & 3 workers)
- Minimum of 2GB of RAM; I found 4GB to be more comfortable (especially for the Master Node)
- It’s helpful to install without a swap partition as we have to disable swap in any case.
- Decide on a container runtime
- I’m using containerd since it’s the default in cloud providers.
- Or have a look at the container runtimes page in the Kubernetes documentation
- Decide on a CNI Provider for cluster networking
None of the commands here are my own – most are taken from the Kubernetes documentation. This tutorial is just trying to save you some time and collect all the needed Googling in one place.
Ok, let’s get started.
PART 1: THINGS TO DO ON ALL THE NODES!
These steps need to be done on all the nodes in your cluster. I thought I’d be smart and do this on one VM, clone it as many times as I needed and then change the hostnames individually. Seemed to work fine. Or you could use tmux… probably as broad as it is long.
1. Install containerd
# Update the apt package index and install packages to allow apt to use a repository over HTTPS
$ sudo apt-get update
$ sudo apt-get install \
ca-certificates \
curl \
gnupg \
lsb-release
# Add Docker’s official GPG key (containerd is distributed by Docker)
$ sudo mkdir -p /etc/apt/keyrings
$ curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
# Set up Docker repository:
$ echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# install
$ sudo apt-get update
$ sudo apt-get install -y containerd.io
$ sudo mkdir -p /etc/containerd
$ sudo containerd config default | sudo tee /etc/containerd/config.toml
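A quick sanity check that containerd installed correctly and the service is running (optional):
$ containerd --version
$ sudo systemctl status containerd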
2. Tell containerd to use systemd as the cgroup driver
I can’t use vi yet so we’ll nano the config file:
sudo nano /etc/containerd/config.toml
In the file, find the line:
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
Underneath that line, find:
SystemdCgroup = false
and change it to:
SystemdCgroup = true
It should be there already since we created the default containerd config, but if not just add it.
Save and exit, then reload the changes:
$ sudo systemctl restart containerd
Since k8s v1.22, kubeadm will default to using systemd as the cgroup driver unless otherwise specified. As long as we’ve told containerd to use the systemd cgroup, we’re all set.
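If you’d rather not open an editor on every node, a sed one-liner should make the same change – a quick sketch that assumes the default config layout, so double-check the file afterwards:
$ sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
$ sudo systemctl restart containerd
# confirm the change stuck
$ grep SystemdCgroup /etc/containerd/config.toml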
3. Make sure the correct modules are loaded and change some kernel settings for networking to work properly
# modules
$ cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
# kernel stuff
$ cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
# reload those changes
$ sudo modprobe overlay
$ sudo modprobe br_netfilter
$ sudo sysctl --system
# OR simply reboot
$ sudo reboot
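To verify the modules are loaded and the sysctl settings took effect:
$ lsmod | grep -E 'overlay|br_netfilter'
$ sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward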
4. Disable swap if needed
Kubelet requires swap to be disabled to run. If you installed Debian without a swap partition, you’re all set. Double check by running
$ free -h
total used free shared buff/cache available
Mem: 5.8Gi 169Mi 5.2Gi 1.0Mi 465Mi 5.4Gi
Swap: 0B 0B 0B
Note that swap is 0B. If it’s anything other than 0, disable swap with:
$ sudo swapoff -a
NOTE: This only disables swap until the next reboot. To permanently disable swap:
$ sudo nano /etc/fstab
And comment out the line about swap. Save and exit.
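Or, if you trust sed more than your editor, this sketch simply comments out any fstab line that mentions swap (check /etc/fstab afterwards to make sure it did what you expected):
$ sudo sed -i '/swap/ s/^/#/' /etc/fstab
# nothing should be listed once swap is off
$ swapon --show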
5. Install Kubernetes! Getting close now. (taken directly from Kubernetes Documentation)
Update the apt package index and install packages needed to use the Kubernetes apt repository:
$ sudo apt-get update
$ sudo apt-get install -y apt-transport-https ca-certificates curl
Download the Google Cloud public signing key:
$ sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
Add the Kubernetes apt repository:
$ echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
Update the apt package index, install kubelet, kubeadm and kubectl, and pin their versions:
$ sudo apt-get update
$ sudo apt-get install -y kubelet kubeadm kubectl
# following command stops these packages from being automatically changed and potentially breaking the cluster (very cool!)
$ sudo apt-mark hold kubelet kubeadm kubectl
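A quick check that everything installed and the versions line up:
$ kubeadm version
$ kubectl version --client
$ kubelet --version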
PART 2: STOP TMUXING! THE FOLLOWING STEPS ARE UNIQUE TO EACH NODE
At this point I cloned my VM to create 3 more. They will be my 3 worker nodes. Before we go any further though, we need to change IP addresses and hostnames, since they’re all the same after the clone.
6. Update hostnames etc.
I’ll set static IPs for each node using:
$ sudo nano /etc/network/interfaces
and editing accordingly.
My interfaces file now looks like this (for reference, use your own IPs)
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
source /etc/network/interfaces.d/*
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
auto eth0
allow-hotplug eth0
iface eth0 inet static
address 192.168.3.130
netmask 255.255.255.0
gateway 192.168.3.1
dns-nameservers 192.168.3.1
Reload the config using
$ sudo systemctl restart networking
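You can confirm the new address took effect with (interface name taken from my example above):
$ ip addr show eth0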
Once you’ve reconnected the SSH session with the new IP, set the new hostname for each node.
$ sudo nano /etc/hostname
# reboot or reload the changes with:
$ /etc/init.d/hostname.sh start
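Alternatively, since Debian 11 runs systemd, hostnamectl can set the hostname in one go – the name here is just an example, use your own:
$ sudo hostnamectl set-hostname k8s-worker-01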
THIS NEXT PART IS MORE IMPORTANT THAN I THOUGHT! I didn’t do this the first 5 times I tried to deploy Kubernetes and spent a whole day troubleshooting why my kube API kept dropping out and none of my control plane pods wanted to stay running.
k8s resolves things via hostname (including itself), which in the /etc/hosts file points to 127.0.1.1 (a loopback address) by default. I remember reading somewhere that the kube API only listens on the node’s main interface (192.168.3.130 in my case) – so if all the services are looking on 127.0.1.1, they’ll never find anything and freak out.
To prevent such a headache, edit the /etc/hosts file to include all the nodes and their IPs:
$ sudo nano /etc/hosts
It should look something like this:
127.0.0.1 localhost
192.168.3.130 k8s-master-01
192.168.3.131 k8s-worker-01
192.168.3.132 k8s-worker-02
192.168.3.133 k8s-worker-03
# IPv6 stuff below here – leave the defaults as they are
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
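A quick way to check that name resolution works as expected (hostnames are the ones from my example):
$ getent hosts k8s-master-01
$ ping -c 1 k8s-worker-01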
PART 3: CONFIGURE THE MASTER – RUN THESE ON THE CONTROL-PLANE NODE ONLY!
7. Initialise the control-plane node with kubeadm.
$ sudo kubeadm init --pod-network-cidr=111.222.0.0/16
The CIDR is important for the install of the CNI, which we’ll do next. You can set it to anything as long as it doesn’t overlap with your local network – if it does, pod traffic and LAN traffic can end up clashing.
The end of your output should look like this:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.3.130:6443 --token 3qyqnp.jsiuqx7cloc8vmjv \
--discovery-token-ca-cert-hash sha256:995f755400edc6345864911137670dfb6db0cd428b7d391ca90f176b260be14fa
First, let’s do what it says:
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
This will make the kubectl command work for your non-root user.
To check everything is working before moving on, run kubectl get nodes – the node will be listed, but will show as NotReady for now. Since k8s has been helpful so far, we’ll take its advice and install a CNI. The control plane won’t fully initialise until it detects a network plugin.
8. Install Calico CNI
Install the Tigera Operator, which manages the installation, upgrade and general lifecycle of Calico across all your nodes. Pretty neat.
$ kubectl create -f https://projectcalico.docs.tigera.io/manifests/tigera-operator.yaml
Grab the manifest provided by Calico:
$ cd $HOME
$ curl https://projectcalico.docs.tigera.io/manifests/custom-resources.yaml -O
Since we provided our own CIDR when we ran kubeadm init, we need to edit the CIDR in the manifest to match.
$ nano ./custom-resources.yaml
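The bit to change is the cidr field in the Installation resource. Roughly, the relevant section looks like this – the exact fields can differ between Calico versions, so treat it as a guide to where the CIDR lives rather than something to paste verbatim:
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
    - blockSize: 26
      cidr: 111.222.0.0/16   # must match the --pod-network-cidr you gave kubeadm init
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()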
Finally, create the manifest and install.
$ kubectl create -f custom-resources.yaml
To see the k8s magic in action, run
$ watch kubectl get pods --all-namespaces
When the Calico pods have been created, the coredns pods will also finish initialising. It will look like this:
NAMESPACE NAME READY STATUS RESTARTS AGE
calico-system calico-kube-controllers-86dff98c45-j5j96 0/1 Pending 0 20m
calico-system calico-node-jcwpz 1/1 Running 0 20m
calico-system calico-typha-54998dc76f-zg8wt 1/1 Running 0 20m
kube-system coredns-6d4b75cb6d-7nz2q 1/1 Running 0 49m
kube-system coredns-6d4b75cb6d-x4dfk 1/1 Running 0 49m
kube-system etcd-k8s-master-01 1/1 Running 0 50m
kube-system kube-apiserver-k8s-master-01 1/1 Running 0 49m
kube-system kube-controller-manager-k8s-master-01 1/1 Running 0 50m
kube-system kube-proxy-hxn2p 1/1 Running 0 49m
kube-system kube-scheduler-k8s-master-01 1/1 Running 0 49m
tigera-operator tigera-operator-5dc8b759d9-g9d8b 1/1 Running 0 23m
Now we’re ready to add some nodes!
PART 4: CONFIGURE THE WORKERS – RUN THESE ON WORKER NODES ONLY!
9. Add the worker nodes to the cluster
On each worker, run the join command (with the token) that kubeadm init printed earlier:
$ sudo kubeadm join 192.168.3.130:6443 --token 3qyqnp.jsiuqx7cloc8vmjv \
--discovery-token-ca-cert-hash sha256:995f755400edc6345864911137670dfb6db0cd428b7d391ca90f176b260be14fa
If you’ve lost the token, you can get a new one with:
$ sudo kubeadm token create --print-join-command
By default, tokens expire after 24 hours. If you are joining a node to the cluster after the current token has expired this command comes in handy.
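You can also list the tokens that currently exist (and see when they expire) with:
$ sudo kubeadm token list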
10. Check your nodes are connected by running the following on the control-plane node.
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master-01 Ready control-plane 65m v1.24.2
k8s-worker-01 Ready <none> 11m v1.24.2
k8s-worker-02 Ready <none> 38s v1.24.2
k8s-worker-03 Ready <none> 32s v1.24.2
To make things pretty, we can label the worker nodes:
$ kubectl label node k8s-worker-01 node-role.kubernetes.io/worker=worker
# so now it's all proper! :)
NAME STATUS ROLES AGE VERSION
k8s-master-01 Ready control-plane 74m v1.24.2
k8s-worker-01 Ready worker 21m v1.24.2
k8s-worker-02 Ready worker 10m v1.24.2
k8s-worker-03 Ready worker 10m v1.24.2
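If you have a few workers, a small shell loop saves some typing (node names are mine – swap in your own):
$ for n in k8s-worker-01 k8s-worker-02 k8s-worker-03; do kubectl label node "$n" node-role.kubernetes.io/worker=worker; done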
That’s it! You’re ready to deploy whatever you want on your Kubernetes cluster. Now the real work starts…