In this blog, we're going to explore Kubernetes. Before anything else, there is a very first question to be answered: what is the relationship between Kubernetes and Docker? Let's start today's journey:
Kubernetes (commonly stylized as k8s) is an open-source container-orchestration system for automating application deployment, scaling, and management.
(quoted from Wikipedia)
On Raspberry Pi, a lightweight variant of Kubernetes is normally preferred, and a variety of choices are available. Minikube vs. kind vs. k3s - What should I use? elaborates the differences among Minikube, kind and k3s. Its final comparison table is quoted below:
| | Minikube | kind | k3s |
|---|---|---|---|
| runtime | VM | container | native |
| supported architectures | AMD64 | AMD64 | AMD64, ARMv7, ARM64 |
| supported container runtimes | Docker, CRI-O, containerd, gVisor | Docker | Docker, containerd |
| startup time (initial / following) | 5:19 / 3:15 | 2:48 / 1:06 | 0:15 / 0:15 |
| memory requirements | 2 GB | 8 GB (Windows, MacOS) | 512 MB |
| requires root? | no | no | yes (rootless is experimental) |
| multi-cluster support | yes | yes | no (can be achieved using containers) |
| multi-node support | no | yes | yes |
| project page | minikube | kind | k3s |
Here in my case, I'm going to use k3s to manage and monitor the cluster. The following two blog posts are strongly recommended:
- Run Kubernetes on a Raspberry Pi with k3s
- Kubernetes 1.18 broke “kubectl run”, here’s what to do about it
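As a quick illustration of why the second post matters (my own sketch, not taken from that post; nginx-test is just a placeholder name): since Kubernetes 1.18, kubectl run creates a bare Pod only, so Deployments are created with kubectl create deployment instead, which is exactly what we'll do in section 4.

```bash
# Since Kubernetes 1.18, "kubectl run" creates a single Pod only
kubectl run nginx-test --image=nginx

# To get a proper Deployment (managed by a ReplicaSet), use "create deployment"
kubectl create deployment nginx-test --image=nginx
```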
2. Preparation
Let's take a look at the IP info of all 4 Raspberry Pis, taking pi04 as the example this time. pi01, pi02 and pi03 have very similar IP info to pi04.
```
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether b8:27:eb:c1:b8:76 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.247/24 brd 192.168.1.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet6 2001:569:7fb8:c600:ca9b:d758:2534:8a9d/64 scope global dynamic mngtmpaddr noprefixroute
       valid_lft 7480sec preferred_lft 7180sec
    inet6 fe80::80b5:3a3e:defe:24aa/64 scope link
       valid_lft forever preferred_lft forever
3: wlan0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether b8:27:eb:94:ed:23 brd ff:ff:ff:ff:ff:ff
4: wlan1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether e8:4e:06:35:16:6d brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.245/24 brd 192.168.1.255 scope global dynamic noprefixroute wlan1
       valid_lft 86060sec preferred_lft 75260sec
    inet6 2001:569:7fb8:c600:5996:3c23:ef08:c367/64 scope global dynamic mngtmpaddr noprefixroute
       valid_lft 7480sec preferred_lft 7180sec
    inet6 fe80::a0ff:56e3:77f5:9bf2/64 scope link
       valid_lft forever preferred_lft forever
```
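For reference, the output above comes from the standard iproute2 tooling; on each Pi, something like the following should reproduce it (a sketch, assuming a stock Raspbian install):

```bash
# Full per-interface details (MAC, IPv4, IPv6), as shown above
ip addr show

# A compact one-line-per-interface summary
ip -br addr show
```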
As mentioned in A Cluster of Raspberry Pis (1) - Configuration, pi04 is an old Raspberry Pi 3 Model B Rev 1.2 1GB whose onboard WiFi interface wlan0 is unfortunately broken. Therefore, I had to insert a WiFi dongle, which shows up as wlan1.
3. k3s Installation and Configuration
3.1 k3s Installation on Master Node pi01
```
pi@pi01:~ $ curl -sfL https://get.k3s.io | sh -
[INFO]  Finding release for channel stable
[INFO]  Using v1.18.3+k3s1 as release
[INFO]  Downloading hash https://github.com/rancher/k3s/releases/download/v1.18.3+k3s1/sha256sum-arm.txt
[INFO]  Downloading binary https://github.com/rancher/k3s/releases/download/v1.18.3+k3s1/k3s-armhf
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Skipping /usr/local/bin/ctr symlink to k3s, command exists in PATH at /usr/bin/ctr
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO]  systemd: Starting k3s
pi@pi01:~ $ sudo kubectl get nodes
NAME   STATUS   ROLES    AGE     VERSION
pi01   Ready    master   2m42s   v1.18.3+k3s1
```
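Before moving on, it may be worth verifying that the k3s service is actually up. The unit name comes from the installer log above, and the kubeconfig path is the k3s default, which is also why kubectl needs sudo on the node (output omitted here):

```bash
# The installer registers and starts a systemd unit named "k3s"
sudo systemctl status k3s

# k3s writes its admin kubeconfig here, readable only by root
sudo cat /etc/rancher/k3s/k3s.yaml
```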
If we take another look at the IP info, an additional flannel.1 interface has been added:
```
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
    link/ether 3e:53:91:ce:b9:46 brd ff:ff:ff:ff:ff:ff
    inet 10.42.0.0/32 brd 10.42.0.0 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet 169.254.238.88/16 brd 169.254.255.255 scope global noprefixroute flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::59c5:fbf9:aaf7:46fb/64 scope link
       valid_lft forever preferred_lft forever
```
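The 10.42.0.0/16 range is k3s's default cluster CIDR, and flannel.1 is the VXLAN device of the bundled flannel CNI. If you're curious, the device details and the per-node pod routes can be inspected with plain iproute2 commands (a sketch; output omitted):

```bash
# -d shows VXLAN details (VNI, UDP port) of the flannel device
ip -d link show flannel.1

# Routes towards the other nodes' pod subnets go via flannel.1
ip route | grep 10.42
```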
3.2 k3s Installation on Worker Nodes pi02, pi03, pi04
Before moving forward, we need to note down the node token on the master node, which will be used when the worker nodes join the cluster.
```
pi@pi01:~ $ sudo cat /var/lib/rancher/k3s/server/node-token
K10cedbc6396ab68f6bee0d2df3eb005f0ff9ea17275aed2763b6bf07a06e83ce47::server:9a0d84f3bd8044b341c95f967badd5d3
```
```
pi@pi0X:~ $ curl -sfL http://get.k3s.io | K3S_URL=https://192.168.1.253:6443 \
> K3S_TOKEN=K10cedbc6396ab68f6bee0d2df3eb005f0ff9ea17275aed2763b6bf07a06e83ce47::server:9a0d84f3bd8044b341c95f967badd5d3 sh -
[INFO]  Finding release for channel stable
[INFO]  Using v1.18.3+k3s1 as release
[INFO]  Downloading hash https://github.com/rancher/k3s/releases/download/v1.18.3+k3s1/sha256sum-arm.txt
[INFO]  Skipping binary downloaded, installed k3s matches hash
[INFO]  Skipping /usr/local/bin/kubectl symlink to k3s, command exists in PATH at /usr/bin/kubectl
[INFO]  Skipping /usr/local/bin/crictl symlink to k3s, command exists in PATH at /usr/bin/crictl
[INFO]  Skipping /usr/local/bin/ctr symlink to k3s, command exists in PATH at /usr/bin/ctr
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-agent-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s-agent.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s-agent.service
[INFO]  systemd: Enabling k3s-agent unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s-agent.service → /etc/systemd/system/k3s-agent.service.
[INFO]  systemd: Starting k3s-agent
```
Here, X = 2, 3 or 4.
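On each worker node, the installer creates a k3s-agent systemd unit (as visible in the log above), so a quick sanity check could look like this:

```bash
# Verify the agent started and keeps a connection to the master
sudo systemctl status k3s-agent

# Follow its logs in case the node does not show up as Ready
sudo journalctl -u k3s-agent -f
```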
3.3 Take a Look at pi01
```
pi@pi01:~ $ sudo kubectl get nodes
NAME   STATUS   ROLES    AGE     VERSION
pi02   Ready    <none>   5m7s    v1.18.3+k3s1
pi03   Ready    <none>   4m18s   v1.18.3+k3s1
pi01   Ready    master   20m     v1.18.3+k3s1
pi04   Ready    <none>   11s     v1.18.3+k3s1
```
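For a bit more detail at this stage, -o wide also lists each node's internal IP, OS image, kernel version and container runtime (output omitted here):

```bash
# Same listing, plus INTERNAL-IP, OS-IMAGE, KERNEL-VERSION and CONTAINER-RUNTIME
sudo kubectl get nodes -o wide
```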
3.4 Access the Raspberry Pi Cluster from a PC
We can further configure our PC so that it can access the Raspberry Pi cluster. For details, please refer to Run Kubernetes on a Raspberry Pi with k3s. On my laptop, I can do:
```
✔ kubectl get nodes
NAME   STATUS   ROLES    AGE     VERSION
pi02   Ready    <none>   3h27m   v1.18.3+k3s1
pi03   Ready    <none>   3h26m   v1.18.3+k3s1
pi04   Ready    <none>   3h22m   v1.18.3+k3s1
pi01   Ready    master   3h42m   v1.18.3+k3s1
```
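In short, what the referenced post walks through amounts to copying the master's kubeconfig to the laptop and pointing it at the master's LAN IP. A rough sketch (the file paths are the k3s and kubectl defaults; adapt them to your setup):

```bash
# Copy the kubeconfig from the master node (the file is root-only on the Pi)
ssh pi@192.168.1.253 "sudo cat /etc/rancher/k3s/k3s.yaml" > ~/.kube/config

# Point it at the master's LAN IP instead of 127.0.0.1
sed -i 's/127.0.0.1/192.168.1.253/' ~/.kube/config

# kubectl on the laptop now talks to the cluster
kubectl get nodes
```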
We can even set the role name of each worker with the following command:
```
✔ kubectl label nodes pi0X kubernetes.io/role=worker
node/pi0X labeled
```
Again, X = 2, 3 or 4.
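Should you ever want to undo this, the same command with a trailing dash after the label key removes the label again:

```bash
# A trailing "-" after the key deletes the label
kubectl label nodes pi0X kubernetes.io/role-
```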
Let's take a look at all nodes again:
```
64 ✔ kubectl get nodes
NAME   STATUS   ROLES    AGE    VERSION
pi04   Ready    worker   2d7h   v1.18.3+k3s1
pi03   Ready    worker   2d7h   v1.18.3+k3s1
pi01   Ready    master   2d7h   v1.18.3+k3s1
pi02   Ready    worker   2d7h   v1.18.3+k3s1

66 ✘ kubectl get pods --all-namespaces
NAMESPACE              NAME                                         READY   STATUS             RESTARTS   AGE
kube-system            helm-install-traefik-m49zm                   0/1     Completed          0          2d7h
kube-system            svclb-traefik-djtgw                          2/2     Running            2          2d7h
kube-system            svclb-traefik-cn76f                          2/2     Running            2          2d7h
kube-system            local-path-provisioner-6d59f47c7-5gz67       1/1     Running            1          2d7h
kube-system            svclb-traefik-7bwtl                          2/2     Running            2          2d7h
kube-system            coredns-8655855d6-wg68z                      1/1     Running            1          2d7h
kube-system            traefik-758cd5fc85-vn7xg                     1/1     Running            1          2d7h
kube-system            metrics-server-7566d596c8-v2vgp              1/1     Running            1          2d7h
kube-system            svclb-traefik-r9x5v                          2/2     Running            4          2d7h
kubernetes-dashboard   dashboard-metrics-scraper-6b4884c9d5-ct6lh   1/1     Running            0          43h
kubernetes-dashboard   kubernetes-dashboard-7b544877d5-s4spx        0/1     CrashLoopBackOff   489        43h

67 ✔ kubectl get pods
No resources found in default namespace.
```
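As a side note, the kubernetes-dashboard pod above is stuck in CrashLoopBackOff. Diagnosing it is beyond the scope of this post, but the usual first steps for any crash-looping pod look like this (pod name taken from the output above):

```bash
# Events often reveal failing probes, missing secrets, etc.
kubectl -n kubernetes-dashboard describe pod kubernetes-dashboard-7b544877d5-s4spx

# Logs of the previous (crashed) container instance
kubectl -n kubernetes-dashboard logs kubernetes-dashboard-7b544877d5-s4spx --previous
```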
4. Create Deployment
```
73 ✔ kubectl create deployment nginx-sample --image=nginx
deployment.apps/nginx-sample created
```
After a while, nginx-sample will be successfully
deployed.
```
79 ✔ kubectl get deployments
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
nginx-sample   1/1     1            1           12m

80 ✔ kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
nginx-sample-879f5cf56-kg6xf   1/1     Running   0          12m

81 ✔ kubectl get pods -o wide
NAME                           READY   STATUS    RESTARTS   AGE   IP           NODE   NOMINATED NODE   READINESS GATES
nginx-sample-879f5cf56-kg6xf   1/1     Running   0          12m   10.42.0.15   pi01   <none>           <none>
```
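At this point, the pod is only reachable on its cluster-internal IP, e.g. from one of the nodes (a quick check using the pod IP shown above):

```bash
# Pod IPs in 10.42.0.0/16 are routable between nodes via flannel
curl http://10.42.0.15
```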
Now, let's expose the deployment as a service and take a look from the browser:
```
82 ✔ kubectl expose deployment nginx-sample --type="NodePort" --port 80
service/nginx-sample exposed

83 ✔ kubectl get services
NAME           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes     ClusterIP   10.43.0.1       <none>        443/TCP        96m
nginx-sample   NodePort    10.43.195.237   <none>        80:30734/TCP   72s
```
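With the NodePort 30734 assigned above, the nginx welcome page is reachable on that port via any node's IP, either from the browser or simply with curl (using the master's IP from earlier in this post):

```bash
# A NodePort service listens on every node; 30734 was assigned above
curl http://192.168.1.253:30734
```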