Back in Canada for around 1 month, it's time for me to write something. Good to know that PTI was released as open source a couple of days ago. Let's take a look at its performance today. Sorry that I seriously have NO time to investigate the details, having been busy with a robotic arm project. Therefore, a couple of resulting images are displayed below to demonstrate PTI's performance.

Style Change on Myself

[Image pairs: my aligned photos from when I was young; then, for each edit (inversion, rotation left, rotation right, no smile, smile, young, old), the result before PTI and the PTIed result.]

Style Change to Another Person

[Images: afro hairstyle; mohawk hairstyle.]

Music day. This is THE traditional Chinese culture. Revolutionary culture NEVER HAS BEEN.

ENet was published in 2016... Hmmmm... It's a shame that I just got to know it... Anyway, the performance seems to be REALLY GOOD...

πŸ‘

In fact, this video is somewhat stolen. I would like to advertise for SOME publication. I'll let you know in a couple of months.

It's sad to hear that Donald J. Trump has tested positive and is now a patient; please refer to Trump's positive Covid-19 test throws country into fresh upheaval. I really would suggest everybody, particularly myself, to ALWAYS be positive and patient. Kelowna, I'm coming to you again.

πŸ˜‰

Have NO time for blogging these days.

https://youtu.be/UibIz3hLLLo

In this blog, we're going to explore Kubernetes. However, there is a VERY FIRST question to be answered: what is the relationship between Kubernetes and Docker? Let's start today's journey:

1. Kubernetes

Kubernetes (commonly stylized as k8s) is an open-source container-orchestration system for automating application deployment, scaling, and management. (cited from Wikipedia)

On Raspberry Pi, a lightweight variant of Kubernetes is normally preferred. A variety of choices are available:

| Package  | Description |
|----------|-------------|
| MicroK8s | MicroK8s is only available for 64-bit Ubuntu images. (Cited from How to build a Raspberry Pi Kubernetes cluster using MicroK8s: Setting up each Pi) |
| k3s, k3d | |
| Minikube | |
| kind     | |
| Kubeadm  | |

Minikube vs. kind vs. k3s - What should I use? elaborates the differences among Minikube, kind and k3s. Its final table is cited as follows:

|                                 | minikube | kind | k3s |
|---------------------------------|----------|------|-----|
| runtime                         | VM | container | native |
| supported architectures         | AMD64 | AMD64 | AMD64, ARMv7, ARM64 |
| supported container runtimes    | Docker, CRI-O, containerd, gVisor | Docker | Docker, containerd |
| startup time: initial/following | 5:19 / 3:15 | 2:48 / 1:06 | 0:15 / 0:15 |
| memory requirements             | 2GB | 8GB (Windows, MacOS) | 512 MB |
| requires root?                  | no | no | yes (rootless is experimental) |
| multi-cluster support           | yes | yes | no (can be achieved using containers) |
| multi-node support              | no | yes | yes |
| project page                    | minikube | kind | k3s |

Here in my case, I'm going to use k3s to manage and monitor the cluster. The following 2 blogs are strongly recommended:

- Run Kubernetes on a Raspberry Pi with k3s
- Kubernetes 1.18 broke "kubectl run", here's what to do about it

2. Preparation

Let's take a look at the IP info (via ip -c address) of ALL 4 Raspberry Pis. Let's take pi04 as the example this time; pi01, pi02 and pi03 have very similar IP info to pi04.

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether b8:27:eb:c1:b8:76 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.247/24 brd 192.168.1.255 scope global noprefixroute eth0
valid_lft forever preferred_lft forever
inet6 2001:569:7fb8:c600:ca9b:d758:2534:8a9d/64 scope global dynamic mngtmpaddr noprefixroute
valid_lft 7480sec preferred_lft 7180sec
inet6 fe80::80b5:3a3e:defe:24aa/64 scope link
valid_lft forever preferred_lft forever
3: wlan0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether b8:27:eb:94:ed:23 brd ff:ff:ff:ff:ff:ff
4: wlan1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether e8:4e:06:35:16:6d brd ff:ff:ff:ff:ff:ff
inet 192.168.1.245/24 brd 192.168.1.255 scope global dynamic noprefixroute wlan1
valid_lft 86060sec preferred_lft 75260sec
inet6 2001:569:7fb8:c600:5996:3c23:ef08:c367/64 scope global dynamic mngtmpaddr noprefixroute
valid_lft 7480sec preferred_lft 7180sec
inet6 fe80::a0ff:56e3:77f5:9bf2/64 scope link
valid_lft forever preferred_lft forever

As mentioned in A Cluster of Raspberry Pis (1) - Configuration, pi04 is an old Raspberry Pi 3 Model B Rev 1.2 1GB, which unfortunately has a broken Wi-Fi interface wlan0. Therefore, I've got to insert a Wi-Fi dongle in order to have Wi-Fi enabled via wlan1.

3. k3s Installation and Configuration

3.1 k3s Installation on Master Node pi01

pi@pi01:~ $ curl -sfL https://get.k3s.io | sh -
[INFO] Finding release for channel stable
[INFO] Using v1.18.3+k3s1 as release
[INFO] Downloading hash https://github.com/rancher/k3s/releases/download/v1.18.3+k3s1/sha256sum-arm.txt
[INFO] Downloading binary https://github.com/rancher/k3s/releases/download/v1.18.3+k3s1/k3s-armhf
[INFO] Verifying binary download
[INFO] Installing k3s to /usr/local/bin/k3s
[INFO] Creating /usr/local/bin/kubectl symlink to k3s
[INFO] Creating /usr/local/bin/crictl symlink to k3s
[INFO] Skipping /usr/local/bin/ctr symlink to k3s, command exists in PATH at /usr/bin/ctr
[INFO] Creating killall script /usr/local/bin/k3s-killall.sh
[INFO] Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO] env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO] systemd: Creating service file /etc/systemd/system/k3s.service
[INFO] systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service β†’ /etc/systemd/system/k3s.service.
[INFO] systemd: Starting k3s
pi@pi01:~ $ sudo kubectl get nodes
NAME STATUS ROLES AGE VERSION
pi01 Ready master 2m42s v1.18.3+k3s1
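
The installer also registers k3s as a systemd service, so a quick health check right after the installation could look like this (a minimal sketch, not part of the original transcript):

```bash
# Verify the k3s systemd unit is active and enabled on the master
sudo systemctl status k3s --no-pager
# The kubeconfig generated by k3s lives here (readable by root only, by default)
sudo ls -l /etc/rancher/k3s/k3s.yaml
```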

If we take a look at the IP info again, one additional flannel.1 interface has been added:

4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default 
link/ether 3e:53:91:ce:b9:46 brd ff:ff:ff:ff:ff:ff
inet 10.42.0.0/32 brd 10.42.0.0 scope global flannel.1
valid_lft forever preferred_lft forever
inet 169.254.238.88/16 brd 169.254.255.255 scope global noprefixroute flannel.1
valid_lft forever preferred_lft forever
inet6 fe80::59c5:fbf9:aaf7:46fb/64 scope link
valid_lft forever preferred_lft forever

3.2 k3s Installation on Worker Nodes pi02, pi03, pi04

Before moving forward, we need to note down the node token on the master node, which will be used when the worker nodes join the cluster.

pi@pi01:~ $ sudo cat /var/lib/rancher/k3s/server/node-token
K10cedbc6396ab68f6bee0d2df3eb005f0ff9ea17275aed2763b6bf07a06e83ce47::server:9a0d84f3bd8044b341c95f967badd5d3
pi@pi0X:~ $ curl -sfL http://get.k3s.io | K3S_URL=https://192.168.1.253:6443 \
> K3S_TOKEN=K10cedbc6396ab68f6bee0d2df3eb005f0ff9ea17275aed2763b6bf07a06e83ce47::server:9a0d84f3bd8044b341c95f967badd5d3 sh -
[INFO] Finding release for channel stable
[INFO] Using v1.18.3+k3s1 as release
[INFO] Downloading hash https://github.com/rancher/k3s/releases/download/v1.18.3+k3s1/sha256sum-arm.txt
[INFO] Skipping binary downloaded, installed k3s matches hash
[INFO] Skipping /usr/local/bin/kubectl symlink to k3s, command exists in PATH at /usr/bin/kubectl
[INFO] Skipping /usr/local/bin/crictl symlink to k3s, command exists in PATH at /usr/bin/crictl
[INFO] Skipping /usr/local/bin/ctr symlink to k3s, command exists in PATH at /usr/bin/ctr
[INFO] Creating killall script /usr/local/bin/k3s-killall.sh
[INFO] Creating uninstall script /usr/local/bin/k3s-agent-uninstall.sh
[INFO] env: Creating environment file /etc/systemd/system/k3s-agent.service.env
[INFO] systemd: Creating service file /etc/systemd/system/k3s-agent.service
[INFO] systemd: Enabling k3s-agent unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s-agent.service β†’ /etc/systemd/system/k3s-agent.service.
[INFO] systemd: Starting k3s-agent

, where X = 2, 3, or 4.

3.3 Take a Look on pi01

pi@pi01:~ $ sudo kubectl get nodes
NAME STATUS ROLES AGE VERSION
pi02 Ready <none> 5m7s v1.18.3+k3s1
pi03 Ready <none> 4m18s v1.18.3+k3s1
pi01 Ready master 20m v1.18.3+k3s1
pi04 Ready <none> 11s v1.18.3+k3s1

3.4 Access Raspberry Pi Cluster from PC

We can further configure our PC to be able to access the Raspberry Pi Cluster. For details, please refer to Run Kubernetes on a Raspberry Pi with k3s. On my laptop, I can do:

$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
pi02 Ready <none> 3h27m v1.18.3+k3s1
pi03 Ready <none> 3h26m v1.18.3+k3s1
pi04 Ready <none> 3h22m v1.18.3+k3s1
pi01 Ready master 3h42m v1.18.3+k3s1
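
For reference, getting kubectl on the laptop to talk to the cluster essentially means copying the k3s kubeconfig from the master and pointing it at the master's LAN IP. A minimal sketch, following the idea in the referenced blog and assuming the master is reachable at 192.168.1.253 (the file name used below is my own choice):

```bash
# Copy the kubeconfig generated by k3s from the master to the laptop
# (k3s.yaml is owned by root on the master, so it may need to be made readable first)
scp pi@192.168.1.253:/etc/rancher/k3s/k3s.yaml ~/.kube/config-k3s-pi
# Point the kubeconfig at the master's LAN IP instead of 127.0.0.1 (GNU sed syntax)
sed -i 's/127.0.0.1/192.168.1.253/' ~/.kube/config-k3s-pi
# Tell kubectl to use this kubeconfig
export KUBECONFIG=~/.kube/config-k3s-pi
kubectl get nodes
```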

We can also assign a role name to each worker node with the following command:

$ kubectl label nodes pi0X kubernetes.io/role=worker
node/pi0X labeled

, where X = 2, 3, or 4.

Let's take a look at all nodes again:

$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
pi04 Ready worker 2d7h v1.18.3+k3s1
pi03 Ready worker 2d7h v1.18.3+k3s1
pi01 Ready master 2d7h v1.18.3+k3s1
pi02 Ready worker 2d7h v1.18.3+k3s1
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system helm-install-traefik-m49zm 0/1 Completed 0 2d7h
kube-system svclb-traefik-djtgw 2/2 Running 2 2d7h
kube-system svclb-traefik-cn76f 2/2 Running 2 2d7h
kube-system local-path-provisioner-6d59f47c7-5gz67 1/1 Running 1 2d7h
kube-system svclb-traefik-7bwtl 2/2 Running 2 2d7h
kube-system coredns-8655855d6-wg68z 1/1 Running 1 2d7h
kube-system traefik-758cd5fc85-vn7xg 1/1 Running 1 2d7h
kube-system metrics-server-7566d596c8-v2vgp 1/1 Running 1 2d7h
kube-system svclb-traefik-r9x5v 2/2 Running 4 2d7h
kubernetes-dashboard dashboard-metrics-scraper-6b4884c9d5-ct6lh 1/1 Running 0 43h
kubernetes-dashboard kubernetes-dashboard-7b544877d5-s4spx 0/1 CrashLoopBackOff 489 43h
$ kubectl get pods
No resources found in default namespace.

4. Create Deployment

$ kubectl create deployment nginx-sample --image=nginx
deployment.apps/nginx-sample created

After a while, nginx-sample will be successfully deployed.

$ kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-sample 1/1 1 1 12m
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-sample-879f5cf56-kg6xf 1/1 Running 0 12m
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-sample-879f5cf56-kg6xf 1/1 Running 0 12m 10.42.0.15 pi01 <none> <none>
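
As a side note, the same deployment can also be expressed declaratively. The imperative kubectl create deployment above is roughly equivalent to applying a manifest like the following (a sketch I did not actually use in this post):

```bash
# Declarative equivalent of `kubectl create deployment nginx-sample --image=nginx`
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-sample
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-sample
  template:
    metadata:
      labels:
        app: nginx-sample
    spec:
      containers:
      - name: nginx
        image: nginx
EOF
```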

Now, let's expose this service and take a look from the browser:

$ kubectl expose deployment nginx-sample --type="NodePort" --port 80
service/nginx-sample exposed
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 96m
nginx-sample NodePort 10.43.195.237 <none> 80:30734/TCP 72s
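
Besides the browser, the NodePort service can also be checked from a terminal on the laptop. Assuming the laptop can reach pi01 at 192.168.1.253, and using the NodePort 30734 allocated above, something like this should return the nginx welcome page:

```bash
# A NodePort service is exposed on the allocated port of every node's IP
curl http://192.168.1.253:30734
```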

1. Container vs. Virtual Machine

Briefly refer to: What’s the Diff: VMs vs Containers and Bare Metal Servers, Virtual Servers, and Containerization.

2. Docker vs. VMware

Briefly refer to: Docker vs VMWare: How Do They Stack Up?.

I used to use VirtualBox (a virtual machine) a lot, but have now started using Docker (the leading container platform).

3. Install Docker on Raspberry Pi

I heavily refer to Alex Ellis' blogs:

- Get Started with Docker on Raspberry Pi
- Hands-on Docker for Raspberry Pi
- 5 things about Docker on Raspberry Pi
- Live Deep Dive - Docker Swarm Mode on the Pi

3.1 Modify /boot/config.txt

The tail of /boot/config.txt on each node looks as follows.

pi01:
pi@pi01:~ $ tail -5 /boot/config.txt
[all]
#dtoverlay=vc4-fkms-v3d
start_x=1
gpu_mem=128
#arm_64bit=1
pi02:
pi@pi02:~ $ tail -5 /boot/config.txt

[all]
#dtoverlay=vc4-fkms-v3d
start_x=1
gpu_mem=64
pi03:
pi@pi03:~ $ tail -5 /boot/config.txt

[all]
#dtoverlay=vc4-fkms-v3d
start_x=1
gpu_mem=32
pi04:
pi@pi04:~ $ tail -5 /boot/config.txt

[all]
#dtoverlay=vc4-fkms-v3d
start_x=1
gpu_mem=32

3.2 Docker Installation

3.2.1 Installation

Installing Docker requires ONLY one single command:

  • curl -sSL https://get.docker.com | sh
pi@pi01:~ $ curl -sSL https://get.docker.com | sh
# Executing docker install script, commit: 26ff363bcf3b3f5a00498ac43694bf1c7d9ce16c
Warning: the "docker" command appears to already exist on this system.

If you already have Docker installed, this script can cause trouble, which is
why we're displaying this warning and provide the opportunity to cancel the
installation.

If you installed the current Docker package using this script and are using it
again to update Docker, you can safely ignore this message.

You may press Ctrl+C now to abort this script.
+ sleep 20
+ sudo -E sh -c apt-get update -qq >/dev/null
+ sudo -E sh -c DEBIAN_FRONTEND=noninteractive apt-get install -y -qq apt-transport-https ca-certificates curl >/dev/null
+ sudo -E sh -c curl -fsSL "https://download.docker.com/linux/raspbian/gpg" | apt-key add -qq - >/dev/null
Warning: apt-key output should not be parsed (stdout is not a terminal)
+ sudo -E sh -c echo "deb [arch=armhf] https://download.docker.com/linux/raspbian buster stable" > /etc/apt/sources.list.d/docker.list
+ sudo -E sh -c apt-get update -qq >/dev/null
+ [ -n ]
+ sudo -E sh -c apt-get install -y -qq --no-install-recommends docker-ce >/dev/null
+ sudo -E sh -c docker version
Client: Docker Engine - Community
Version: 19.03.11
API version: 1.40
Go version: go1.13.10
Git commit: 42e35e6
Built: Mon Jun 1 09:20:15 2020
OS/Arch: linux/arm
Experimental: false

Server: Docker Engine - Community
Engine:
Version: 19.03.11
API version: 1.40 (minimum version 1.12)
Go version: go1.13.10
Git commit: 42e35e6
Built: Mon Jun 1 09:14:09 2020
OS/Arch: linux/arm
Experimental: false
containerd:
Version: 1.2.13
GitCommit: 7ad184331fa3e55e52b890ea95e65ba581ae3429
runc:
Version: 1.0.0-rc10
GitCommit: dc9208a3303feef5b3839f4323d9beb36df0a9dd
docker-init:
Version: 0.18.0
GitCommit: fec3683
If you would like to use Docker as a non-root user, you should now consider
adding your user to the "docker" group with something like:

sudo usermod -aG docker pi

Remember that you will have to log out and back in for this to take effect!

WARNING: Adding a user to the "docker" group will grant the ability to run
containers which can be used to obtain root privileges on the
docker host.
Refer to https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface
for more information.

Let's take a look at the IP info of pi01.

pi@pi01:~ $ ip -c address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether dc:a6:32:b1:22:36 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.253/24 brd 192.168.1.255 scope global dynamic noprefixroute eth0
valid_lft 47285sec preferred_lft 36485sec
inet6 2001:569:7fb8:c600:ca77:4302:3dbb:5b94/64 scope global dynamic mngtmpaddr noprefixroute
valid_lft 7469sec preferred_lft 7169sec
inet6 fe80::bdf2:287f:4b0a:abc5/64 scope link
valid_lft forever preferred_lft forever
3: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether dc:a6:32:b1:22:37 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.252/24 brd 192.168.1.255 scope global dynamic noprefixroute wlan0
valid_lft 47282sec preferred_lft 36482sec
inet6 2001:569:7fb8:c600:d413:3cdf:1f56:7056/64 scope global dynamic mngtmpaddr noprefixroute
valid_lft 7469sec preferred_lft 7169sec
inet6 fe80::7f55:cb40:9bf7:c903/64 scope link
valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:ff:90:ba:f3 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever

Clearly, one more interface, docker0, has been added to pi01's IP info, but it's currently DOWN.

3.2.2 Uninstall

By the way, to uninstall Docker, 2 commands are required:

  • sudo apt remove docker-ce
  • sudo ip link delete docker0

Besides uninstalling Docker itself, you may have to manually remove the docker0 IP link.

3.2.3 Enable/Start Docker

Then, let's enable and start the docker service, and add user pi to the group docker so that docker can be run without sudo.

  • sudo systemctl enable docker
  • sudo systemctl start docker
  • sudo usermod -aG docker pi
pi@pi01:~ $ sudo systemctl enable docker
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
pi@pi01:~ $ sudo systemctl start docker
pi@pi01:~ $ sudo usermod -aG docker pi

Afterwards, reboot all Raspberry Pis.
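
A quick sanity check after logging back in (a minimal sketch):

```bash
# The docker service should be active, and user pi should now be in the docker group
systemctl is-active docker
groups pi
# If the group change has taken effect, docker works without sudo
docker ps
```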

Now, let's check the installed Docker version:

pi@pi0X:~ $ docker version
Client: Docker Engine - Community
Version: 19.03.11
API version: 1.40
Go version: go1.13.10
Git commit: 42e35e6
Built: Mon Jun 1 09:20:15 2020
OS/Arch: linux/arm
Experimental: false

Server: Docker Engine - Community
Engine:
Version: 19.03.11
API version: 1.40 (minimum version 1.12)
Go version: go1.13.10
Git commit: 42e35e6
Built: Mon Jun 1 09:14:09 2020
OS/Arch: linux/arm
Experimental: false
containerd:
Version: 1.2.13
GitCommit: 7ad184331fa3e55e52b890ea95e65ba581ae3429
runc:
Version: 1.0.0-rc10
GitCommit: dc9208a3303feef5b3839f4323d9beb36df0a9dd
docker-init:
Version: 0.18.0
GitCommit: fec3683

, where X = 1, 2, 3, or 4.

3.3 Test ARM Image Pulled from Docker Hub

3.3.1 Docker Pull, Docker Run, Docker Exec

Due to the respective architectures of these 4 Raspberry Pis, I tried to use the Docker Hub image arm64v8/alpine for pi01, and the Docker Hub image arm32v6/alpine for pi02, pi03 and pi04. However, it finally turns out that the Pi4 64-bit Raspbian kernel is NOT quite stable yet.
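
To see which architecture a given Pi actually reports before picking an image, something like the following helps (a small sketch):

```bash
# Kernel architecture: aarch64 on the 64-bit kernel, armv7l/armv6l on 32-bit kernels
uname -m
# Userland architecture as seen by the package manager (e.g. armhf on 32-bit Raspbian/Raspberry Pi OS)
dpkg --print-architecture
```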

pi01 ~ Docker Hub arm64v8 is NOT stable yet.

pi@pi01:~ $ docker pull arm64v8/alpine
Using default tag: latest
latest: Pulling from arm64v8/alpine
b538f80385f9: Pull complete
Digest: sha256:3b3f647d2d99cac772ed64c4791e5d9b750dd5fe0b25db653ec4976f7b72837c
Status: Downloaded newer image for arm64v8/alpine:latest
docker.io/arm64v8/alpine:latest
pi@pi01:~ $ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
arm64v8/alpine latest 62ee0e9f8440 11 days ago 5.32MB
pi@pi01:~ $ docker run -it -d arm64v8/alpine
4deed42b3a29231bde40176c5c385d9f04bbf5188383b2ede4a09383360d69bb
pi@pi01:~ $ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4deed42b3a29 arm64v8/alpine "/bin/sh" 49 seconds ago Restarting (159) 8 seconds ago inspiring_mendeleev
pi@pi01:~ $ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4deed42b3a29 arm64v8/alpine "/bin/sh" 51 seconds ago Up 2 seconds inspiring_mendeleev
pi@pi02:~ $

It looks like Docker Hub images from the repository arm64v8 are NOT able to work stably on my Raspberry Pi 4 Model B Rev 1.4 8GB with the Pi4 64-bit Raspbian kernel. To further demonstrate this, I also tested some other images from the repository arm64v8 and even the deprecated aarch64, as follows:

pi@pi01:~ $ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
alpine latest 3ddac682c5b6 12 days ago 4.77MB
arm64v8/alpine latest 62ee0e9f8440 12 days ago 5.32MB
arm64v8/ubuntu latest dbc66a3d7b82 6 weeks ago 66.7MB
aarch64/ubuntu latest 5227400055a2 3 years ago 122MB
pi@pi01:~ $ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e06eaba12e82 alpine "/bin/sh" 2 hours ago Up 2 hours relaxed_cerf
ba8d858d36f0 aarch64/ubuntu "/bin/bash" 2 hours ago Exited (159) 2 hours ago nifty_sutherland
08346701599a arm64v8/ubuntu "/bin/bash" 2 hours ago Exited (159) 2 hours ago jolly_fermi
ccda6c95f81d arm64v8/alpine "/bin/sh" 4 hours ago Exited (159) 4 hours ago nifty_gates

None of the three images arm64v8/alpine, arm64v8/ubuntu and aarch64/ubuntu can stay UP for more than 3 seconds.
😊
πŸ˜”

Well, I can pull alpine instead of arm64v8/alpine and keep it UP though.

pi@pi01:~ $ docker pull alpine
Using default tag: latest
latest: Pulling from library/alpine
b4b72e716706: Pull complete
Digest: sha256:185518070891758909c9f839cf4ca393ee977ac378609f700f60a771a2dfe321
Status: Downloaded newer image for alpine:latest
docker.io/library/alpine:latest
pi@pi01:~ $ docker run -it -d alpine
e06eaba12e82d9266aa3dacd8eb0484e385f810e08806f0bec282bc3b5122a85
pi@pi01:~ $ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e06eaba12e82 alpine "/bin/sh" 10 seconds ago Up 2 seconds relaxed_cerf
pi@pi01:~ $ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e06eaba12e82 alpine "/bin/sh" 12 seconds ago Up 5 seconds relaxed_cerf
ba8d858d36f0 aarch64/ubuntu "/bin/bash" 3 minutes ago Exited (159) 2 minutes ago nifty_sutherland
08346701599a arm64v8/ubuntu "/bin/bash" 28 minutes ago Exited (159) 28 minutes ago jolly_fermi
ccda6c95f81d arm64v8/alpine "/bin/sh" 2 hours ago Exited (159) 2 hours ago nifty_gates
pi@pi01:~ $ docker exec -it e06eaba12e82 /bin/sh
/ # cat /etc/os-release
NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.12.0
PRETTY_NAME="Alpine Linux v3.12"
HOME_URL="https://alpinelinux.org/"
BUG_REPORT_URL="https://bugs.alpinelinux.org/"
/ # uname -a
Linux e06eaba12e82 5.4.44-v8+ #1320 SMP PREEMPT Wed Jun 3 16:20:05 BST 2020 aarch64 Linux
/ # exit
pi@pi01:~ $

However, due to a bunch of other instabilities, I downgraded the system from the Pi4 64-bit Raspbian kernel to Raspberry Pi OS (32-bit) with desktop 2020-05-27. And it's interesting that the image from arm32v6/alpine is exactly the same as the image pulled directly as alpine, namely image ID 3ddac682c5b6.

By the way, please refer to my posts:

- https://github.com/alpinelinux/docker-alpine/issues/92
- https://www.raspberrypi.org/forums/viewtopic.php?f=29&t=250730&p=1677106#p1677005
- https://www.raspberrypi.org/forums/viewtopic.php?f=29&t=250730&p=1677106#p1677106

pi0X ~ Docker Hub arm32v6 is tested.

Let's take pi02 as an example.

pi@pi02:~ $ docker pull arm32v6/alpine
Using default tag: latest
latest: Pulling from arm32v6/alpine
Digest: sha256:71465c7d45a086a2181ce33bb47f7eaef5c233eace65704da0c5e5454a79cee5
Status: Image is up to date for arm32v6/alpine:latest
docker.io/arm32v6/alpine:latest
pi@pi02:~ $ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
arm32v6/alpine latest 3ddac682c5b6 3 weeks ago 4.77MB
pi@pi02:~ $ docker run -it -d arm32v6/alpine
66b813b4c07426294e01f3c7b2ff03f6608f6be10f8f7bf9fde581f605e6a5bc
pi@pi02:~ $ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
66b813b4c074 arm32v6/alpine "/bin/sh" 10 seconds ago Up 5 seconds intelligent_gates
pi@pi02:~ $ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
66b813b4c074 arm32v6/alpine "/bin/sh" 12 seconds ago Up 8 seconds intelligent_gates
pi@pi02:~ $ docker exec -it 66b813b4c074 /bin/sh
/ # cat /etc/os-release
NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.12.0
PRETTY_NAME="Alpine Linux v3.12"
HOME_URL="https://alpinelinux.org/"
BUG_REPORT_URL="https://bugs.alpinelinux.org/"
/ # uname -a
Linux 66b813b4c074 4.19.118-v7l+ #1311 SMP Mon Apr 27 14:26:42 BST 2020 armv7l Linux
/ # exit
pi@pi02:~ $

, where X = 1, 2, 3, or 4.

Now, we can see docker0 is UP. There is also one additional interface vethc6e4f97, which is the host side of the virtual Ethernet (veth) pair connected to eth0 inside the running arm32v6/alpine container.

pi@pi02:~ $ ip -c address
...
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:0e:a2:30:0d brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:eff:fea2:300d/64 scope link
valid_lft forever preferred_lft forever
6: vethc6e4f97@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether 3e:cf:32:5f:92:e8 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 169.254.108.166/16 brd 169.254.255.255 scope global noprefixroute vethc6e4f97
valid_lft forever preferred_lft forever
inet6 fe80::f8c1:6d4c:aa67:11af/64 scope link
valid_lft forever preferred_lft forever
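
A quick way to confirm that vethc6e4f97 really is the host-side peer of the container's eth0 is to compare interface indices (a sketch, reusing the container ID 66b813b4c074 from above):

```bash
# Index of the peer interface recorded on the host side of the veth pair
cat /sys/class/net/vethc6e4f97/iflink
# Index of eth0 inside the container; the two numbers should match
docker exec 66b813b4c074 cat /sys/class/net/eth0/ifindex
```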

3.3.2 Docker Login & Docker Commit

In order to avoid registry issues, we can run docker login, which also automatically generates the configuration file /home/pi/.docker/config.json.

pi@pi01:~ $ docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: jiapei100
Password:
WARNING! Your password will be stored unencrypted in /home/pi/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
pi@pi01:~ $ docker commit 561025c3974e
sha256:4fb0183bf76ad8dd0e3620e90b162a58c7bebb135e0081454cee4b2ae40d6881
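
To actually make use of the committed image, one would typically give it a name and push it to Docker Hub under the account used in docker login. A sketch (the repository name alpine-custom is hypothetical):

```bash
# Name the committed image (4fb0183bf76a is the short ID returned by `docker commit` above)
docker tag 4fb0183bf76a jiapei100/alpine-custom:latest
# Push it to Docker Hub using the credentials stored by `docker login`
docker push jiapei100/alpine-custom:latest
```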

3.4 Docker Info

pi@pi01:~ $ docker info
Client:
Debug Mode: false

Server:
Containers: 1
Running: 1
Paused: 0
Stopped: 0
Images: 1
Server Version: 19.03.11
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 7ad184331fa3e55e52b890ea95e65ba581ae3429
runc version: dc9208a3303feef5b3839f4323d9beb36df0a9dd
init version: fec3683
Security Options:
seccomp
Profile: default
Kernel Version: 4.19.118-v7l+
Operating System: Raspbian GNU/Linux 10 (buster)
OSType: linux
Architecture: armv7l
CPUs: 4
Total Memory: 7.729GiB
Name: pi01
ID: ONE2:WOIF:K6MY:KO77:2NMA:F3OC:S3UH:3ZV7:GTFE:APXH:DST2:RX2C
Docker Root Dir: /var/lib/docker
Debug Mode: false
Username: jiapei100
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false

WARNING: No swap limit support
WARNING: No cpu cfs quota support
WARNING: No cpu cfs period support

4. Docker Swarm

From the output of docker info above, you can clearly see Swarm: inactive. Docker now ships with a built-in Swarm mode. Now, let's begin playing with Docker Swarm.

4.1 Docker Swarm Initialization on Master

If you've got both wired and Wi-Fi connections enabled at the same time, and you don't specify which one to use when initializing your Docker swarm, you'll get the following ERROR message.

pi@pi01:~ $ docker swarm init
Error response from daemon: could not choose an IP address to advertise since this system has multiple addresses on different interfaces (192.168.1.253 on eth0 and 192.168.1.252 on wlan0) - specify one with --advertise-addr

By specifying the argument --advertise-addr, docker swarm can be successfully initialized as:

pi@pi01:~ $ docker swarm init --advertise-addr 192.168.1.253
Swarm initialized: current node (q7w8gwf9z6keb86rltvqx0f7z) is now a manager.

To add a worker to this swarm, run the following command:

docker swarm join --token SWMTKN-1-5ivkmmwk8kfw92gs0emkmgzg78d4cwitqmi827ghikpajzvrjt-6i326au3t4ka3g1wa74n4rhe4 192.168.1.253:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

4.2 Docker Swarm Initialization on Worker

On each worker, just run the following command to join the Docker swarm created by the master.

pi@pi0X:~ $ docker swarm join --token SWMTKN-1-5ivkmmwk8kfw92gs0emkmgzg78d4cwitqmi827ghikpajzvrjt-6i326au3t4ka3g1wa74n4rhe4 192.168.1.253:2377
This node joined a swarm as a worker.

, where X = 2, 3, or 4.

4.3 List Docker Nodes on the Leader

Come back to pi01, namely the Docker swarm master (also the leader), and list all nodes:

pi@pi01:~ $ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
q7w8gwf9z6keb86rltvqx0f7z * pi01 Ready Active Leader 19.03.11
sdiwqlawhfjobfsymmp38xivn pi02 Ready Active 19.03.11
97h11ixu7nvce4p64td2o5one pi03 Ready Active 19.03.11
tzrqp2fcj4yht20j84r83s5ps pi04 Ready Active 19.03.11

In our test, we ONLY designate a single manager, which is also the leader. You can of course designate additional managers by using the command docker node promote.
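
For example, promoting one of the workers to a manager would look roughly like this (a sketch, not something I ran in this post):

```bash
# Promote worker pi02 to a manager; it will then show up as "Reachable" under MANAGER STATUS
docker node promote pi02
docker node ls
# Revert it back to a plain worker
docker node demote pi02
```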

4.4 Swarm Mode Tests

Here, we run all of Alex Ellis' Swarm mode tests on ARM.

4.4.1 Scenario 1 - Replicate Service

Create the service on the leader with 4 replicas, spread across the master and the 3 workers.

pi@pi01:~ $ docker service create --name ping1 --replicas=4 alexellis2/arm-pingcurl ping google.com
63n31w2jyr6uuqg2670r985h6
overall progress: 4 out of 4 tasks
1/4: running [==================================================>]
2/4: running [==================================================>]
3/4: running [==================================================>]
4/4: running [==================================================>]
verify: Service converged

Then, we can list the created docker service on the leader.

pi@pi01:~ $ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
63n31w2jyr6u ping1 replicated 4/4 alexellis2/arm-pingcurl:latest
pi@pi01:~ $ docker service ps ping1
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
xqjxus8ayetx ping1.1 alexellis2/arm-pingcurl:latest pi02 Running Running 2 hours ago
9lemwgls6hp6 ping1.2 alexellis2/arm-pingcurl:latest pi03 Running Running 2 hours ago
b2y2muif9h2x ping1.3 alexellis2/arm-pingcurl:latest pi04 Running Running 2 hours ago
2uqbcrn0j7et ping1.4 alexellis2/arm-pingcurl:latest pi01 Running Running 2 hours ago

4.4.1.1 On Master

pi@pi01:~ $ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7d72a046d2f4 alexellis2/arm-pingcurl:latest "ping google.com" 41 minutes ago Up 41 minutes ping1.4.2uqbcrn0j7ettnqndxfq6s44b
71ef4e0107a7 arm32v6/alpine "/bin/sh" 9 hours ago Up 9 hours great_saha
pi@pi01:~ $ docker logs 7d72a046d2f4 | tail -n 10
64 bytes from sea15s12-in-f206.1e100.net (172.217.3.206): icmp_seq=42461 ttl=119 time=5.98 ms
64 bytes from sea15s12-in-f14.1e100.net (172.217.3.206): icmp_seq=42462 ttl=119 time=5.53 ms
64 bytes from sea15s12-in-f14.1e100.net (172.217.3.206): icmp_seq=42463 ttl=119 time=5.69 ms
64 bytes from sea15s12-in-f206.1e100.net (172.217.3.206): icmp_seq=42464 ttl=119 time=5.98 ms
64 bytes from sea15s12-in-f206.1e100.net (172.217.3.206): icmp_seq=42465 ttl=119 time=5.97 ms
64 bytes from sea15s12-in-f206.1e100.net (172.217.3.206): icmp_seq=42466 ttl=119 time=5.65 ms
64 bytes from sea15s12-in-f14.1e100.net (172.217.3.206): icmp_seq=42467 ttl=119 time=6.22 ms
64 bytes from sea15s12-in-f206.1e100.net (172.217.3.206): icmp_seq=42468 ttl=119 time=6.33 ms
64 bytes from sea15s12-in-f14.1e100.net (172.217.3.206): icmp_seq=42469 ttl=119 time=9.93 ms
64 bytes from sea15s12-in-f206.1e100.net (172.217.3.206): icmp_seq=42470 ttl=119 time=5.82 ms

4.4.1.2 On Worker

This time, let's take pi04 as our example:

pi@pi04:~/Downloads $ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2520e1ea0ea8 alexellis2/arm-pingcurl:latest "ping google.com" 44 minutes ago Up 44 minutes ping1.3.b2y2muif9h2xa847rlibcv5nz
746386fce983 arm32v6/alpine "/bin/sh" 11 days ago Up 9 hours serene_chebyshev
pi@pi04:~ $ docker logs 2520e1ea0ea8 | tail -n 10
64 bytes from sea15s12-in-f14.1e100.net (172.217.3.206): icmp_seq=42491 ttl=119 time=7.22 ms
64 bytes from sea15s12-in-f14.1e100.net (172.217.3.206): icmp_seq=42492 ttl=119 time=9.10 ms
64 bytes from sea15s12-in-f206.1e100.net (172.217.3.206): icmp_seq=42493 ttl=119 time=6.97 ms
64 bytes from sea15s12-in-f14.1e100.net (172.217.3.206): icmp_seq=42494 ttl=119 time=6.76 ms
64 bytes from sea15s12-in-f206.1e100.net (172.217.3.206): icmp_seq=42495 ttl=119 time=6.74 ms
64 bytes from sea15s12-in-f206.1e100.net (172.217.3.206): icmp_seq=42496 ttl=119 time=7.03 ms
64 bytes from sea15s12-in-f206.1e100.net (172.217.3.206): icmp_seq=42497 ttl=119 time=7.06 ms
64 bytes from sea15s12-in-f206.1e100.net (172.217.3.206): icmp_seq=42498 ttl=119 time=6.17 ms
64 bytes from sea15s12-in-f14.1e100.net (172.217.3.206): icmp_seq=42499 ttl=119 time=6.74 ms
64 bytes from sea15s12-in-f206.1e100.net (172.217.3.206): icmp_seq=42500 ttl=119 time=6.23 ms

4.4.2 Scenario 2 - Replicate Webservice

In this example, we will create only 2 replicas, just for fun.

pi@pi01:~ $ docker service create --name hello1 --publish 3000:3000 --replicas=2 alexellis2/arm-alpinehello
pe19as7evzmgi5qiq36z2pk63
overall progress: 2 out of 2 tasks
1/2: running [==================================================>]
2/2: running [==================================================>]
verify: Service converged

Again, let's list our created services.

pi@pi01:~ $ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
pe19as7evzmg hello1 replicated 2/2 alexellis2/arm-alpinehello:latest *:3000->3000/tcp
63n31w2jyr6u ping1 replicated 4/4 alexellis2/arm-pingcurl:latest
pi@pi01:~ $ docker service ps hello1
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
z9k8a10hkxqv hello1.1 alexellis2/arm-alpinehello:latest pi02 Running Running about an hour ago
7xuhpuru0r5u hello1.2 alexellis2/arm-alpinehello:latest pi01 Running Running about an hour ago

4.4.2.1 On Master and Worker pi02

On master pi01 and worker pi02, there are 3 local Docker images:

pi@pi02:~ $ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
arm32v6/alpine latest 3ddac682c5b6 3 weeks ago 4.77MB
alexellis2/arm-pingcurl <none> 86a225e3a07b 3 years ago 6.19MB
alexellis2/arm-alpinehello <none> 2ad6d1f9b6ae 3 years ago 33.8MB

, and 3 running Docker containers:

pi@pi02:~ $ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
918e211b5784 alexellis2/arm-alpinehello:latest "npm start" 22 minutes ago Up 22 minutes 3000/tcp hello1.1.z9k8a10hkxqvsovuu6omw4axy
a92acd03471a alexellis2/arm-pingcurl:latest "ping google.com" About an hour ago Up About an hour ping1.1.xqjxus8ayetxiip2cmw4fm54x
66b813b4c074 arm32v6/alpine "/bin/sh" 11 hours ago Up 11 hours intelligent_gates

4.4.2.2 On Worker pi03 and Worker pi04

On worker pi03 and worker pi04, there are only 2 local Docker images:

pi@pi03:~ $ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
arm32v6/alpine latest 3ddac682c5b6 3 weeks ago 4.77MB
alexellis2/arm-pingcurl <none> 86a225e3a07b 3 years ago 6.19MB

, and 2 running Docker containers:

pi@pi03:~ $ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5670e6944779 alexellis2/arm-pingcurl:latest "ping google.com" About an hour ago Up About an hour ping1.2.9lemwgls6hp6wnpyg14aeyeva
f3d1670dfc7a arm32v6/alpine "/bin/sh" 11 days ago Up 9 hours infallible_cerf

4.4.2.3 Test The Webservice

Finally, let's test the webservice:

pi@pi01:~ $ curl -4 localhost:3000
Hellopi@pi01:~ $ curl -4 localhost:3000
Hellopi@pi01:~ $ curl -4 localhost:3000
Hellopi@pi01:~ $ curl -4 localhost:3000
Hellopi@pi01:~ $

Since the service alexellis2/arm-alpinehello is only running on master pi01 and worker pi02, let's just check the respective Docker logs:

pi@pi01:~ $ docker logs b2c313ac8200

> hello@1.0.0 start /root
> node index.js

I'm listening.
pi@pi02:~ $ docker logs 918e211b5784

> hello@1.0.0 start /root
> node index.js

I'm listening.

4.4.3 Scenario 3 - Inter-Container Communication

We first create a distributed overlay network named armnet.

pi@pi01:~ $ docker network create --driver overlay --subnet 20.0.14.0/24 armnet
qzcwsh0y0fx22sw7088luxybr

Afterwards, a redis database service with a single replica is created from the leader pi01; the swarm scheduler decides which single node it actually runs on.

pi@pi01:~ $ docker service create --replicas=1 --network=armnet --name redis alexellis2/redis-arm:v6
kmk663ad1t88nfjf567tkctmr
overall progress: 1 out of 1 tasks
1/1: running [==================================================>]
verify: Service converged

Finally, 2 replicas of the Docker service counter are created in the newly created overlay network armnet, listening on port 3333.

pi@pi01:~ $ docker service create --name counter --replicas=2 --network=armnet --publish 3333:3333 alexellis2/arm_redis_counter
ivdubq3vzdil3r2i5z0r1d2va
overall progress: 2 out of 2 tasks
1/2: running [==================================================>]
2/2: running [==================================================>]
verify: Service converged

Now, let's list all the created Docker services on the leader.

pi@pi01:~ $ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
ivdubq3vzdil counter replicated 2/2 alexellis2/arm_redis_counter:latest *:3333->3333/tcp
pe19as7evzmg hello1 replicated 2/2 alexellis2/arm-alpinehello:latest *:3000->3000/tcp
63n31w2jyr6u ping1 replicated 4/4 alexellis2/arm-pingcurl:latest
kmk663ad1t88 redis replicated 1/1 alexellis2/redis-arm:v6
pi@pi01:~ $ docker service ps counter
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
ip6z9yacmjbz counter.1 alexellis2/arm_redis_counter:latest pi04 Running Running about an hour ago
nzhx6q1vugm9 counter.2 alexellis2/arm_redis_counter:latest pi02 Running Running about an hour ago
pi@pi01:~ $ docker service ps redis
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
z1tqve759bh6 redis.1 alexellis2/redis-arm:v6 pi03 Running Running 2 hours ago

It's interesting that this time:

- the 2 replicas of service counter are automatically created on nodes pi02 and pi04
- the unique replica of service redis is allocated to node pi03

I've got no idea how the created services are distributed across the different nodes. The ONLY thing I can do is run docker ps on each node and show the distribution results:

  • pi01

    pi@pi01:~ $ docker ps
    CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
    b2c313ac8200 alexellis2/arm-alpinehello:latest "npm start" 3 hours ago Up 3 hours 3000/tcp hello1.2.7xuhpuru0r5ujkyrp4g558ehu
    7d72a046d2f4 alexellis2/arm-pingcurl:latest "ping google.com" 3 hours ago Up 3 hours ping1.4.2uqbcrn0j7ettnqndxfq6s44b
    71ef4e0107a7 arm32v6/alpine "/bin/sh" 11 hours ago Up 11 hours great_saha

  • pi02

    pi@pi02:~ $ docker ps
    CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
    da2452b474af alexellis2/arm_redis_counter:latest "node ./app.js" 2 hours ago Up 2 hours 3000/tcp counter.2.nzhx6q1vugm9w7ouqyots5x43
    918e211b5784 alexellis2/arm-alpinehello:latest "npm start" 3 hours ago Up 3 hours 3000/tcp hello1.1.z9k8a10hkxqvsovuu6omw4axy
    a92acd03471a alexellis2/arm-pingcurl:latest "ping google.com" 3 hours ago Up 3 hours ping1.1.xqjxus8ayetxiip2cmw4fm54x
    66b813b4c074 arm32v6/alpine "/bin/sh" 14 hours ago Up 14 hours intelligent_gates

  • pi03

    pi@pi03:~ $ docker ps
    CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
    5fd9a0b8dd08 alexellis2/redis-arm:v6 "redis-server" 2 hours ago Up 2 hours 6379/tcp redis.1.z1tqve759bh6hdw2udnk6hh7t
    5670e6944779 alexellis2/arm-pingcurl:latest "ping google.com" 3 hours ago Up 3 hours ping1.2.9lemwgls6hp6wnpyg14aeyeva
    f3d1670dfc7a arm32v6/alpine "/bin/sh" 12 days ago Up 11 hours infallible_cerf

  • pi04

    pi@pi04:~ $ docker ps
    CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
    aca52fdaf2d4 alexellis2/arm_redis_counter:latest "node ./app.js" 2 hours ago Up 2 hours 3000/tcp counter.1.ip6z9yacmjbzjaeax45tuh2oa
    2520e1ea0ea8 alexellis2/arm-pingcurl:latest "ping google.com" 3 hours ago Up 3 hours ping1.3.b2y2muif9h2xa847rlibcv5nz
    746386fce983 arm32v6/alpine "/bin/sh" 12 days ago Up 11 hours serene_chebyshev

Clearly,

- service alexellis2/arm_redis_counter is running on both nodes pi02 and pi04, but not on nodes pi01 or pi03
- service alexellis2/arm-alpinehello is running on both nodes pi01 and pi02, but not on nodes pi03 or pi04
- service alexellis2/arm-pingcurl is running on all 4 nodes pi01, pi02, pi03, and pi04

So far, curl localhost:3333/incr is NOT working. Why?

What's more:

- without touching anything for a whole night, service alexellis2/arm_redis_counter seems to have automatically shut down multiple times with the ERROR: task: non-zero exit (1).
- alexellis2/arm_redis_counter is now running on master pi01 and worker pi04, instead of the original workers pi02 and pi04.
- it's interesting that alexellis2/redis-arm is still running on node pi03.
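
I haven't figured out the root cause yet. If you hit something similar, a reasonable first step is to inspect the failing tasks and the overlay network from the leader, roughly like this (a sketch; the actual output will of course differ):

```bash
# Show the full task history (including error messages) for the counter service
docker service ps counter --no-trunc
# Aggregate the logs of all counter replicas on the manager
docker service logs counter
# Check that both counter and redis are attached to the armnet overlay network
docker network inspect armnet
```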

4.5 Docker Swarm Visualizer

Our final topic of this blog is the Docker Swarm Visualizer. The official Docker documentation provides the details of the Docker Swarm Visualizer, and the Docker Swarm Visualizer source code is also available on GitHub.

4.5.1 On Leader

pi@pi01:~ $ docker run -it -d -p 8080:8080 -v /var/run/docker.sock:/var/run/docker.sock dockersamples/visualizer
Unable to find image 'dockersamples/visualizer:latest' locally
latest: Pulling from dockersamples/visualizer
cd784148e348: Pull complete
f6268ae5d1d7: Pull complete
97eb9028b14b: Pull complete
9975a7a2a3d1: Pull complete
ba903e5e6801: Pull complete
7f034edb1086: Pull complete
cd5dbf77b483: Pull complete
5e7311667ddb: Pull complete
687c1072bfcb: Pull complete
aa18e5d3472c: Pull complete
a3da1957bd6b: Pull complete
e42dbf1c67c4: Pull complete
5a18b01011d2: Pull complete
Digest: sha256:54d65cbcbff52ee7d789cd285fbe68f07a46e3419c8fcded437af4c616915c85
Status: Downloaded newer image for dockersamples/visualizer:latest
910e83cd7ba4de02be370f4b0280d7f7fecb9d80ed8578f3238f68beab28aac7
pi@pi01:~ $ docker service create \
> --name=viz \
> --publish=8080:8080/tcp \
> --constraint=node.role==manager \
> --mount=type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
> alexellis2/visualizer-arm:latest
61tr8x3conmamkekzp6cqro6a
overall progress: 1 out of 1 tasks
1/1: running [==================================================>]
verify: Service converged

4.5.2 On Remote Laptop

Now, let's visualize our Docker swarm beautifully from the laptop by entering the leader's IP address and port 8080 (here, 192.168.1.253:8080) in a browser:

Docker Swarm Visualizer on My Raspberry Pi Cluster
