A Cluster of Raspberry Pis (3) - k3s
In this blog, we're going to explore Kubernetes. But first, there is a VERY FIRST question to be answered: what is the relationship between Kubernetes and Docker? Let's start today's journey:
1. Kubernetes
Kubernetes (commonly stylized as k8s) is an open-source container-orchestration system for automating application deployment, scaling, and management.
(cited from Wikipedia)
On Raspberry Pi, a lightweight variant of Kubernetes is normally preferred. A variety of choices are available:
Packages | Description |
---|---|
MicroK8s | MicroK8s is only available for 64-bit Ubuntu images. (Cited from How to build a Raspberry Pi Kubernetes cluster using MicroK8s: Setting up each Pi) |
k3s, k3d | |
Minikube | |
kind | |
Kubeadm | |
Minikube vs. kind vs. k3s - What should I use? elaborates the differences among Minikube, kind and k3s. Its final table is cited as follows:
 | minikube | kind | k3s |
---|---|---|---|
runtime | VM | container | native |
supported architectures | AMD64 | AMD64 | AMD64, ARMv7, ARM64 |
supported container runtimes | Docker, CRI-O, containerd, gVisor | Docker | Docker, containerd |
startup time: initial/following | 5:19 / 3:15 | 2:48 / 1:06 | 0:15 / 0:15 |
memory requirements | 2GB | 8GB (Windows, MacOS) | 512 MB |
requires root? | no | no | yes (rootless is experimental) |
multi-cluster support | yes | yes | no (can be achieved using containers) |
multi-node support | no | yes | yes |
project page | minikube | kind | k3s |
Here in my case, I'm going to use k3s to manage and monitor the cluster. The following 2 blogs come strongly recommended from me:
- Run Kubernetes on a Raspberry Pi with k3s
- Kubernetes 1.18 broke “kubectl run”, here’s what to do about it
2. Preparation
Let's take a look at the IP info of ALL 4 Raspberry Pis, taking pi04 as the example this time. pi01, pi02 and pi03 have very similar IP info to pi04.
1 | 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 |
As mentioned in A Cluster of Raspberry Pis (1) - Configuration, pi04 is an old Raspberry Pi 3 Model B Rev 1.2 1GB, which unfortunately has a broken Wifi interface wlan0. Therefore, I've had to insert a Wifi dongle in order to get Wifi up on wlan1.
3. k3s Installation and Configuration
3.1 k3s Installation on Master Node pi01
1 | pi@pi01:~ $ curl -sfL https://get.k3s.io | sh - |
If we take a look at IP info, one additional flannel.1 interface is added as follows:
1 | 4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default |
3.2 k3s Installation on Worker Nodes pi02, pi03, pi04
Before moving forward, we need to write down the node token on the master node, which will be used when the other worker nodes join the cluster.
1 | pi@pi01:~ $ sudo cat /var/lib/rancher/k3s/server/node-token |
1 | pi@pi0X:~ $ curl -sfL https://get.k3s.io | K3S_URL=https://192.168.1.253:6443 \ |
where X = 2, 3, or 4.
3.3 Take a Look on pi01
1 | pi@pi01:~ $ sudo kubectl get nodes |
3.4 Access Raspberry Pi Cluster from PC
We can further configure our PC to be able to access the Raspberry Pi Cluster. For details, please refer to Run Kubernetes on a Raspberry Pi with k3s. On my laptop, I can do:
1 | ✔ kubectl get nodes |
We can also assign each node a role name with the following command:
1 | ✔ kubectl label nodes pi0X kubernetes.io/role=worker |
where X = 2, 3, or 4.
Let's take a look at all nodes again:
1 | 64 ✔ kubectl get nodes |
4. Create Deployment
1 | 73 ✔ kubectl create deployment nginx-sample --image=nginx |
After a while, nginx-sample will be successfully deployed.
1 | 79 ✔ kubectl get deployments |
Now, let's expose this service and take a look from the browser:
1 | 82 ✔ kubectl expose deployment nginx-sample --type="NodePort" --port 80 |
A Cluster of Raspberry Pis (2) - docker & docker swarm
1. Container VS. Virtual Machine
Briefly refer to: What’s the Diff: VMs vs Containers and Bare Metal Servers, Virtual Servers, and Containerization.
2. Docker VS. VMWare
Briefly refer to: Docker vs VMWare: How Do They Stack Up?.
For me, I used to use VirtualBox (a virtual machine) a lot, but have now started using Docker (the leading container platform).
3. Install Docker on Raspberry Pi
I heavily refer to Alex Ellis' blogs: - Get Started with Docker on Raspberry Pi - Hands-on Docker for Raspberry Pi - 5 things about Docker on Raspberry Pi - Live Deep Dive - Docker Swarm Mode on the Pi
3.1 Modify /boot/config.txt
Hostname | /boot/config.txt |
---|---|
pi01 | |
pi02 | |
pi03 | |
pi04 | |
3.2 Docker Installation
3.2.1 Installation
Installing Docker takes ONLY one single command:
curl -sSL https://get.docker.com | sh
1 | pi@pi01:~ $ curl -sSL https://get.docker.com | sh |
Let's take a look at the IP info of pi01.
1 | pi@pi01:~ $ ip -c address |
Clearly, there is one more interface docker0 added into pi01's IP info, but it's now DOWN.
3.2.2 Uninstall
By the way, uninstalling Docker requires 2 commands:
sudo apt remove docker-ce
sudo ip link delete docker0
Besides having Docker uninstalled, you may have to manually remove the docker0 IP link.
3.2.3 Enable/Start Docker
Then, let's enable and start the docker service, and add user pi into group docker so that docker can be run without sudo.
sudo systemctl enable docker
sudo systemctl start docker
sudo usermod -aG docker pi
1 | pi@pi01:~ $ sudo systemctl enable docker |
Afterwards, reboot all Raspberry Pis.
Now, let's check the installed Docker version:
1 | pi@pi0X:~ $ docker version |
where X = 1, 2, 3, or 4.
3.3 Test ARM Image Pulled from Docker Hub
3.3.1 Docker Pull, Docker Run, Docker Exec
Given the respective architectures of these 4 Raspberry Pis, I tried the Docker Hub image arm64v8/alpine for pi01, and the Docker Hub image arm32v6/alpine for pi02, pi03 and pi04. However, it finally turned out that the Pi4 64-bit raspbian kernel is NOT quite stable yet.
pi01 ~ Docker Hub arm64v8 is NOT stable yet.
1 | pi@pi01:~ $ docker pull arm64v8/alpine |
It looks like Docker Hub images from the repository arm64v8 are NOT able to work stably on my Raspberry Pi 4 Model B Rev 1.4 8GB with the Pi4 64-bit raspbian kernel. To further demonstrate this, I also tested some other images from repository arm64v8 and even the deprecated aarch64, as follows:
1 | pi@pi01:~ $ docker images |
Well, I can pull alpine instead of arm64v8/alpine and keep it UP though.
1 | pi@pi01:~ $ docker pull alpine |
However, due to a bunch of other instabilities, I downgraded the system from the Pi4 64-bit raspbian kernel to Raspberry Pi OS (32-bit) with desktop 2020-05-27. And it's interesting that the image from arm32v6/alpine is exactly the same as the image pulled directly as alpine, namely image ID 3ddac682c5b6.
By the way, please refer to my posts:
- https://github.com/alpinelinux/docker-alpine/issues/92
- https://www.raspberrypi.org/forums/viewtopic.php?f=29&t=250730&p=1677106#p1677005
- https://www.raspberrypi.org/forums/viewtopic.php?f=29&t=250730&p=1677106#p1677106
pi0X ~ Docker Hub arm32v6 is tested.
Let's take pi02 as an example.
1 | pi@pi02:~ $ docker pull arm32v6/alpine |
where X = 1, 2, 3, or 4.
Now, we can see docker0 is UP. There is also an additional interface vethc6e4f97, the host end of the veth pair whose peer is eth0 inside the running arm32v6/alpine container.
1 | pi@pi02:~ $ ip -c address |
3.3.2 Docker Login & Docker Commit
In order to avoid issues with the registry, we can run docker login, so that a configuration file /home/pi/.docker/config.json will also be automatically generated.
1 | pi@pi01:~ $ docker login |
3.4 Docker Info
1 | pi@pi01:~ $ docker info |
4. Docker Swarm
From the above command docker info, you can see clearly that Swarm: inactive. Docker now has a Docker Swarm Mode. Now, let's begin playing with Docker Swarm.
4.1 Docker Swarm Initialization on Master
If you've got both Wired and Wifi enabled at the same time, without specifying which interface you are going to use for initializing your docker swarm, you'll meet the following ERROR message.
1 | pi@pi01:~ $ docker swarm init |
By specifying the argument --advertise-addr, docker swarm can be successfully initialized:
1 | pi@pi01:~ $ docker swarm init --advertise-addr 192.168.1.253 |
4.2 Docker Swarm Initialization on Worker
On each worker, just run the following command to join the docker swarm created by the master.
1 | pi@pi0X:~ $ docker swarm join --token SWMTKN-1-5ivkmmwk8kfw92gs0emkmgzg78d4cwitqmi827ghikpajzvrjt-6i326au3t4ka3g1wa74n4rhe4 192.168.1.253:2377 |
where X = 2, 3, or 4.
4.3 List Docker Nodes on Leader
Come back to pi01, namely the docker swarm master, which is also the leader, and list all nodes:
1 | pi@pi01:~ $ docker node ls |
In our test, we ONLY designate a single manager, which therefore acts as the leader. You can of course promote additional managers by using the command docker node promote.
4.4 Swarmmode Test
Here, we test all Alex Ellis' Swarmmode tests on ARM.
4.4.1 Scenario 1 - Replicate Service
Create the service on the leader with 4 replicas spread across the master and the 3 workers.
1 | pi@pi01:~ $ docker service create --name ping1 --replicas=4 alexellis2/arm-pingcurl ping google.com |
Then, we can list the created docker service on the leader.
1 | pi@pi01:~ $ docker service ls |
4.4.1.1 On Master
1 | pi@pi01:~ $ docker ps |
4.4.1.2 On Worker
This time, let's take pi04 as our example:
1 | pi@pi04:~/Downloads $ docker ps |
4.4.2 Scenario 2 - Replicate Webservice
In this example, we will create 2 replicas only for fun.
1 | pi@pi01:~ $ docker service create --name hello1 --publish 3000:3000 --replicas=2 alexellis2/arm-alpinehello |
Again, let's list our created services.
1 | pi@pi01:~ $ docker service ls |
4.4.2.1 On Master and Worker pi02
On master pi01 and worker pi02, there are 3 relevant docker images:
1 | pi@pi02:~ $ docker images |
and 3 running docker containers:
1 | pi@pi02:~ $ docker ps |
4.4.2.2 On Worker pi03 and Worker pi04
On worker pi03 and worker pi04, there are only 2 relevant docker images:
1 | pi@pi03:~ $ docker images |
and 3 running docker containers:
1 | pi@pi03:~ $ docker ps |
4.4.2.3 Test The Webservice
Finally, let's test the webservice:
1 | pi@pi01:~ $ curl -4 localhost:3000 |
Since service alexellis2/arm-alpinehello is only running on master pi01 and worker pi02, let's just check the respective docker logs:
1 | pi@pi01:~ $ docker logs b2c313ac8200 |
1 | pi@pi02:~ $ docker logs 918e211b5784 |
4.4.3 Scenario 3 - Inter-Container Communication
We first create a distributed subnetwork named armnet.
1 | pi@pi01:~ $ docker network create --driver overlay --subnet 20.0.14.0/24 armnet |
Afterwards, a redis database service is created (from the leader pi01) with a single replica, so it runs on a single node.
1 | pi@pi01:~ $ docker service create --replicas=1 --network=armnet --name redis alexellis2/redis-arm:v6 |
Finally, 2 replicas of the docker service counter are created in this subnetwork armnet, listening on port 3333.
1 | pi@pi01:~ $ docker service create --name counter --replicas=2 --network=armnet --publish 3333:3333 alexellis2/arm_redis_counter |
Now, let's list all created docker service on the leader.
1 | pi@pi01:~ $ docker service ls |
It's interesting that this time:
- the 2 replicas of service counter are automatically created on nodes pi02 and pi04
- the unique replica of service redis is allocated to node pi03
I've got no idea how the created services are distributed to different nodes. The ONLY thing I can do is to run docker ps and show the distribution results:
pi01

```
pi@pi01:~ $ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b2c313ac8200 alexellis2/arm-alpinehello:latest "npm start" 3 hours ago Up 3 hours 3000/tcp hello1.2.7xuhpuru0r5ujkyrp4g558ehu
7d72a046d2f4 alexellis2/arm-pingcurl:latest "ping google.com" 3 hours ago Up 3 hours ping1.4.2uqbcrn0j7ettnqndxfq6s44b
71ef4e0107a7 arm32v6/alpine "/bin/sh" 11 hours ago Up 11 hours great_saha
```

pi02

```
pi@pi02:~ $ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
da2452b474af alexellis2/arm_redis_counter:latest "node ./app.js" 2 hours ago Up 2 hours 3000/tcp counter.2.nzhx6q1vugm9w7ouqyots5x43
918e211b5784 alexellis2/arm-alpinehello:latest "npm start" 3 hours ago Up 3 hours 3000/tcp hello1.1.z9k8a10hkxqvsovuu6omw4axy
a92acd03471a alexellis2/arm-pingcurl:latest "ping google.com" 3 hours ago Up 3 hours ping1.1.xqjxus8ayetxiip2cmw4fm54x
66b813b4c074 arm32v6/alpine "/bin/sh" 14 hours ago Up 14 hours intelligent_gates
```

pi03

```
pi@pi03:~ $ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5fd9a0b8dd08 alexellis2/redis-arm:v6 "redis-server" 2 hours ago Up 2 hours 6379/tcp redis.1.z1tqve759bh6hdw2udnk6hh7t
5670e6944779 alexellis2/arm-pingcurl:latest "ping google.com" 3 hours ago Up 3 hours ping1.2.9lemwgls6hp6wnpyg14aeyeva
f3d1670dfc7a arm32v6/alpine "/bin/sh" 12 days ago Up 11 hours infallible_cerf
```

pi04

```
pi@pi04:~ $ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
aca52fdaf2d4 alexellis2/arm_redis_counter:latest "node ./app.js" 2 hours ago Up 2 hours 3000/tcp counter.1.ip6z9yacmjbzjaeax45tuh2oa
2520e1ea0ea8 alexellis2/arm-pingcurl:latest "ping google.com" 3 hours ago Up 3 hours ping1.3.b2y2muif9h2xa847rlibcv5nz
746386fce983 arm32v6/alpine "/bin/sh" 12 days ago Up 11 hours serene_chebyshev
```
Clearly:
- service alexellis2/arm_redis_counter is running on both nodes pi02 and pi04, but not on nodes pi01 or pi03
- service alexellis2/arm-alpinehello is running on both nodes pi01 and pi02, but not on nodes pi03 or pi04
- service alexellis2/arm-pingcurl is running on all 4 nodes pi01, pi02, pi03, and pi04
So far, curl localhost:3333/incr is NOT running. Why?
What's more:
- without touching anything for a whole night, service alexellis2/arm_redis_counter seems to have automatically shut down with the ERROR task: non-zero exit (1) multiple times.
- alexellis2/arm_redis_counter is now running on nodes master pi01 and worker pi04, instead of the original workers pi02 and pi04.
- it's interesting that alexellis2/redis-arm is still running on node pi03.
4.5 Docker Swarm Visualizer
Our final topic of this blog is the Docker Swarm Visualizer. The official Docker documentation provides the details in Docker Swarm Visualizer, and the Docker Swarm Visualizer Source Code is also available on Github.
4.5.1 On Leader
1 | pi@pi01:~ $ docker run -it -d -p 8080:8080 -v /var/run/docker.sock:/var/run/docker.sock dockersamples/visualizer |
4.5.2 On Remote Laptop
Now, let's visualize our docker swarm beautifully from laptop by entering the leader's IP address at port 8080:
micro:bit
Hmmm... I happened to get this chance of playing with micro:bit for fun, so let me take the opportunity to write a blog. It seems this BBC product is very convenient. Let's just plug and play.
1. Introduction
1.1 lsusb
Just plug in and lsusb.
1 | ...... |
1.2 Hardware Specification
Refer to micro:bit Hardware Specification
1.3 User Guide Overview
Refer to micro:bit User Guide Overview
2. Tutorials and Configurations
- Makecode Tutorials - this so-called Makecode looks very similar to Google Blockly
- BBC micro:bit MicroPython - this seems to be BBC Official tutorial, which adopts Python as the programming language
- UCL’s BBC Micro:bit Tutorials - this is a micro:bit tutorial from University College London, but the ONLY defect is: it seems it's developed under Windows? Please check UCL’s BBC Micro:bit Tutorials - Command Line Interface.
- Lancaster University micro:bit Runtime - this seems to be a little complicated, but adopts my favorite coding language: C/C++
Since I'm NOT a professional Makecode coder, I'll run a couple of simple micro:bit projects using either Python or C/C++.
3. Display Hello World Using micro:bit MicroPython
3.1 Required Configuration on Your Host
Please refer to micro:bit MicroPython Installation
1 | $ sudo add-apt-repository -y ppa:team-gcc-arm-embedded |
and now, let's check yotta.
1 | ✔ pip show yotta |
3.2 Connect micro:bit
Please refer to micro:bit MicroPython Dev Guide REPL, and now let's connect micro:bit using picocom.
1 | 12 ✔ sudo picocom /dev/ttyACM0 -b 115200 ~ |
Oh, my god... why can't I just code directly here from within the console??? That's ABSOLUTELY NOT my style.
3.3 Code & Run
The VERY FIRST demo is ALWAYS displaying Hello World. To fulfill this task, please refer to How do I transfer my code onto the micro:bit via USB.
As mentioned above, the UNACCEPTABLE thing is: it seems we have to use the micro:bit Python IDE for micro:bit python coding????? Anyway, my generated HEX of Hello World is here.
4. Flash Lancaster microbit-samples Using Yotta
Please strictly follow Lancaster University micro:bit Yotta.
1 | 141 ✔ cd microbit-samples/ |
5. Projects Field
Please refer to micro:bit Official Python Projects.
A Cluster of Raspberry Pis (1) - Configuration
Profile | Frontal | Top |
---|---|---|
Today is Sunday. Vancouver is sunny. It's been quite a while since I last wrote anything. It took me a couple of weeks to finally get my tax reported. Hmmm... Anyway, I've finally got some time to talk about a Supercomputer: a cluster of 4 Raspberry Pis. In my case:
- 1 Raspberry Pi 4 Model B Rev 1.4 8GB, installed with Pi4 64-bit raspbian kernel
- 1 Raspberry Pi 4 Model B Rev 1.1 4GB, installed with Raspberry Pi OS (32-bit) with desktop 2020-05-27
- 2 Raspberry Pi 3 Model B Rev 1.2 1GB, installed with Raspberry Pi OS (32-bit) with desktop 2020-05-27
- Docker
- Kubernetes
This is going to be a series of 3 blogs.
1. Compare All Raspberry Pi Variants
Refer to: Comparison of All Raspberry Pi Variants.
2. Four Raspberry Pis
Here, please guarantee the 4 Raspberry Pis are respectively designated with the following hostnames:
- pi01
- pi02
- pi03
- pi04
2.1 pi01: Raspberry Pi 4 Model B Rev 1.4 8GB with Raspberry Pi OS (32-bit) with desktop 2020-05-27
1 | pi@pi01:~ $ hostname |
In fact, at the very beginning, I tried the Pi4 64-bit raspbian kernel, as follows:
1 | pi@pi01:~ $ hostname |
However, there were still quite a lot of issues with the Pi4 64-bit raspbian kernel, so I had to downgrade the system to Raspberry Pi OS (32-bit) with desktop 2020-05-27 in the end.
2.2 pi02: Raspberry Pi 4 Model B Rev 1.1 4GB with Raspberry Pi OS (32-bit) with desktop 2020-05-27
1 | pi@pi02:~ $ hostname |
2.3 pi03: Raspberry Pi 3 Model B Rev 1.2 1GB with Raspberry Pi OS (32-bit) with desktop 2020-05-27
1 | pi@pi03:~ $ hostname |
2.4 pi04: Raspberry Pi 3 Model B Rev 1.2 1GB with Raspberry Pi OS (32-bit) with desktop 2020-05-27
1 | pi@pi04:~ $ hostname |
3. Raspberry Pi Cluster Configuration
This section heavily refers to the following blogs:
- Build a Raspberry Pi cluster computer
- Build your own bare-metal ARM cluster
- Installing MPI for Python on a Raspberry Pi Cluster
- Instructables: How to Make a Raspberry Pi SuperComputer!
Actually, the cluster can certainly be configured however you wish. A typical configuration is 1-master-3-workers, but which one should be the master? Is it really a good idea to ALWAYS designate the MOST powerful one as the master? Particularly in my case, the 4 Raspberry Pis are of different versions, so they have different computing capabilities.
3.1 Configure Hostfile
It's always a good idea to create a hostfile on the master node. However, for the reasons mentioned above, there is NO fixed master among ALL nodes in my case, so I configured the hostfile on ALL 4 Raspberry Pis.
node | hostfile |
---|---|
pi01 | 192.168.1.253 slots=4 192.168.1.251 slots=4 192.168.1.249 slots=4 192.168.1.247 slots=4 |
pi02 | 192.168.1.251 slots=4 192.168.1.253 slots=4 192.168.1.249 slots=4 192.168.1.247 slots=4 |
pi03 | 192.168.1.249 slots=4 192.168.1.253 slots=4 192.168.1.251 slots=4 192.168.1.247 slots=4 |
pi04 | 192.168.1.247 slots=4 192.168.1.253 slots=4 192.168.1.251 slots=4 192.168.1.249 slots=4 |
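The pattern in the table above is simple: every node lists its own IP first, then the other three, each with slots=4. A minimal sketch generating exactly those hostfiles (IPs taken from this post):

```python
# Sketch reproducing the hostfile layout above: own IP first, then the rest.
ips = {
    "pi01": "192.168.1.253",
    "pi02": "192.168.1.251",
    "pi03": "192.168.1.249",
    "pi04": "192.168.1.247",
}

def hostfile_for(node):
    # Own IP leads; the remaining nodes follow in pi01..pi04 order.
    order = [ips[node]] + [ip for name, ip in ips.items() if name != node]
    return "\n".join("%s slots=4" % ip for ip in order)

print(hostfile_for("pi02"))  # first line: 192.168.1.251 slots=4
```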
3.2 SSH-KEYGEN
In order to test multiple nodes across the cluster, we need to generate SSH keys to avoid typing passwords when logging into the other nodes all the time. As such, on each Raspberry Pi, you'll have to generate an SSH key with ssh-keygen -t rsa, and push the generated key onto the other 3 Raspberry Pis using the command ssh-copy-id. Finally, for a cluster of 4 Raspberry Pis, there are 3 authorized keys (one for each of the other 3 Raspberry Pis) stored in the file /home/pi/.ssh/authorized_keys on each of the 4 Raspberry Pis.
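To double-check the combinatorics of that key fan-out, here is a tiny sketch counting the pushes (hostnames from this post; the command strings are illustrative, not executed):

```python
# Each of the 4 Pis pushes its key to the other 3: 12 ssh-copy-id runs total,
# and every node ends up with 3 entries in /home/pi/.ssh/authorized_keys.
hosts = ["pi01", "pi02", "pi03", "pi04"]
pushes = [(src, dst) for src in hosts for dst in hosts if src != dst]
commands = ["ssh-copy-id pi@%s" % dst for _, dst in pushes]
# How many foreign keys each node's authorized_keys file accumulates:
keys_per_node = {h: sum(1 for _, dst in pushes if dst == h) for h in hosts}
print(len(commands), keys_per_node["pi01"])  # 12 3
```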
4. Cluster Test
4.1 Command mpiexec
4.1.1 Argument: -hostfile and -n
1 | pi@pi01:~ $ mpiexec -hostfile hostfile -n 16 hostname |
For a cluster of 4 Raspberry Pis, there are 4*4=16 CPU cores in total. Therefore, the maximum number to specify for the argument -n is 16. Otherwise, you'll meet the following ERROR message:
1 | pi@pi01:~ $ mpiexec -hostfile hostfile -n 20 hostname |
4.1.2 Execute Python Example mpi4py helloworld.py
1 | pi@pi01:~ $ mpiexec -hostfile hostfile -n 16 python Downloads/helloworld.py |
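For context, a minimal mpi4py helloworld.py looks roughly like the sketch below. This is an assumption of what sits in Downloads/ (the exact file isn't listed in this post), and since mpi4py is only meaningful under mpirun/mpiexec, a single-rank fallback is included so the sketch runs anywhere:

```python
# Hedged sketch of a minimal mpi4py hello world; each MPI process prints its
# own rank out of the total number of ranks launched by mpiexec -n.
try:
    from mpi4py import MPI
    rank, size = MPI.COMM_WORLD.rank, MPI.COMM_WORLD.size
except ImportError:
    rank, size = 0, 1  # fallback so the sketch still runs without MPI

print("hello world from process %d of %d" % (rank, size))
```

Run under `mpiexec -hostfile hostfile -n 16`, 16 such lines are printed, one per core across the cluster.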
4.2 mpi4py-examples
Run all examples with the argument --hostfile ~/hostfile, namely, 16 cores in a row.
4.2.1 mpi4py-examples 01-hello-world
1 | pi@pi01:~/Downloads/mpi4py-examples $ mpirun --hostfile ~/hostfile ./01-hello-world |
4.2.2 mpi4py-examples 02-broadcast
1 | pi@pi01:~/Downloads/mpi4py-examples $ mpirun --hostfile ~/hostfile ./02-broadcast |
4.2.3 mpi4py-examples 03-scatter-gather
Sometimes, without specifying the parameter btl_tcp_if_include, the running program will hang:
1 | pi@pi01:~/Downloads/mpi4py-examples $ mpirun --np 16 --hostfile ~/hostfile 03-scatter-gather |
Please refer to the explanation TCP: unexpected process identifier in connect_ack. Now, let's specify the parameter as --mca btl_tcp_if_include "192.168.1.251/24,192.168.1.249/24,192.168.1.247/24".
1 | pi@pi01:~/Downloads/mpi4py-examples $ mpirun --np 16 --hostfile ~/hostfile --mca btl_tcp_if_include "192.168.1.251/24,192.168.1.249/24,192.168.1.247/24" 03-scatter-gather |
4.2.4 mpi4py-examples 04-image-spectrogram
4.2.5 mpi4py-examples 05-pseudo-whitening
4.2.6 NULL
4.2.7 mpi4py-examples 07-matrix-vector-product
1 | pi@pi01:~/Downloads/mpi4py-example $ mpirun --np 16 --hostfile ~/hostfile --mca btl_tcp_if_include "192.168.1.251/24,192.168.1.249/24,192.168.1.247/24" 07-matrix-vector-product |
4.2.8 mpi4py-examples 08-matrix-matrix-product.py
4.2.9 mpi4py-examples 09-task-pull.py
1 | pi@pi01:~/Downloads/mpi4py-examples $ mpirun --hostfile ~/hostfile python ./09-task-pull.py |
4.2.10 mpi4py-examples 10-task-pull-spawn.py
4.3 Example mpi4py prime.py
4.3.1 Computing Capability For Each CPU
Here, we're taking mpi4py prime.py as our example.
Hostname | Computing Time |
---|---|
pi01 | |
pi02 | |
pi03 | |
pi04 | |
Clearly, the computing capability of each CPU on pi01/pi02 is roughly 3 times faster than the CPU on pi03/pi04, which can be easily estimated from the parameter BogoMIPS: \[ 108.00 (pi01/pi02) / 38.40 (pi03/pi04) \approx 3 \]
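As a quick arithmetic check of that BogoMIPS estimate:

```python
# BogoMIPS figures quoted above for the two Pi generations in this cluster.
bogomips_pi4 = 108.00  # pi01 / pi02 (Raspberry Pi 4)
bogomips_pi3 = 38.40   # pi03 / pi04 (Raspberry Pi 3)

ratio = bogomips_pi4 / bogomips_pi3
print(round(ratio, 2))  # 2.81, i.e. roughly 3x per core
```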
4.3.2 Computing Capability For Each Raspberry Pi
Clearly, on each of my Raspberry Pis, including:
- pi01: Raspberry Pi 4 Model B Rev 1.4 8GB
- pi02: Raspberry Pi 4 Model B Rev 1.1 4GB
- pi03 & pi04: Raspberry Pi 3 Model B Rev 1.2 1GB
there are 4 CPU cores. So, let's take a look at the result when specifying the argument -n 4.
Master | Worker | Computing Time |
---|---|---|
pi01 | pi02 pi03 pi04 | |
pi02 | pi01 pi03 pi04 | |
pi03 | pi01 pi02 pi04 | |
pi04 | pi01 pi02 pi03 | |
Clearly, making full use of the 4 CPUs with -n 4 is roughly 4 times faster than using just 1 CPU with -n 1.
4.3.3 Computing Capability For The cluster
I carried out 2 experiments:
- Experiment 1 is done on 4 nodes:
  - 1 Raspberry Pi 4 Model B Rev 1.4 8GB
  - 1 Raspberry Pi 4 Model B Rev 1.1 4GB
  - 2 Raspberry Pi 3 Model B Rev 1.2 1GB
- Experiment 2 is done on the FASTEST 2 nodes:
  - 1 Raspberry Pi 4 Model B Rev 1.4 8GB
  - 1 Raspberry Pi 4 Model B Rev 1.1 4GB
hostfile on master | Computing Time |
---|---|
192.168.1.253 slots=4 192.168.1.251 slots=4 192.168.1.249 slots=4 192.168.1.247 slots=4 | |
192.168.1.253 slots=4 192.168.1.251 slots=4 | |
The results are obviously telling:
- calculating on a cluster of 4 Raspberry Pis with 16 CPUs is ALWAYS faster than running on a single node with 4 CPUs: \[ 42.22 < 50 \]
- calculating on the 2 fastest nodes is even faster than running on the cluster of 4 nodes, which clearly hints at the importance of Load Balancing: \[ 29.56 < 42.22 \]
- the speed in Experiment 2 is roughly double that of using a single node of pi01 or pi02: \[ 52 (pi01/pi02) / 29.56 (Experiment 2) \approx 2 \]
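The speedups implied by those timings are worth working out explicitly; they are modest rather than linear, which again points at load balancing (times in seconds, taken from the comparisons above):

```python
# Worked speedup figures from the timings quoted above.
t_single_node_4cpu = 50.0   # approximate single-node (4 CPUs) time
t_cluster_16cpu = 42.22     # Experiment 1: 4 nodes, 16 CPUs
t_two_fast_nodes = 29.56    # Experiment 2: 2 fastest nodes, 8 CPUs

speedup_cluster = t_single_node_4cpu / t_cluster_16cpu  # ~1.18x
speedup_two_nodes = t_cluster_16cpu / t_two_fast_nodes  # ~1.43x
print(round(speedup_cluster, 2), round(speedup_two_nodes, 2))
```

Going from 4 CPUs to 16 only buys ~18%, while simply dropping the two slow Pi 3s buys another ~43%: the slowest nodes dominate the wall-clock time.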
At the end of this blog: as for Load Balancing, I may talk about it some time in the future.
ArcGIS For Covid-19
This is THOROUGHLY inspired by the ArcGIS Blog Essential Configurations for Highly Scalable ArcGIS Online Web Applications (Viral Applications). And there are so many existing examples:
Build Bazel For AARCH64
Khadas VIM3
Today's concert: ONE WORLD : TOGETHER AT HOME. Yup, today, I've updated my previous blog. A lot of modifications. Khadas VIM3 is really a good product. With Amlogic's A311D and its 5.0 TOPS NPU, the board comes with super powerful AI inference capability.
- AI inference unit used to be in USB sticks, such as:
- Intel Movidius Neural Compute Stick
- Google Coral USB Accelerator, etc. These products target SBCs without any XPU (GPU/VPU/NPU/TPU, etc.), speeding up parallel computing.
- The AI inference unit can of course be on board directly, NOT ONLY as a peripheral. For instance:
- Google Coral Dev Board based on Google's own TPU
- NVidia Jetson Nano based on NVidia's own GPU
- Khadas VIM3 based on Amlogic's A311D with 5.0 TOPS NPU
What a sunny day after the FIRST snow in this winter. Let me show you 3 pictures in the first row, and 3 videos in the second. We need to enjoy both R&D and life…
Green Timers Lake 1 | Green Timers Lake 2 | Green Timers Park |
---|---|---|
A Pair of Swans | A Group of Ducks | A Little Stream In The Snow |
After a brief break, I started investigating Khadas VIM3 again.
1. About Khadas VIM3
Khadas VIM3 is a super computer based on Amlogic A311D. Before we start, let’s carry out several simple comparisons.
1.1 Raspberry Pi 4 Model B vs. Khadas VIM3 vs. Jetson Nano Developer Kit
Please refer to:
1.2 Amlogic A311D & S922X-B vs. Rockchip RK3399 (Pro) vs. Amlogic S912
Please refer to:
- androidtvbox
- cnx-software embedded system news - July 29, 2019
- cnx-software embedded system news - August 4, 2019
2. Install Prebuilt Operating System To EMMC Via Krescue
2.1 WIRED Connection Preferred
As mentioned in the VIM3 Beginners Guide, Krescue is a Swiss Army knife. As of January 2020, Krescue can download and install OS images directly from the web via wired Ethernet.
2.2 Flash Krescue Onto SD Card
1 | ➜ Krescue sudo dd bs=4M if=VIM3.krescue-d41d8cd98f00b204e9800998ecf8427e-1587199778-67108864-279c13890fa7253d5d2b76000769803e.sd.img of=/dev/mmcblk0 conv=fsync |
2.3 Setup Wifi From Within Krescue Shell
If you really don't like the WIRED connection, boot into Krescue shell, and use the following commands to set up Wifi:
1 | root@Krescue:~# wifi.config WIFI_NAME WIFI_PASSWORD |
2.4 SSH Into Krescue Via Wireless Connection
Now, let's try to connect Khadas VIM3 board remotely.
1 | ➜ ~ ping 192.168.1.110 |
2.5 Flash OS onto EMMC (WIRED Connection Preferred)
Let's take a look at the SD card device:
1 | root@Krescue:~# ls /dev/mmcblk* |
2.5.1 Install OS Using Shell Command
Please refer to the Shell Commands Examples.
curl -sfL dl.khadas.com/.mega | sh -s - -Y -X > /dev/mmcblk?
should do.
2.5.2 Install OS Using Krescue GUI
Let's bring back the Krescue GUI with the command krescue, select VIMx.Ubuntu-xfce-bionic_Linux-4.9_arm64_V20191231.emmc.kresq, and have it flashed onto EMMC.
Krescue Default | Image Write To EMMC |
---|---|
Select Prebuilt OS | Start Downloading OS |
Start Installation | Installation Complete |
Krescue Reboot | Ubuntu XFCE Desktop |
2.6 Boot From EMMC
Actually, the 8th item in the table above (Ubuntu XFCE Desktop) already showed the desktop. We can also SSH into it after configuring Wifi successfully.
2.6.1 SSH Into Khadas VIM3
1 | ➜ ~ ssh khadas@192.168.1.95 |
2.6.2 Specs For Khadas VIM3
1 | khadas@Khadas:~$ uname -a |
2.6.3 Package Versions
1 | khadas@Khadas:~$ gcc --version |
It looks like the OpenCV on the current VIM3_Ubuntu-xfce-bionic_Linux-4.9_arm64_EMMC_V20191231.img is kind of outdated. Let's just remove the package opencv3 and have OpenCV-4.3.0 installed manually.
3. Install Manjaro To TF/SD Card
As one of my dream operating systems, Manjaro has already provided 2 operating systems for Khadas users to try out.
Flashing either of the above systems onto a TF/SD card is simple. However, both are ONLY for SD-USB, instead of EMMC. For instance:
1 | ➜ Manjaro burn-tool -b VIM3 -i ./Manjaro-ARM-xfce-vim3-20.04.img |
Before moving on, let's cite the following words from Boot Images from External Media:
1 | WARNING: Don’t use your PC as the USB-Host to supply the electrical power, otherwise it will fail to activate Multi-Boot! |
4. NPU
In this section, we're testing the computing capability of Khadas VIM3's NPU.
Before everything starts, make sure you have the galcore module loaded, by using the command modinfo galcore.
4.1 Obtain aml_npu_sdk From Khadas
Extract the obtained aml_npu_sdk.tgz on your local host. Bear in mind that it is your local host, NOT Khadas VIM3. Related issues can be found at:
4.2 Model Conversion on Host
Afterwards, the models applicable on Khadas VIM3 can be obtained by following Model Conversion. Anyway, on my laptop, I obtained the converted model as follows:
1 | ➜ nbg_unify_inception_v3 ll |
Do I need to emphasize that I'm using Tensorflow 2.1.0? Anyway, check the following:
1 | ➜ ~ python |
4.3 Build Case Code
4.3.1 Cross-build on Host
You can of course cross-build the case code on your local host instead of Khadas VIM3, by referring to Compile the Case Code. (The document seems NOT to be updated yet.) Instead of using 1 argument, we specify 2 arguments: one for aml_npu_sdk, the other for Fenix.
1 | ➜ nbg_unify_inception_v3 ./build_vx.sh ....../aml_npu_sdk/linux_sdk/linux_sdk_6.3.3.4 ....../fenix |
inceptionv3 should now be ready to use, but in my case, it's NOT working properly. It's probably because Fenix is NOT able to provide the correct cross-compile toolchain for my installed VIMx.Ubuntu-xfce-bionic_Linux-4.9_arm64_V20191231.emmc.kresq. Anyway, this is NOT my preference.
4.3.2 Directly Build on Khadas VIM3
Let's leave this for the next section 4.4 Run Executable on Khadas VIM3.
4.4 Run Executable on Khadas VIM3
4.4.1 Step 1: Install aml-npu
1 | khadas@Khadas:~$ sudo apt install aml-npu |
And with the command dpkg -L aml-npu, you'll see what's been installed by aml-npu. However, due to its commercial license, I may NOT be allowed to show anything here in my blog.
😶
4.4.2 Step 2: Install aml-npu-demo and Run Demo
1 | khadas@Khadas:~$ sudo apt install aml-npu-demo |
Where is the sample to run? /usr/share/npu/inceptionv3. Alright, let's try it.
1 | khadas@Khadas:~$ cd /usr/share/npu/inceptionv3 |
The program runs smoothly.
😏
4.4.3 Step 3: Build Your Own Executable and Run
Clearly, ALL (really???) required development files have been provided by aml-npu; as such, we should be able to build this demo inceptionv3 ourselves.
4.4.3.1 You STILL Need aml_npu_sdk from Khadas
Besides aml-npu from the repo, in order to have the demo inceptionv3 fully and successfully built, you still need aml_npu_sdk from Khadas. In my case, you do need acuity-ovxlib-dev, so let's do export ACUITY_OVXLIB_DEV=path_to_acuity-ovxlib-dev.
4.4.3.2 Build inceptionv3 from Source
We don't need to copy the entire aml_npu_sdk onto Khadas VIM3, but ONLY demo/inceptionv3. Here in my case, ONLY demo/inceptionv3 is copied under ~/Programs.
1 | khadas@Khadas:~/Programs/inceptionv3$ ll |
This is almost the same as folder nbg_unify_inception_v3 shown in 4.2 Model Conversion on Host.
Now, the MOST important part is to modify the makefile.
1 | khadas@Khadas:~/Programs/inceptionv3$ cp makefile.linux makefile |
My makefile is modified as follows.
1 | khadas@Khadas:~/Programs/inceptionv3$ cat makefile |
In fact, you still need to modify common.target a little bit accordingly. However, I don't think I'm allowed to disclose that in this blog either. Anyway, after the modification, let's make it.
```
khadas@Khadas:~/Programs/inceptionv3$ make
```
Don't worry about the error. It just failed to run the demo, but the executable inceptionv3 has already been successfully built under folder bin_r.
```
khadas@Khadas:~/Programs/inceptionv3$ ll bin_r
```
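Even though the post-build run step failed, the binary should be in place. Here is a quick sanity check, sketched with a stand-in file so the commands are self-contained; on the VIM3 itself you'd skip the first line and run the check directly in ~/Programs/inceptionv3:

```shell
# Stand-in for the built artifact so this snippet runs anywhere;
# on the board, `make` has already produced bin_r/inceptionv3.
mkdir -p bin_r && touch bin_r/inceptionv3 && chmod +x bin_r/inceptionv3
# The actual check: is the executable present and runnable?
test -x bin_r/inceptionv3 && echo "inceptionv3 built"
```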
4.4.3.3 Run inceptionv3
Let's run inceptionv3 under folder bin_demo.
```
khadas@Khadas:~/Programs/inceptionv3$ cd bin_demo/
```
This is the original status of ALL files under bin_demo. Let's copy our built bin_r/inceptionv3 into this folder bin_demo. The size of our executable seems to be dramatically smaller than the pre-built one.
```
khadas@Khadas:~/Programs/inceptionv3/bin_demo$ cp ../bin_r/inceptionv3 ./
```
Now, let's copy the built inception_v3.nb from the host to Khadas VIM3. It seems the inception_v3.nb built with TensorFlow 2.1.0 on the host is of the same size as the one provided by Khadas.
```
khadas@Khadas:~/Programs/inceptionv3/bin_demo$ ll inception_v3.nb
```
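A matching size is encouraging but doesn't prove the two .nb files are identical; a checksum is the stronger check. A self-contained illustration with dummy files (all names hypothetical):

```shell
# Two dummy "models" of identical size but different content:
printf 'AAAAAAA' > host_built.nb
printf 'BBBBBBB' > vendor.nb
stat -c %s host_built.nb vendor.nb   # both report 7 bytes
md5sum host_built.nb vendor.nb       # checksums differ
```

On the board, the same md5sum comparison between the host-converted model and the Khadas-provided one settles whether they are truly the same file.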
Finally, let's run the demo.
```
khadas@Khadas:~/Programs/inceptionv3/bin_demo$ ./inceptionv3 ./inception_v3.nb ./dog_299x299.jpg
khadas@Khadas:~/Programs/inceptionv3/bin_demo$ ./inceptionv3 ./inception_v3.nb ./goldfish_299x299.jpg
```
By comparing against imagenet_slim_labels.txt under the current folder, let's take a look at our inference results. Only the FIRST inference counts as a confident match, judging by the probabilities.
| Index | Result for dog_299x299.jpg | Result for goldfish_299x299.jpg |
|---|---|---|
| N/A | (input image) | (input image) |
| 1 | 208: 'curly-coated retriever' | 2: 'tench' |
| 2 | 209: 'golden retriever' | 795: 'shower cap' |
| 3 | 223: 'Irish water spaniel' | 974: 'cliff' |
| 4 | 268: 'miniature poodle' | 408: 'altar' |
| 5 | 185: 'Kerry blue terrier' | 393: 'coho' |
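The indices printed by the demo appear to be 1-based line numbers into imagenet_slim_labels.txt (whose first line is the background class). The lookup can be sketched with a stand-in labels file, since the real one ships with aml-npu-demo:

```shell
# Stand-in for the first lines of imagenet_slim_labels.txt:
printf 'background\ntench\ngoldfish\n' > labels_sample.txt
# The goldfish run reported "2: 'tench'", i.e. the label on line 2:
awk -v idx=2 'NR == idx' labels_sample.txt   # prints "tench"
```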
😘
5. Dual Boot From Manjaro
5.1 How to Boot Images from External Media?
There are clearly 2 options:

- Dual boot by selecting the boot device: EMMC or TF/SD card. On Boot Images from External Media, the recommended way is "Via Keys mode (Side-Buttons) - the easiest and fastest way", which is the FIRST option on the page How To Boot Into Upgrade Mode. Therefore, by following the 4 steps below (cited from How To Boot Into Upgrade Mode), we should be able to boot into SD-USB.
  1. Power on VIM3.
  2. Long press the POWER key without releasing it.
  3. Short press the ‘Reset’ key and release it.
  4. Count 2 to 3 seconds, then release the POWER key to enter Upgrade Mode. You will see the sys-led turn ON when you’ve entered Upgrade Mode.
- Multiple boot via GRUB: reasonably speaking, 2 operating systems may even have a chance to be installed onto a SINGLE EMMC.
5.2 How to flash Manjaro XFCE for Khadas Vim 3 from TF/SD card to EMMC?
ONLY 1 operating system is preferred. Why??? The Khadas VIM3 board comes with a large EMMC of size 32G.
After struggling for a VERY long time, I would really like to emphasize the quality of the Type-C cable and power adapter again. Try NOT to buy these things from Taobao.
😢
😭
Finally, I had Manjaro XFCE for Khadas Vim 3 on SD card booted and running, as follows:

```
➜ ~ ssh khadas@192.168.1.95
```
It seems Arch Linux is totally different from Debian. What can I say? Go to bed.
Fold for Covid
Today, April 17, 2020, I've got 2 big pieces of NEWS:

- China Airlines cancelled my flight back to China in May.
- I received an email from balena encouraging us to contribute our spare computing power (PCs, laptops, single-board devices) to [Rosetta@Home](https://boinc.bakerlab.org/) and "support vital COVID-19 research".
Well, having been using balenaEtcher for quite a while, I will of course support the Baker Laboratory at UW - University of Washington. There are 2 points to be emphasized here:

- UW used to be my dream university, but so far it STILL remains only a dream. 😂
- The Baker Laboratory seems to be really good at bakery.
Alright, let's taste how they bake this COVID-19. 2 manuals to follow:
Make sure one thing: Wired Connection.
Now, it's your choice to visit either http://foldforcovid.local/ or the IP address of this Raspberry Pi 4; either way, you will see your Raspberry Pi 4 is up and running, and you are donating compute capacity to support COVID-19 research.
| foldforcovid.local | 192.168.1.111 |
|---|---|
| (screenshot) | (screenshot) |
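To check reachability before opening a browser, you can try resolving the mDNS name from another machine on the LAN (this needs an mDNS/Avahi-aware resolver, hence the fallback message):

```shell
# Resolve the .local name if mDNS is available; otherwise say so:
getent hosts foldforcovid.local || echo "foldforcovid.local not resolvable from here"
```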
Finally, 2 additional things:

- When will the border between Canada and the USA reopen? I'd love to visit the Baker Laboratory in person.
- I built my own OS for Raspberry Pi based on Raspbian; please check it out on my website https://www.longervision.cc/. Don't forget to BUY ME A COFFEE.
Let me update a bit: Besides this Fold for Covid, there are so many activities ongoing:
AI Model Visualization By Tensorspace - 3D Interactive
Visited Green Timbers Lake again.
| The Grassland | Ducks In the Lake | A Pair of Ducks In The Lake |
|---|---|---|
| (photo) | (photo) | (photo) |

| The Lake | Me In Facial Mask for COVID-19 | The Lake - The Other Side |
|---|---|---|
| (photo) | (photo) | (photo) |
1. About Tensorspace
- Tensorspace Playground: Have some fun FIRST
- Official Documentation: Let's follow this manual
- Towards Data Science Blog: The BEST DIY tutorial so far
2. Let's Have Some Fun
We FIRST create an empty project folder, here named WebTensorspace.
2.1 Follow Towards Data Science Blog FIRST
Let's strictly follow this part of Towards Data Science Blog (cited from Towards Data Science Blog).
Finally we need to create an .html file which will output the result. In order not to spend time setting up TensorFlow.js and jQuery, I encourage you just to use my template in the TensorSpace folder. The folder structure looks as follows:
- index.html — our html file to run visualization
- lib/ — folder storing all the dependencies
- data/ — folder containing .json file with network inputs
- model/ — folder containing exported model

For our html file we need to first import dependencies and write a TensorSpace script.
Now, let's take a look at our project.
```
➜ WebTensorspace ls
```
2.1.1 lib
Three resources are referred to for downloading ALL required libraries. Some required libraries suggested by me:
- Chart.min.js
- TrackballControls.js
- jQuery: Do NOT forget to rename jquery-3.5.0.min.js to jquery.min.js
- signature_pad: signature_pad on github, signature_pad on JSDELIVR, signature_pad 3.0.0-beta.3
- stats.min.js
- three.min.js
- tween.cjs.js
- tensorspace.min.js
- tf.min.js
- tf.min.js.map: tensorflow on cdnjs
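The jQuery rename called out in the list above can be sketched as follows; the version number matches what I downloaded at the time, and a stand-in file is created so the snippet is self-contained:

```shell
# Stand-in for the downloaded bundle so this snippet runs anywhere:
mkdir -p lib && touch lib/jquery-3.5.0.min.js
# Rename to the filename the template expects:
mv lib/jquery-3.5.0.min.js lib/jquery.min.js
ls lib   # now shows jquery.min.js
```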
Now, let's take a look at what's under folder lib.
```
➜ WebTensorspace ls lib
```
2.1.2 model
Let's just use tf_keras_model.h5 provided by TensorSpace as our example. You may have to click on the Download button to have this tf_keras_model.h5 downloaded into folder model.
```
➜ WebTensorspace ls model
```
2.1.2.1 tensorspacejs_converter Failed to Run
Now, let's try to run the following command:
```
➜ WebTensorspace tensorspacejs_converter \
```
Clearly, we can downgrade tensorflow-estimator from 2.2.0 to 2.1.0.
```
➜ WebTensorspace pip show tensorflow_estimator
```
Now, we try to re-run the above tensorspacejs_converter command:
```
➜ WebTensorspace tensorspacejs_converter \
```
2.1.2.2 Install tfjs-converter
Please check out tfjs, enter tfjs-converter, and then install the Python package under tfjs-converter/python.
```
➜ python git:(master) ✗ pwd
```
2.1.2.3 Install tensorspace-converter
Please check out my modified tensorspace-converter and install the Python package. Please also keep an eye on my PR.
```
➜ tensorspace-converter git:(master) python setup.py install --user
```
2.1.2.4 Try tensorspacejs_converter Again
```
➜ WebTensorspace tensorspacejs_converter \
```
2.1.3 index.html
2.1.3.1 helloworld-empty
Let's copy TensorSpace's example helloworld-empty.html and make some trivial modifications for 3D visualization, as follows:
```html
<!DOCTYPE html>
```