What a performance!! Intel RealSense LiDAR Camera L515
Arduino Based Mini Radar
Hmmm… This Mini Radar from Taobao is recommended.
*(Photos: Start · A Chair · Arduino · Ultrasound · My Hand · Whole View)*
AlexeyAB's Darknet
Salute to AlexeyAB’s Darknet.
Deepfake
RISC-V Assembly
A Cluster of Raspberry Pis (3) - k3s
In this blog, we’re going to explore Kubernetes. However, there is a VERY FIRST question to be answered: what is the relationship between Kubernetes and Docker? Let’s start today’s journey:
1. Kubernetes
Kubernetes (commonly stylized as k8s) is an open-source container-orchestration system for automating application deployment, scaling, and management. (cited from Wikipedia)
On Raspberry Pi, a lightweight variant of Kubernetes is normally preferred. A variety of choices are available:
| Packages | Description |
|---|---|
| MicroK8s | MicroK8s is only available for 64-bit Ubuntu images. (Cited from How to build a Raspberry Pi Kubernetes cluster using MicroK8s: Setting up each Pi) |
| k3s, k3d | |
| Minikube | |
| kind | |
| Kubeadm | |
Minikube vs. kind vs. k3s - What should I use? elaborates the differences among Minikube, kind and k3s. Its final table is cited as follows:
| | minikube | kind | k3s |
|---|---|---|---|
| runtime | VM | container | native |
| supported architectures | AMD64 | AMD64 | AMD64, ARMv7, ARM64 |
| supported container runtimes | Docker, CRI-O, containerd, gVisor | Docker | Docker, containerd |
| startup time: initial/following | 5:19 / 3:15 | 2:48 / 1:06 | 0:15 / 0:15 |
| memory requirements | 2GB | 8GB (Windows, MacOS) | 512 MB |
| requires root? | no | no | yes (rootless is experimental) |
| multi-cluster support | yes | yes | no (can be achieved using containers) |
| multi-node support | no | yes | yes |
| project page | minikube | kind | k3s |
Here in my case, I’m going to use k3s to manage and monitor the cluster. I strongly recommend the following 2 blogs:
- Run Kubernetes on a Raspberry Pi with k3s
- Kubernetes 1.18 broke “kubectl run”, here’s what to do about it
2. Preparation
Let’s take a look at the IP info of ALL 4 Raspberry Pis, taking pi04 as the example this time. pi01, pi02 and pi03 have very similar IP info to pi04.
```
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
```
As mentioned in A Cluster of Raspberry Pis (1) - Configuration, pi04 is an old Raspberry Pi 3 Model B Rev 1.2 1GB, which unfortunately has a broken WiFi interface wlan0. Therefore, I had to insert a WiFi dongle in order to have WiFi enabled on wlan1.
3. k3s Installation and Configuration
3.1 k3s Installation on Master Node pi01
```
pi@pi01:~ $ curl -sfL https://get.k3s.io | sh -
```
If we take a look at the IP info again, an additional flannel.1 interface has been added:
```
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
```
3.2 k3s Installation on Worker Nodes pi02, pi03, pi04
Before moving forward, we need to write down the node token on the master node, which will be used when the other worker nodes join the cluster.
```
pi@pi01:~ $ sudo cat /var/lib/rancher/k3s/server/node-token
```
```
pi@pi0X:~ $ curl -sfL https://get.k3s.io | K3S_URL=https://192.168.1.253:6443 \
```

where X = 2, 3, or 4.
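The join command above is cut off at the line continuation; a minimal sketch of the complete flow, assuming the standard k3s K3S_URL/K3S_TOKEN environment variables (the actual token value is machine-specific and stays a placeholder here):

```shell
# On the master (pi01): print the node token generated at install time.
sudo cat /var/lib/rancher/k3s/server/node-token

# On each worker (pi02, pi03, pi04): install k3s in agent mode and join
# the cluster, pointing at the master's API endpoint.
curl -sfL https://get.k3s.io | K3S_URL=https://192.168.1.253:6443 \
    K3S_TOKEN=<token-from-master> sh -
```

Once the agent starts, the new node should show up in `kubectl get nodes` on the master shortly afterwards.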
3.3 Take a Look at pi01
```
pi@pi01:~ $ sudo kubectl get nodes
```
3.4 Access Raspberry Pi Cluster from PC
We can further configure our PC to be able to access the Raspberry Pi Cluster. For details, please refer to Run Kubernetes on a Raspberry Pi with k3s. On my laptop, I can do:
```
$ kubectl get nodes
```
We can even specify the role name by the following command:
```
$ kubectl label nodes pi0X kubernetes.io/role=worker
```

where X = 2, 3, or 4.
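The labeling command above can be applied to all three workers in one small loop (a sketch; kubernetes.io/role is just a label key, as in the command above):

```shell
# Label each worker so the ROLES column of `kubectl get nodes` reads "worker".
for X in 2 3 4; do
  kubectl label nodes "pi0${X}" kubernetes.io/role=worker
done
```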
Let’s take a look at all nodes again:
```
$ kubectl get nodes
```
4. Create Deployment
```
$ kubectl create deployment nginx-sample --image=nginx
```
After a while, nginx-sample will be successfully deployed.
```
$ kubectl get deployments
```
Now, let’s expose this service and take a look from the browser:
```
$ kubectl expose deployment nginx-sample --type="NodePort" --port 80
```
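Since --type="NodePort" maps port 80 to a randomly assigned port in the 30000-32767 range, here is a quick sketch of how to find and test it (the port value below is a placeholder, not my actual assignment):

```shell
# Look up the assigned NodePort in the PORT(S) column, e.g. 80:3xxxx/TCP.
kubectl get service nginx-sample

# Fetch the nginx welcome page through any node's IP, e.g. the master's.
curl http://192.168.1.253:<node-port>
```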
A Cluster of Raspberry Pis (2) - docker & docker swarm
1. Container VS. Virtual Machine
Briefly refer to: What’s the Diff: VMs vs Containers and Bare Metal Servers, Virtual Servers, and Containerization.
2. Docker VS. VMWare
Briefly refer to: Docker vs VMWare: How Do They Stack Up?.
I used to use VirtualBox (a virtual machine) a lot, but have now started using Docker (the leading container platform).
3. Install Docker on Raspberry Pi
I heavily refer to Alex Ellis’ blogs:
- Get Started with Docker on Raspberry Pi
- Hands-on Docker for Raspberry Pi
- 5 things about Docker on Raspberry Pi
- Live Deep Dive - Docker Swarm Mode on the Pi
3.1 Modify /boot/config.txt
| Hostname | /boot/config.txt |
|---|---|
| pi01 | |
| pi02 | |
| pi03 | |
| pi04 | |
3.2 Docker Installation
3.2.1 Installation
Installing Docker takes ONLY one single command:
```
pi@pi01:~ $ curl -sSL https://get.docker.com | sh
```
Let’s take a look at the IP info of pi01.
```
pi@pi01:~ $ ip -c address
```
Clearly, there is one more interface docker0 added into pi01’s IP info, but it’s currently DOWN.
3.2.2 Uninstall
By the way, to uninstall Docker, 2 commands are required:

```
sudo apt remove docker-ce
sudo ip link delete docker0
```

Besides uninstalling Docker itself, you may have to manually delete the docker0 IP link.
3.2.3 Enable/Start Docker
Then, let’s enable and start the docker service, and add user pi to group docker so that docker can be run without sudo.
```
sudo systemctl enable docker
sudo systemctl start docker
sudo usermod -aG docker pi
```
Afterwards, reboot all Raspberry Pis.
Now, let’s check the installed Docker version:
```
pi@pi0X:~ $ docker version
```

where X = 1, 2, 3, or 4.
3.3 Test ARM Image Pulled from Docker Hub
3.3.1 Docker Pull, Docker Run, Docker Exec
Due to the respective architectures of these 4 Raspberry Pis, I tried to use the Docker Hub image arm64v8/alpine for pi01, and arm32v6/alpine for pi02, pi03 and pi04. However, it finally turned out that the Pi4 64-bit raspbian kernel is NOT quite stable yet.
pi01 ~ Docker Hub arm64v8 is NOT stable yet.
```
pi@pi01:~ $ docker pull arm64v8/alpine
```
It looks like Docker Hub images from the arm64v8 repository are NOT able to work stably on my Raspberry Pi 4 Model B Rev 1.4 8GB with the Pi4 64-bit raspbian kernel. To demonstrate this further, I also tested some other images from the arm64v8 repository and even the deprecated aarch64 one:
```
pi@pi01:~ $ docker images
```
None of the three images arm64v8/alpine, arm64v8/ubuntu and aarch64/ubuntu can stay UP for even 3 seconds.
Well, I can pull alpine instead of arm64v8/alpine and keep it UP, though.
```
pi@pi01:~ $ docker pull alpine
```
However, due to a bunch of other instabilities, I downgraded the system from the Pi4 64-bit raspbian kernel to Raspberry Pi OS (32-bit) with desktop 2020-05-27. And it’s interesting that the image from arm32v6/alpine is exactly the same as the image pulled directly as alpine, namely image ID 3ddac682c5b6.
By the way, please refer to my posts:
- https://github.com/alpinelinux/docker-alpine/issues/92,
- https://www.raspberrypi.org/forums/viewtopic.php?f=29&t=250730&p=1677106#p1677005
- https://www.raspberrypi.org/forums/viewtopic.php?f=29&t=250730&p=1677106#p1677106
pi0X ~ Docker Hub arm32v6 is tested.
Let’s take pi02 as an example.
```
pi@pi02:~ $ docker pull arm32v6/alpine
```

The same works on any pi0X, where X = 1, 2, 3, or 4.
Now, we can see docker0 is UP. A virtual wired connection also shows up as the additional interface vethc6e4f97, which is the host end of the eth0 interface inside the running arm32v6/alpine container.
```
pi@pi02:~ $ ip -c address
```
3.3.2 Docker Login & Docker Commit
In order to avoid registry issues, we can run docker login, which also automatically generates the configuration file /home/pi/.docker/config.json.
```
pi@pi01:~ $ docker login
```
3.4 Docker Info
```
pi@pi01:~ $ docker info
```
4. Docker Swarm
From the above command docker info, you can clearly see Swarm: inactive. Docker now ships with a built-in Swarm Mode. Now, let’s begin playing with Docker Swarm.
4.1 Docker Swarm Initialization on Master
If you’ve got both Wired and WiFi enabled at the same time, without specifying which connection to use for initializing your docker swarm, you’ll meet the following ERROR message:
```
pi@pi01:~ $ docker swarm init
```
By specifying the argument --advertise-addr, docker swarm can be successfully initialized as:
```
pi@pi01:~ $ docker swarm init --advertise-addr 192.168.1.253
```
4.2 Docker Swarm Initialization on Worker
On each worker, just run the following command to join docker swarm created by master.
```
pi@pi0X:~ $ docker swarm join --token SWMTKN-1-5ivkmmwk8kfw92gs0emkmgzg78d4cwitqmi827ghikpajzvrjt-6i326au3t4ka3g1wa74n4rhe4 192.168.1.253:2377
```

where X = 2, 3, or 4.
4.3 List Docker Nodes on the Leader
Come back to pi01, namely the docker swarm manager (and leader), and list all nodes:
```
pi@pi01:~ $ docker node ls
```
In our test, we designate ONLY a single manager, the leader. You can of course promote additional managers using the command docker node promote.
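For completeness, a sketch of promoting and demoting a node (run on an existing manager; pi02 is chosen arbitrarily here):

```shell
# Promote worker pi02 to a manager; `docker node ls` will then show its
# MANAGER STATUS as "Reachable" (the elected leader remains "Leader").
docker node promote pi02

# Demote it back to a plain worker.
docker node demote pi02
```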
4.4 Swarmmode Test
Here, we run through all of Alex Ellis’ Swarmmode tests on ARM.
4.4.1 Scenario 1 - Replicate Service
Create the service on the leader with 4 replicas, spread across the master and the 3 workers.
```
pi@pi01:~ $ docker service create --name ping1 --replicas=4 alexellis2/arm-pingcurl ping google.com
```
Then, we can list the created docker service on the leader.
```
pi@pi01:~ $ docker service ls
```
4.4.1.1 On Master
```
pi@pi01:~ $ docker ps
```
4.4.1.2 On Worker
This time, let’s take pi04 as our example:
```
pi@pi04:~/Downloads $ docker ps
```
4.4.2 Scenario 2 - Replicate Webservice
In this example, we will create 2 replicas, just for fun.
```
pi@pi01:~ $ docker service create --name hello1 --publish 3000:3000 --replicas=2 alexellis2/arm-alpinehello
```
Again, let’s list our created services.
```
pi@pi01:~ $ docker service ls
```
4.4.2.1 On Master and Worker pi02
On master pi01 and worker pi02, there are 3 docker images:

```
pi@pi02:~ $ docker images
```

and 3 running docker containers:

```
pi@pi02:~ $ docker ps
```
4.4.2.2 On Worker pi03 and Worker pi04
On worker pi03 and worker pi04, there are only 2 docker images:

```
pi@pi03:~ $ docker images
```

and 3 running docker containers:

```
pi@pi03:~ $ docker ps
```
4.4.2.3 Test The Webservice
Finally, let’s test the webservice:
```
pi@pi01:~ $ curl -4 localhost:3000
```
Since service alexellis2/arm-alpinehello is running only on master pi01 and worker pi02, let’s check their respective docker logs:
```
pi@pi01:~ $ docker logs b2c313ac8200
```

```
pi@pi02:~ $ docker logs 918e211b5784
```
4.4.3 Scenario 3 - Inter-Container Communication
We first create a distributed subnetwork named armnet.
```
pi@pi01:~ $ docker network create --driver overlay --subnet 20.0.14.0/24 armnet
```
Afterwards, a redis database service with a single replica is created from master pi01.
```
pi@pi01:~ $ docker service create --replicas=1 --network=armnet --name redis alexellis2/redis-arm:v6
```
Finally, 2 replicas of the docker service counter are created in the subnetwork armnet, listening on port 3333.
```
pi@pi01:~ $ docker service create --name counter --replicas=2 --network=armnet --publish 3333:3333 alexellis2/arm_redis_counter
```
Now, let’s list all created docker services on the leader.

```
pi@pi01:~ $ docker service ls
```
It’s interesting that this time:
- the 2 replicas of service counter are automatically created on nodes pi02 and pi04
- the unique replica of service redis is allocated to node pi03
I’ve got no idea how the created services get distributed to different nodes (the swarm scheduler decides). The ONLY thing I can do is run docker ps on each node and show the distribution results:
- pi01

```
pi@pi01:~ $ docker ps
```

- pi02

```
pi@pi02:~ $ docker ps
```

- pi03

```
pi@pi03:~ $ docker ps
```

- pi04

```
pi@pi04:~ $ docker ps
```
Clearly,

- service `alexellis2/arm_redis_counter` is running on both nodes pi02 and pi04, but not on nodes pi01 or pi03
- service `alexellis2/arm-alpinehello` is running on both nodes pi01 and pi02, but not on nodes pi03 or pi04
- service `alexellis2/arm-pingcurl` is running on all 4 nodes pi01, pi02, pi03, and pi04
So far, curl localhost:3333/incr is NOT working. Why?
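To dig into why, a few standard Swarm debugging commands can be run on a manager (a sketch using the service and network names created above):

```shell
# Show where each task of the counter service is scheduled and whether
# any task has failed or been rescheduled.
docker service ps counter

# Stream logs from all replicas of the service (Docker 17.06+).
docker service logs counter

# Confirm that both services are attached to the armnet overlay network.
docker network inspect armnet
```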
What’s more:

- without touching anything for a whole night, service `alexellis2/arm_redis_counter` seems to have automatically shut down multiple times with the ERROR `task: non-zero exit (1)`
- `alexellis2/arm_redis_counter` is now running on manager pi01 and worker pi04, instead of the original worker pi02 and worker pi04
- it’s interesting that `alexellis2/redis-arm` is still running on node pi03
4.5 Docker Swarm Visualizer
Our final topic of this blog is the Docker Swarm Visualizer. The official Docker documentation provides the details in Docker Swarm Visualizer, and the Docker Swarm Visualizer source code is also available on GitHub.
4.5.1 On Leader
```
pi@pi01:~ $ docker run -it -d -p 8080:8080 -v /var/run/docker.sock:/var/run/docker.sock dockersamples/visualizer
```
4.5.2 On Remote Laptop
Now, let’s visualize our docker swarm beautifully from the laptop by entering the leader’s IP address at port 8080 in a browser.
micro:bit
Hmmm… Happened to get this chance of playing with micro:bit for fun, so let me take this opportunity to write a blog. It seems this BBC product is very convenient. Let’s just plug and play.
1. Introduction
1.1 lsusb
Just plug it in and run lsusb.

```
$ lsusb
......
```
1.2 Hardware Specification
Refer to micro:bit Hardware Specification
1.3 User Guide Overview
Refer to micro:bit User Guide Overview
2. Tutorials and Configurations
- Makecode Tutorials - this so-called MakeCode looks very similar to Google Blockly
- BBC micro:bit MicroPython - this seems to be BBC Official tutorial, which adopts Python as the programming language
- UCL’s BBC Micro:bit Tutorials - this is a micro:bit tutorial from University College London, but the ONLY defect is: it seems it’s developed under Windows? Please check UCL’s BBC Micro:bit Tutorials - Command Line Interface.
- Lancaster University micro:bit Runtime - this seems to be a little complicated, but adopts my favorite coding language: C/C++
Since I’m NOT a professional Makecode coder, I’ll run a couple of simple micro:bit projects using either Python or C/C++.
3. Display Hello World Using micro:bit MicroPython
3.1 Required Configuration on Your Host
Please refer to micro:bit MicroPython Installation
```
$ sudo add-apt-repository -y ppa:team-gcc-arm-embedded
```
and now, let’s check yotta:
```
$ pip show yotta
```
3.2 Connect micro:bit
Please refer to micro:bit MicroPython Dev Guide REPL, and now let’s connect micro:bit using picocom.
```
$ sudo picocom /dev/ttyACM0 -b 115200
```
Oh, my god… why can’t I just code directly here from within the console??? That’s ABSOLUTELY NOT my style.
3.3 Code & Run
The VERY FIRST demo is ALWAYS displaying Hello World. To fulfill this task, please refer to How do I transfer my code onto the micro:bit via USB.
As mentioned above, the UNACCEPTABLE thing is: it seems we have to use the micro:bit Python IDE for Python coding on micro:bit????? Anyway, my generated HEX of Hello World is here.
4. Flash Lancaster microbit-samples Using Yotta
Please strictly follow Lancaster University micro:bit Yotta.
```
$ cd microbit-samples/
```
5. Projects Field
Please refer to micro:bit Official Python Projects.
A Cluster of Raspberry Pis (1) - Configuration
*(Photos: Profile · Frontal · Top)*
Today is Sunday. Vancouver is sunny. It’s been quite a while since I last wrote anything. It took me a couple of weeks to finally get my tax reported. Hmmm… Anyway, I’ve finally got some time to talk about a Supercomputer:
- a cluster of 4 Raspberry Pis, in my case:
  - 1 Raspberry Pi 4 Model B Rev 1.4 8GB, installed with Pi4 64-bit raspbian kernel
  - 1 Raspberry Pi 4 Model B Rev 1.1 4GB, installed with Raspberry Pi OS (32-bit) with desktop 2020-05-27
  - 2 Raspberry Pi 3 Model B Rev 1.2 1GB, installed with Raspberry Pi OS (32-bit) with desktop 2020-05-27
- Docker
- Kubernetes
This is going to be a series of 3 blogs.
1. Compare All Raspberry Pi Variants
Refer to: Comparison of All Raspberry Pi Variants.
2. Four Raspberry Pis
Here, please make sure the 4 Raspberry Pis are designated the following hostnames respectively:
- pi01
- pi02
- pi03
- pi04
2.1 pi01: Raspberry Pi 4 Model B Rev 1.4 8GB with Raspberry Pi OS (32-bit) with desktop 2020-05-27
```
pi@pi01:~ $ hostname
```
In fact, at the very beginning, I tried the Pi4 64-bit raspbian kernel, as follows:
```
pi@pi01:~ $ hostname
```
However, there were still quite a lot of issues with the Pi4 64-bit raspbian kernel, so I had to downgrade the system to Raspberry Pi OS (32-bit) with desktop 2020-05-27 in the end.
2.2 pi02: Raspberry Pi 4 Model B Rev 1.1 4GB with Raspberry Pi OS (32-bit) with desktop 2020-05-27
```
pi@pi02:~ $ hostname
```
2.3 pi03: Raspberry Pi 3 Model B Rev 1.2 1GB with Raspberry Pi OS (32-bit) with desktop 2020-05-27
```
pi@pi03:~ $ hostname
```
2.4 pi04: Raspberry Pi 3 Model B Rev 1.2 1GB with Raspberry Pi OS (32-bit) with desktop 2020-05-27
```
pi@pi04:~ $ hostname
```
3. Raspberry Pi Cluster Configuration
This section heavily refers to the following blogs:
- Build a Raspberry Pi cluster computer
- Build your own bare-metal ARM cluster
- Installing MPI for Python on a Raspberry Pi Cluster
- Instructables: How to Make a Raspberry Pi SuperComputer!
Actually, the cluster can be arbitrarily configured as you wish. A typical configuration is 1 master + 3 workers, but which one should be the master? Is it really a good idea to ALWAYS designate the MOST powerful one as the master? Particularly in my case, the 4 Raspberry Pis are of different versions, and hence of different computing capabilities.
3.1 Configure Hostfile
It’s always a good idea to create a hostfile on the master node. However, for the reasons mentioned above, there is NO priority among the nodes in my case, so I configured a hostfile on ALL 4 Raspberry Pis.
| node | hostfile |
|---|---|
| pi01 | 192.168.1.253 slots=4 192.168.1.251 slots=4 192.168.1.249 slots=4 192.168.1.247 slots=4 |
| pi02 | 192.168.1.251 slots=4 192.168.1.253 slots=4 192.168.1.249 slots=4 192.168.1.247 slots=4 |
| pi03 | 192.168.1.249 slots=4 192.168.1.253 slots=4 192.168.1.251 slots=4 192.168.1.247 slots=4 |
| pi04 | 192.168.1.247 slots=4 192.168.1.253 slots=4 192.168.1.251 slots=4 192.168.1.249 slots=4 |
3.2 SSH-KEYGEN
In order to run tests across multiple nodes of the cluster, we need to generate SSH keys to avoid typing a password every time we log into another node. As such, on each Raspberry Pi, you’ll have to generate an SSH key with ssh-keygen -t rsa and push the generated public key onto the other 3 Raspberry Pis using ssh-copy-id. Finally, for a cluster of 4 Raspberry Pis, there are 3 authorized keys (one per other Raspberry Pi) stored in /home/pi/.ssh/authorized_keys on each of the 4 Raspberry Pis.
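The key distribution described above can be sketched as follows (run on each Pi, adjusting the host list to the other three nodes):

```shell
# Generate a key pair once per node (empty passphrase for convenience).
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

# From pi01, push the public key to the other three nodes; each of them
# gains one entry in /home/pi/.ssh/authorized_keys.
for host in pi02 pi03 pi04; do
  ssh-copy-id "pi@${host}"
done
```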
4. Cluster Test
4.1 Command mpiexec
4.1.1 Argument: -hostfile and -n
```
pi@pi01:~ $ mpiexec -hostfile hostfile -n 16 hostname
```
For a cluster of 4 Raspberry Pis, there are 4×4 = 16 CPU cores in total. Therefore, the maximum number to specify for argument -n is 16. Otherwise, you’ll meet the following ERROR message:
```
pi@pi01:~ $ mpiexec -hostfile hostfile -n 20 hostname
```
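The 16-process ceiling is just the sum of the slots entries in the hostfile; a minimal sketch that computes it (total_slots is a hypothetical helper, hostfile contents from the table in section 3.1):

```shell
# Sum the slots declared in a hostfile; a line without an explicit slots=
# entry is counted as a single slot (a simplifying assumption here).
total_slots() {
  awk '{ n = 1; if (match($0, /slots=[0-9]+/)) n = substr($0, RSTART + 6, RLENGTH - 6); total += n } END { print total }' "$1"
}

# The hostfile configured on pi01 in section 3.1: 4 nodes x 4 slots.
cat > /tmp/hostfile <<'EOF'
192.168.1.253 slots=4
192.168.1.251 slots=4
192.168.1.249 slots=4
192.168.1.247 slots=4
EOF

total_slots /tmp/hostfile   # prints 16, the maximum usable with -n
```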
4.1.2 Execute Python Example mpi4py helloworld.py
```
pi@pi01:~ $ mpiexec -hostfile hostfile -n 16 python Downloads/helloworld.py
```
4.2 mpi4py-examples
Run all examples with argument --hostfile ~/hostfile, namely, on all 16 cores at once.
4.2.1 mpi4py-examples 01-hello-world
```
pi@pi01:~/Downloads/mpi4py-examples $ mpirun --hostfile ~/hostfile ./01-hello-world
```
4.2.2 mpi4py-examples 02-broadcast
```
pi@pi01:~/Downloads/mpi4py-examples $ mpirun --hostfile ~/hostfile ./02-broadcast
```
4.2.3 mpi4py-examples 03-scatter-gather
Sometimes, without specifying the parameter btl_tcp_if_include, the running program will hang:
```
pi@pi01:~/Downloads/mpi4py-examples $ mpirun --np 16 --hostfile ~/hostfile ./03-scatter-gather
```
Please refer to the explanation TCP: unexpected process identifier in connect_ack. Now, let’s specify the parameter as --mca btl_tcp_if_include "192.168.1.251/24,192.168.1.249/24,192.168.1.247/24".
```
pi@pi01:~/Downloads/mpi4py-examples $ mpirun --np 16 --hostfile ~/hostfile --mca btl_tcp_if_include "192.168.1.251/24,192.168.1.249/24,192.168.1.247/24" ./03-scatter-gather
```
4.2.4 mpi4py-examples 04-image-spectrogram
4.2.5 mpi4py-examples 05-pseudo-whitening
4.2.6 NULL
4.2.7 mpi4py-examples 07-matrix-vector-product
```
pi@pi01:~/Downloads/mpi4py-examples $ mpirun --np 16 --hostfile ~/hostfile --mca btl_tcp_if_include "192.168.1.251/24,192.168.1.249/24,192.168.1.247/24" ./07-matrix-vector-product
```
4.2.8 mpi4py-examples 08-matrix-matrix-product.py
4.2.9 mpi4py-examples 09-task-pull.py
```
pi@pi01:~/Downloads/mpi4py-examples $ mpirun --hostfile ~/hostfile python ./09-task-pull.py
```
4.2.10 mpi4py-examples 10-task-pull-spawn.py
4.3 Example mpi4py prime.py
4.3.1 Computing Capability For Each CPU
Here, we’re taking mpi4py prime.py as our example.
| Hostname | Computing Time |
|---|---|
| pi01 | |
| pi02 | |
| pi03 | |
| pi04 | |
Clearly, each CPU core on pi01/pi02 is roughly 3 times faster than a core on pi03/pi04, which can be roughly estimated from the BogoMIPS values:
$$ 108.00\ \text{(pi01/pi02)} / 38.40\ \text{(pi03/pi04)} \approx 3 $$
4.3.2 Computing Capability For Each Raspberry Pi
Clearly, on each of my Raspberry Pis, including
- pi01: Raspberry Pi 4 Model B Rev 1.4 8GB
- pi02: Raspberry Pi 4 Model B Rev 1.1 4GB
- pi03 & pi04: Raspberry Pi 3 Model B Rev 1.2 1GB
there are 4 CPU cores. So, let’s take a look at the results when specifying argument -n 4.
| Master | Workers | Computing Time |
|---|---|---|
| pi01 | pi02, pi03, pi04 | |
| pi02 | pi01, pi03, pi04 | |
| pi03 | pi01, pi02, pi04 | |
| pi04 | pi01, pi02, pi03 | |
Clearly, making full use of all 4 cores with -n 4 is roughly 4 times faster than using just 1 core with -n 1.
4.3.3 Computing Capability For The Cluster
I carried out 2 experiments:
- Experiment 1 is done on all 4 nodes
- Experiment 2 is done on the FASTEST 2 nodes
| hostfile on master | Computing Time |
|---|---|
| 192.168.1.253 slots=4<br>192.168.1.251 slots=4<br>192.168.1.249 slots=4<br>192.168.1.247 slots=4 | |
| 192.168.1.253 slots=4<br>192.168.1.251 slots=4 | |
The results are obviously telling:

- calculating on a cluster of 4 Raspberry Pis with 16 cores is ALWAYS faster than running on a single node with 4 cores:
  $$ 42.22 \le 50 $$
- calculating on the 2 fastest nodes is even faster than on the whole cluster of 4 nodes, which clearly hints at the importance of Load Balancing:
  $$ 29.56 \le 42.22 $$
- the speed in Experiment 2 is roughly double that of a single node pi01 or pi02:
  $$ 52\ \text{(pi01/pi02)} / 29.56\ \text{(Experiment 2)} \approx 2 $$
As for Load Balancing, I may talk about it some time in the future.
ArcGIS For Covid-19
This is THOROUGHLY inspired by the ArcGIS Blog Essential Configurations for Highly Scalable ArcGIS Online Web Applications (Viral Applications). And there are so many existing examples:









