Well, I’ve got to build TensorFlow for aarch64 from source. As such, I’ve got to build Bazel for aarch64 as well. The good news: Bazel recently released 3.0.0.
Khadas VIM3
Today’s concert: ONE WORLD: TOGETHER AT HOME. Yup, today I updated my previous blog with a lot of modifications. Khadas VIM3 is really a good product. With Amlogic‘s A311D and its 5.0 TOPS NPU, the board comes with seriously powerful AI inference capability.
- AI inference units used to come as USB sticks, such as:
- Intel Movidius Neural Compute Stick
- Google Coral USB Accelerator, etc.
These products target SBCs without any XPU (GPU/VPU/NPU/TPU, etc.), adding parallel-computing horsepower as a plug-in peripheral.
- AI inference units can, of course, also sit directly on the board rather than only as a peripheral. For instance:
- Google Coral Dev Board based on Google’s own TPU
- NVidia Jetson Nano based on NVidia’s own GPU
- Khadas VIM3 based on Amlogic‘s A311D with 5.0 TOPS NPU
*(Photos: Green Timbers Lake 1, Green Timbers Lake 2, Green Timbers Park; A Pair of Swans, A Group of Ducks, A Little Stream In The Snow)*
After a brief break, I started investigating Khadas VIM3 again.
1. About Khadas VIM3
Khadas VIM3 is a powerful single-board computer built around the Amlogic A311D. Before we start, let’s run a few simple comparisons.
1.1 Raspberry Pi 4 Model B vs. Khadas VIM3 vs. Jetson Nano Developer Kit
Please refer to:
1.2 Amlogic A311D & S922X-B vs. Rockchip RK3399 (Pro) vs. Amlogic S912
Please refer to:
- androidtvbox
- cnx-software embedded system news - July 29, 2019
- cnx-software embedded system news - August 4, 2019
2. Install Prebuilt Operating System To EMMC Via Krescue
2.1 WIRED Connection Preferred
As mentioned in VIM3 Beginners Guide, Krescue is a Swiss Army knife. As of January 2020, Krescue can download and install OS images directly from the web via wired Ethernet.
2.2 Flash Krescue Onto SD Card
```bash
➜ Krescue sudo dd bs=4M if=VIM3.krescue-d41d8cd98f00b204e9800998ecf8427e-1587199778-67108864-279c13890fa7253d5d2b76000769803e.sd.img of=/dev/mmcblk0 conv=fsync
```
2.3 Setup Wifi From Within Krescue Shell
If you really don’t like the WIRED connection, boot into Krescue shell, and use the following commands to set up Wifi:
```bash
root@Krescue:~# wifi.config WIFI_NAME WIFI_PASSWORD
```
2.4 SSH Into Krescue Via Wireless Connection
Now, let’s try to connect Khadas VIM3 board remotely.
```bash
➜ ~ ping 192.168.1.110
```
2.5 Flash OS onto EMMC (WIRED Connection Preferred)
Let’s take a look at the SD card device:
```bash
root@Krescue:~# ls /dev/mmcblk*
```
2.5.1 Install OS Using Shell Command
Please refer to the Shell Commands Examples.
`curl -sfL dl.khadas.com/.mega | sh -s - -Y -X > /dev/mmcblk?` should do.
2.5.2 Install OS Using Krescue GUI
Let’s bring back Krescue GUI by command krescue, and select VIMx.Ubuntu-xfce-bionic_Linux-4.9_arm64_V20191231.emmc.kresq and have it flashed onto EMMC.
*(Screenshots: Krescue Default, Image Write To EMMC; Select Prebuilt OS, Start Downloading OS; Start Installation, Installation Complete; Krescue Reboot, Ubuntu XFCE Desktop)*
2.6 Boot From EMMC
Actually, the last screenshot above already shows the Ubuntu XFCE desktop. We can also SSH into the board after configuring Wifi successfully.
2.6.1 SSH Into Khadas VIM3
```bash
➜ ~ ssh khadas@192.168.1.95
```
2.6.2 Specs For Khadas VIM3
```bash
khadas@Khadas:~$ uname -a
```
2.6.3 Package Versions
```bash
khadas@Khadas:~$ gcc --version
```
It looks like the OpenCV shipped with the current VIM3_Ubuntu-xfce-bionic_Linux-4.9_arm64_EMMC_V20191231.img is somewhat outdated. Let’s just remove the opencv3 package and install OpenCV-4.3.0 manually.
3. Install Manjaro To TF/SD Card
Manjaro, one of my dream operating systems, already provides 2 operating systems for Khadas users to try out.
Flashing either of the above systems onto a TF/SD card is simple. However, both are ONLY for SD-USB, not EMMC. For instance:
```bash
➜ Manjaro burn-tool -b VIM3 -i ./Manjaro-ARM-xfce-vim3-20.04.img
```
Before moving on, let’s cite the following warning from Boot Images from External Media:
> WARNING: Don’t use your PC as the USB-Host to supply the electrical power, otherwise it will fail to activate Multi-Boot!
4. NPU
In this section, we’re testing the computing capability of Khadas VIM3‘s NPU.
Before everything starts, make sure you have the galcore module loaded, using the command `modinfo galcore`.
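A quick pair of checks with standard Linux module tooling (nothing beyond the stock image is assumed here):

```bash
# verify the NPU kernel driver exists and is loaded
modinfo galcore        # prints driver metadata; errors out if the module is missing
lsmod | grep galcore   # confirms the module is actually loaded
```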
4.1 Obtain aml_npu_sdk From Khadas
Extract the obtained aml_npu_sdk.tgz on your local host. Bear in mind that this means your local host, NOT the Khadas VIM3. Related issues can be found at:
4.2 Model Conversion on Host
Afterwards, the models applicable on Khadas VIM3 can be obtained by following Model Conversion. Anyway, on my laptop, I obtained the converted model as follows:
```bash
➜ nbg_unify_inception_v3 ll
```
Do I need to emphasize that I’m using TensorFlow 2.1.0? Anyway, check the following:
```bash
➜ ~ python
```
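If you prefer a one-liner over the interactive session above, the same version check can be done as:

```bash
# print the TensorFlow version; 2.1.0 is expected here
python -c "import tensorflow as tf; print(tf.__version__)"
```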
4.3 Build Case Code
4.3.1 Cross-build on Host
You can of course cross-build the case code on your local host instead of on the Khadas VIM3, by referring to Compile the Case Code. (The document seems NOT updated yet.) Instead of 1 argument, we specify 2 arguments: one for aml_npu_sdk, the other for Fenix.
```bash
➜ nbg_unify_inception_v3 ./build_vx.sh ....../aml_npu_sdk/linux_sdk/linux_sdk_6.3.3.4 ....../fenix
```
inceptionv3 should now be ready to use, but in my case it’s NOT working properly, probably because Fenix is NOT able to provide the correct cross-compile toolchain for my installed VIMx.Ubuntu-xfce-bionic_Linux-4.9_arm64_V20191231.emmc.kresq. Anyway, this is NOT my preference.
4.3.2 Directly Build on Khadas VIM3
Let’s leave this for the next section 4.4 Run Executable on Khadas VIM3.
4.4 Run Executable on Khadas VIM3
4.4.1 Step 1: Install aml-npu
```bash
khadas@Khadas:~$ sudo apt install aml-npu
```
With the command `dpkg -L aml-npu`, you’ll see what’s been installed by aml-npu. However, due to its commercial license, I may NOT be allowed to show the contents here in my blog.
4.4.2 Step 2: Install aml-npu-demo and Run Demo
```bash
khadas@Khadas:~$ sudo apt install aml-npu-demo
```
Where is the sample to run? `/usr/share/npu/inceptionv3`.
Alright, let’s try it.
```bash
khadas@Khadas:~$ cd /usr/share/npu/inceptionv3
```
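For reference, the invocation inside that session boils down to the same model-plus-image pattern used later in this post:

```bash
# run the prebuilt demo against its bundled model and test image
./inceptionv3 ./inception_v3.nb ./dog_299x299.jpg
```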
The program runs smoothly.
4.4.3 Step 3: Build Your Own Executable and Run
Clearly, ALL (really???) required development files have been provided by aml-npu, so we should be able to build this demo inceptionv3 ourselves.
4.4.3.1 You STILL Need aml_npu_sdk from Khadas
Besides aml-npu from the repo, in order to have the demo inceptionv3 fully and successfully built, you still need aml_npu_sdk from Khadas. In my case, acuity-ovxlib-dev is required, so let’s do `export ACUITY_OVXLIB_DEV=path_to_acuity-ovxlib-dev`, for instance as sketched below.
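A minimal sketch (the path is hypothetical; point it at your own extracted SDK copy):

```bash
# hypothetical location of the extracted SDK; adjust to your copy
export ACUITY_OVXLIB_DEV=$HOME/aml_npu_sdk/acuity-ovxlib-dev
```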
4.4.3.2 Build inceptionv3 from Source
We don’t need to copy the entire aml_npu_sdk onto Khadas VIM3, but ONLY demo/inceptionv3. Here in my case, ONLY demo/inceptionv3 is copied under ~/Programs.
```bash
khadas@Khadas:~/Programs/inceptionv3$ ll
```
This is almost the same as folder nbg_unify_inception_v3 shown in 4.2 Model Conversion on Host.
Now, the MOST important part is to modify the makefile.
```bash
khadas@Khadas:~/Programs/inceptionv3$ cp makefile.linux makefile
```
My makefile is modified as follows.
```bash
khadas@Khadas:~/Programs/inceptionv3$ cat makefile
```
In fact, you still need to modify common.target a little bit accordingly. However, disclosing it in this blog is, I think, still NOT allowed. Anyway, after the modification, let’s make it.
```bash
khadas@Khadas:~/Programs/inceptionv3$ make
```
Don’t worry about the error. It just failed to run the demo, but the executable inceptionv3 has already been successfully built under folder bin_r.
```bash
khadas@Khadas:~/Programs/inceptionv3$ ll bin_r
```
4.4.3.3 Run inceptionv3
Let’s run inceptionv3 under folder bin_demo.
```bash
khadas@Khadas:~/Programs/inceptionv3$ cd bin_demo/
```
This is the original state of ALL files under bin_demo. Let’s copy our freshly built bin_r/inceptionv3 into this folder bin_demo. The size of the executable turns out to be dramatically smaller.
```bash
khadas@Khadas:~/Programs/inceptionv3/bin_demo$ cp ../bin_r/inceptionv3 ./
```
Now, let’s copy the built inception_v3.nb from the host to Khadas VIM3. The inception_v3.nb built with TensorFlow 2.1.0 on the host seems to be the same size as the one provided by Khadas.
```bash
khadas@Khadas:~/Programs/inceptionv3/bin_demo$ ll inception_v3.nb
```
Finally, let’s run the demo.
```bash
khadas@Khadas:~/Programs/inceptionv3/bin_demo$ ./inceptionv3 ./inception_v3.nb ./dog_299x299.jpg
khadas@Khadas:~/Programs/inceptionv3/bin_demo$ ./inceptionv3 ./inception_v3.nb ./goldfish_299x299.jpg
```
Comparing against imagenet_slim_labels.txt under the current folder (a lookup one-liner follows the table below), let’s take a look at our inference results. Only the FIRST inference is convincing, judging by the probabilities.
| Index | Result for dog_299x299.jpg | Result for goldfish_299x299.jpg |
|---|---|---|
| 1 | 208: ‘curly-coated retriever’, | 2: ‘tench’, |
| 2 | 209: ‘golden retriever’, | 795: ‘shower cap’, |
| 3 | 223: ‘Irish water spaniel’, | 974: ‘cliff’, |
| 4 | 268: ‘miniature poodle’, | 408: ‘altar’, |
| 5 | 185: ‘Kerry blue terrier’, | 393: ‘coho’, |
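To map a class index from the demo output back to a human-readable label, a one-liner over imagenet_slim_labels.txt does the trick (note the file may be 0- or 1-indexed relative to the demo output; adjust by one if the labels look shifted):

```bash
# print the label on line 208, the top hit for dog_299x299.jpg above
awk 'NR==208' imagenet_slim_labels.txt
```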
5. Dual Boot From Manjaro
5.1 How to Boot Images from External Media?
There are clearly 2 options:
- Dual boot by selecting devices: EMMC or TF/SD card. On Boot Images from External Media, the recommended way is Via Keys mode (Side-Buttons), the easiest and fastest way, which is the FIRST option on the page How To Boot Into Upgrade Mode. Therefore, by following the 4 steps cited from How To Boot Into Upgrade Mode, we should be able to boot into SD-USB.
5.2 How to flash Manjaro XFCE for Khadas Vim 3 from TF/SD card to EMMC?
ONLY 1 operating system is preferred. Why??? Because the Khadas VIM3 board comes with a large EMMC of size 32G.
After struggling for a VERY long time, I really want to emphasize the quality of the Type-C cable and power adaptor again. Try NOT to buy these from Taobao.
Finally, I had Manjaro XFCE for Khadas Vim 3 on SD card booted and running, as follows:
*(Screenshot: Manjaro XFCE for Khadas VIM3 up and running)*
```bash
➜ ~ ssh khadas@192.168.1.95
```
It seems Arch Linux is totally different from Debian. What can I say? Go to bed.
Fold for Covid
Today, April 17, 2020, I’ve got 2 big pieces of NEWS.
- China Airline cancelled my flight back to China in May.
- I received an email from balena encouraging us to
> contribute our spare computing power (PCs, laptops, single-board devices) to [Rosetta@Home](https://boinc.bakerlab.org/) and support vital COVID-19 research.
Well, having been using balenaEtcher for quite a while, I of course will support Baker Laboratory at W (University of Washington). There are 2 points to emphasize here:
- W used to be my dream university, but so far it’s STILL only in my dreams.
- Baker Laboratory seems to be really good at bakery.
Alright, let’s taste how they bake this COVID-19. 2 manuals to follow:
Make sure one thing: Wired Connection.
Now, visit either http://foldforcovid.local/ or the IP address of this Raspberry Pi 4, and you will see your Raspberry Pi 4 is up and running, donating its compute capacity to support COVID-19 research.
*(Screenshots: foldforcovid.local, 192.168.1.111)*
Finally, 2 additional things:
- When will the border between Canada and the USA reopen? I’d love to visit Baker Laboratory in person.
- I built my own OS for Raspberry Pi based on Raspbian, please check it out on my website https://www.longervision.cc/. Don’t forget to BUY ME A COFFEE.
Let me update a bit: Besides this Fold for Covid, there are so many activities ongoing:
AI Model Visualization By Tensorspace - 3D Interactive
Visited Green Timbers Lake again.
*(Photos: The Grassland, Ducks In the Lake, A Pair of Ducks In The Lake; The Lake, Me In Facial Mask for COVID-19, The Lake - The Other Side)*
1. About Tensorspace
- Tensorspace Playground: Have some fun FIRST
- Official Documentation: Let’s follow this manual
- Towards Data Science Blog: The BEST DIY tutorial so far
2. Let’s Have Some Fun
We FIRST create an empty project folder, here named WebTensorspace.
2.1 Follow Towards Data Science Blog FIRST
Let’s strictly follow this part of the Towards Data Science blog (cited from Towards Data Science Blog):
> Finally we need to create .html file which will output the result. Not to spend time on setting-up TensorFlow.js and JQuery I encourage you just to use my template at TensorSpace folder. The folder structure looks as following:
> - index.html — our html file to run visualization
> - lib/ — folder storing all the dependencies
> - data/ — folder containing .json file with network inputs
> - model/ — folder containing exported model
>
> For our html file we need to first import dependencies and write a TensorSpace script.
Now, let’s take a look at our project.
```bash
➜ WebTensorspace ls
```
2.1.1 lib
Three resources are referenced for downloading ALL the required libraries.
Some required libraries suggested by me:
- Chart.min.js
- TrackballControls.js
- jQuery: Do NOT forget to rename jquery-3.5.0.min.js to jquery.min.js (see the one-liner right after this list)
- signature_pad: signature_pad on github, signature_pad on JSDELIVR, signature_pad 3.0.0-beta.3
- stats.min.js
- three.min.js
- tween.cjs.js
- tensorspace.min.js
- tf.min.js
- tf.min.js.map: tensorflow on cdnjs
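Once the files are downloaded into lib/, the jQuery rename mentioned above is just:

```bash
# TensorSpace's template expects the file to be named jquery.min.js
mv lib/jquery-3.5.0.min.js lib/jquery.min.js
```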
Now, let’s take a look at what’s under folder lib.
```bash
➜ WebTensorspace ls lib
```
2.1.2 model
Let’s just use tf_keras_model.h5 provided by TensorSpace as our example. You may have to click on the Download button to have this tf_keras_model.h5 downloaded into folder model.
```bash
➜ WebTensorspace ls model
```
2.1.2.1 tensorspacejs_converter Failed to Run
Now, let’s try to run the following command:
```bash
➜ WebTensorspace tensorspacejs_converter \
```
Clearly, we can downgrade tensorflow-estimator from 2.2.0 to 2.1.0.
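The downgrade itself is a single pip command (assuming a user-level install, matching the setup below):

```bash
pip install tensorflow-estimator==2.1.0 --user
```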
```bash
➜ WebTensorspace pip show tensorflow_estimator
```
Now, we try to re-run the above tensorspacejs_converter command:
```bash
➜ WebTensorspace tensorspacejs_converter \
```
2.1.2.2 Install tfjs-converter
Please check out tfjs, enter tfjs-converter, and then install the Python package under tfjs-converter/python.
```bash
➜ python git:(master) ✗ pwd
```
2.1.2.3 Install tensorspace-converter
Please check out my modified tensorspace-converter and install the Python package. Please also keep an eye on my PR.
```bash
➜ tensorspace-converter git:(master) python setup.py install --user
```
2.1.2.4 Try tensorspacejs_converter Again
```bash
➜ WebTensorspace tensorspacejs_converter \
```
2.1.3 index.html
2.1.3.1 helloworld-empty
Let’s copy and paste TensorSpace’s example helloworld-empty.html and make some trivial modifications for 3D visualization, as follows:
```html
<!DOCTYPE html>
```
Google Coral
First snow in 2020. Actually, it is ALSO the FIRST snow for the winter from 2019 to 2020.
*(Photos: First Snow 1, First Snow 2, First Snow 3)*
Both my son and the Chinese New Year are coming. Let’s switch into celebration mode. Today, I’m making hotpot.
*(Photos: Hotpot 1, Hotpot 2, Hotpot 3)*
It looks like, almost overnight, everybody is doing edge computing. Today, we’re going to have some fun with Google Coral.
1. Google Coral USB Accelerator
Image cited from Coral official website.
Trying out the Google Coral USB Accelerator is comparatively simple. The ONLY thing to do is follow Google Doc - Get started with the USB Accelerator. Anyway, let’s test it out with the following commands.
Make sure we are able to list the device.
```bash
➜ classification git:(master) ✗ lsusb
```
We then check out Google Coral Edge TPU and test the example classify_image.py.
```bash
➜ edgetpu git:(master) pwd
```
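The classification test follows Google’s get-started doc; the full invocation (the same one used later on the Dev Board) looks like:

```bash
python3 classify_image.py \
  --model models/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite \
  --labels models/inat_bird_labels.txt \
  --input images/parrot.jpg
```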
BTW, I’m going to discuss
- Google Coral TPU
- Intel Movidius VPU
- Cambricon NPU which has been adopted in HuaWei Hikey 970 and Rockchip 3399 Pro
sooner or later. Just keep an eye on my blog.
2. Google Coral Dev Board
In the following, we’re going to discuss the Google Coral Dev Board in more detail. Image cited from the Coral official website.
2.1 Mendel Installation
2.1.1 Mendel Linux Preparation
Google Coral Mendel Linux can be downloaded from https://coral.ai/software/. In our case, we are going to try Mendel Linux 4.0.
2.1.2 Connect Dev Board Via Micro-USB Serial Port
On the host, we should be able to see:
```bash
➜ mendel-enterprise-day-13 lsusb
```
Now what you see is a black screen. After having connected the Type C power cable, you should be able to see:
```
......
```
That is shown on the monitor attached to the Google Coral Dev Board. We now need to input fastboot 0 at the u-boot=> prompt. After connecting the Type-C OTG cable, we should be able to see on the host:
```bash
➜ mendel-enterprise-day-13 fastboot devices
```
2.1.3 Flash Coral Dev Board
```bash
➜ mendel-enterprise-day-13 ls
```
2.1.4 Boot Mendel
After a while, you’ll see the login prompt.
Now, login with
- username: mendel
- password: mendel
```bash
➜ ~ mdt devices
```
You will be able to see the Google Coral Dev Board is NOW connected. If you don’t see the EXPECTED output mocha-shrimp (192.168.101.2), just unplug and re-plug the Type-C power cable.
Unfortunately, the mdt tool does NOT work properly.
```bash
➜ mendel-enterprise-day-13 mdt shell
```
This bug has been clarified on StackOverflow. By modifying $HOME/.local/lib/python3.6/site-packages/mdt/sshclient.py at line 86, from `if not self.address.startswith('192.168.100'):` to `if not self.address.startswith('192.168.10'):`, the problem is solved.
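If you’d rather script the fix than open an editor, a sed one-liner does the same thing (back up the file first; the path assumes a user-level mdt install as above):

```bash
# loosen the hard-coded subnet check in mdt's ssh client
sed -i.bak "s/startswith('192.168.100')/startswith('192.168.10')/" \
  "$HOME/.local/lib/python3.6/site-packages/mdt/sshclient.py"
```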
```bash
➜ mendel-enterprise-day-13 mdt shell
```
After activating the network via nmtui, we can NOW clearly see that a wlan0 IP is automatically allocated.
```bash
mendel@mocha-shrimp:~$ ip -c address
```
Of course, we can setup a static IP for this particular Google Coral Dev Board afterwards.
2.1.5 SSH into Mendel
In order to SSH into Mendel remotely, we need to follow Connect to a board’s shell on the host computer. You MUST pushkey before you can ssh into the board via its LAN IP, instead of the virtual USB IP, say 192.168.100.2 or 192.168.101.2.
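The key push itself is a single mdt command over the USB connection (the key file name here is just an example):

```bash
# copy your public key onto the board while it is still reachable via USB
mdt pushkey ~/.ssh/id_rsa.pub
```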
```bash
➜ ~ ssh -i ~/.ssh/id_rsa_mendel.pub mendel@192.168.1.97
```
However, for now, I’ve got NO idea why ssh NEVER works for the Google Coral Dev Board any more.
From here on, a huge change of approach.
2.2 Flash from U-Boot on an SD card
If you get unlucky and can’t even boot your board into U-Boot, you can recover the system by booting into U-Boot from an image on the SD card and then reflashing the board from your Linux host (cited from Google Coral Dev Board’s official doc). Now, fastboot devices from the host is back.
```bash
➜ mendel-enterprise-day-13 fastboot devices
```
Then, we reflash the Google Coral Dev Board.
```bash
➜ mendel-enterprise-day-13 bash flash.sh
```
Now we are able to run mdt shell successfully.
```bash
➜ mendel-enterprise-day-13 mdt shell
```
Run ssh-keygen and then pushkey, in that order:
```bash
➜ .ssh ssh-keygen
```
Then, with mdt shell, run command nmtui to activate wlan0.
Let’s briefly summarize:
2.3 Demonstration
2.3.1 edgetpu_demo --device & edgetpu_demo --stream
Let’s ignore edgetpu_demo --device, for I ALMOST NEVER work in GUI mode. The demo video is on my YouTube channel; please refer to:
On console, it just displays as:
```bash
mendel@deft-orange:~$ edgetpu_demo --stream
```
2.3.2 Classification
Refer to Install the TensorFlow Lite library.
```bash
mendel@green-snail:~/.local$ pip3 install https://dl.google.com/coral/python/tflite_runtime-2.1.0.post1-cp37-cp37m-linux_aarch64.whl
mendel@green-snail:~/Downloads/tflite/python/examples/classification$ python3 classify_image.py --model models/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite --labels models/inat_bird_labels.txt --input images/parrot.jpg
```
2.3.3 Camera
2.3.3.1 Google Coral camera
The Google Coral camera can be detected as a video device:
```bash
mendel@mocha-shrimp:~$ v4l2-ctl --list-formats-ext --device /dev/video0
```
2.3.3.2 Face Detection Using Google TPU
My real-time face detection video on YouTube clearly shows the Google TPU is seriously powerful.
On console, it displays:
```bash
mendel@deft-orange:~$ edgetpu_detect_server \
```
2.3.4 Bugs
```bash
mendel@green-snail:~$ edgetpu_demo --stream
mendel@green-snail:~/Downloads/edgetpu/test_data$ edgetpu_detect_server --model ./mobilenet_ssd_v2_face_quant_postprocess_edgetpu.tflite
```
To solve this problem, run the following command:
```bash
mendel@green-snail:~$ sudo systemctl restart weston
```
AI Model Visualization By Tensorboard
One of the most widely used tools to visualize AI models is the well-known TensorBoard. In this blog, we focus on TensorBoard model graphs.
Well, let me leave this for the future. TensorBoard seems to be a comprehensive yet complicated solution. It’s NOT my type.
AI Model Visualization By PlotNeuralNet - 3D Static
*(Photos: Beautiful Sakura 1 through 8, The Willow)*
Spent some time with the beautiful flowers.
Okay, today, let’s briefly talk about PlotNeuralNet for 3D visualization of various AI model architectures. We just follow PlotNeuralNet and generate some results for fun.
1. pyexamples
1.1 test_simple
```bash
➜ pyexamples git:(master) ✗ ../tikzmake.sh test_simple
```
The following shows that the generated .pdf file can be displayed in hexo:
*(embedded test_simple.pdf)*
1.2 unet
```bash
➜ pyexamples git:(master) ✗ ../tikzmake.sh unet
```
*(embedded unet.pdf)*
Pretty neat, isn’t it?
2. Provided Examples
Let’s FIRST take a look at how many LaTeX files there are under the folder examples.
```bash
➜ PlotNeuralNet git:(master) ✗ cd examples
```
We then enter each subfolder and run command ../../tikzmake.sh example_name, for instance:
```bash
➜ examples git:(master) ✗ cd fcn32s
```
It’s WEIRD that, so far, each .tex file is removed after the .pdf of the model architecture has been generated. Anyway, let’s take a look at all the generated files in .png format.
2.1 fcn32
*(generated fcn32 diagram)*
2.2 fcn8
*(generated fcn8 diagram)*
2.3 HED
*(generated HED diagram)*
2.4 SoftmaxLoss
*(generated SoftmaxLoss diagram)*
2.5 Unet
*(generated Unet diagram)*
2.6 Unet_ushape
*(generated Unet_ushape diagram)*
2.7 vgg16
*(generated vgg16 diagram)*
AI Model Visualization By Netron - 2D Static
*(Photos: Beautiful Sakura 1 through 9)*
For simplicity, let’s just pick the Python Server way to install Netron: `pip install --user netron`. After installation:
```bash
➜ ~ pip show netron
```
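Launching the viewer is then a one-liner; the model file name is just an example from the list below:

```bash
# serve the model in a local browser-based viewer
netron bvlcalexnet-9.onnx
```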
We then test out some popular models, including
- an ONNX AlexNet model: bvlcalexnet-9.onnx
- a Caffe model trained by BAIR: finetune_flickr_style.caffemodel
- a TensorFlow model provided by Google Brain on Kaggle: Inception-v3
*(Netron renderings: bvlcalexnet-9.onnx, finetune_flickr_style.caffemodel)*
OpenVINO on Raspberry Pi 4 with Movidius Neural Compute Stick II
Today is Easter Sunday; Boris Johnson, Prime Minister of the United Kingdom, recovered from COVID-19. Canada has been suffering from COVID-19 for a month already. I’m re-writing this blog, FIRST written in September 2019 and UPDATED ONCE in December 2019.
Merry Christmas and happy new year, everybody. I’ve been back in Vancouver for several days. These days, I’m updating this blog FIRST written in September 2019. 2020 is coming, and we’re getting 1 year older. Kind of sad, huh?
Okay… No matter what, let’s enjoy the song first: WE ARE YOUNG. Today, I joined the Free Software Foundation and started my journey of supporting Open Source Software WITH CASH. For me, it’s not about poverty or wealth. It’s ALL about FAITH.
Writing about the Raspberry Pi is a way to say GOODBYE to my Raspberry Pi 3B and WELCOME the Raspberry Pi 4 at the same time. Our target today is to build an AI edge computing node as shown in the following YouTube video:
1. About Raspberry Pi 4
1.1 Raspberry Pi 4 vs. Raspberry Pi 3B+
Before we start, let’s carry out a simple comparison between Raspberry Pi 4 and Raspberry Pi 3B+.
1.2 Raspbian Installation
```bash
➜ raspbian sudo dd bs=4M if=2020-02-13-raspbian-buster.img of=/dev/mmcblk0 conv=fsync
```
1.3 BCM2711 is detected as BCM2835
```bash
➜ ~ cat /proc/cpuinfo
```
This issue seems to be a well-known bug. Raspberry Pi 4’s specification can be retrieved from The MagPi Magazine. More details about the development history of Raspberry Pi can be found on Wikipedia.
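A more reliable way to identify the board than /proc/cpuinfo is the device tree (a standard check on Raspbian):

```bash
# prints e.g. "Raspberry Pi 4 Model B Rev 1.1"
tr -d '\0' < /proc/device-tree/model; echo
```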
2. Movidius Neural Compute Stick on Raspberry Pi 4
We then follow the 2 blogs Run NCS Applications on Raspberry Pi and Adding AI to the Raspberry Pi with the Movidius Neural Compute Stick to test out the Intel Movidius Neural Compute Stick 2:
*(Photos: Intel Movidius Neural Compute Stick 2, Intel Movidius Neural Compute Stick 1)*
The Intel Movidius Neural Compute Stick 1 is NOT listed on Intel’s official website any more, but GitHub support for it can still be found at https://github.com/movidius/ncsdk.
2.1 NCSDK Installation
We FIRST need to have ncsdk installed. Yup, here, as described in Run NCS Applications on Raspberry Pi, we carry out the installation directly under folder ...../ncsdk/api/src.
```bash
➜ src make
```
2.2 Test NCSDK Example Apps
2.2.1 For Movidius NCS 1
```bash
➜ hello_ncs_py lsusb
```
2.2.2 For Movidius NCS 2
```bash
➜ hello_ncs_py lsusb
```
The above bug has ALREADY been explained in the main online resources:
- ncDeviceOpen:527 ncDeviceOpen() XLinkBootRemote returned error 3
- Make the examples Error! on NCS2 - [Error 7] Toolkit Error: USB Failure. Code: Error opening device
- etc.
All these hint that OpenVINO should be utilized instead of NCSDK2.
2.3 mvnc Python Package
```bash
➜ ~ python
```
3. Transitioning from Intel Movidius Neural Compute SDK to Intel OpenVINO
By following Intel’s official documentation Transitioning from Intel® Movidius™ Neural Compute SDK to Intel® Distribution of OpenVINO™ toolkit, we are transitioning to OpenVINO, which supports both Intel NCS 2 and Intel NCS 1.
3.1 Install OpenVINO for Raspbian
For the installation details of OpenVINO, please refer to the following 2 documentations:
We now extract the MOST up-to-date l_openvino_toolkit_runtime_raspbian_p_2020.3.220.tgz under folder /opt/intel/openvino, roughly as sketched below.
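A sketch of the extraction step, following Intel’s Raspbian install guide (destination assumed writable via sudo):

```bash
sudo mkdir -p /opt/intel/openvino
sudo tar -xf l_openvino_toolkit_runtime_raspbian_p_2020.3.220.tgz \
  --strip 1 -C /opt/intel/openvino
```

Let’s take a brief look at this folder: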
```bash
➜ openvino pwd
```
Clearly, by comparing with the OpenVINO™ Toolkit - Deep Learning Deployment Toolkit repository, we know that the open-source version of deployment_tools contains more content than the trimmed version for Raspbian. We’ll need model-optimizer for sure. Therefore, we check out dldt and put it under folder /opt/intel.
```bash
➜ intel pwd
```
3.2 Build OpenVINO Samples
Before building the OpenVINO samples, please have OpenCV built from source and installed. You can of course directly run build_samples.sh to build ALL samples; however, I personally recommend building and installing OpenCV from source FIRST.
Note: Be sure to enable -DCMAKE_CXX_FLAGS='-march=armv7-a' while building dldt/samples, exactly as for the sources under l_openvino_toolkit_runtime_raspbian_p_2020.1.023/inference_engine/samples/cpp. In fact, building l_openvino_toolkit_runtime_raspbian_p_2020.1.023/inference_engine/samples/c also requires -DCMAKE_CXX_FLAGS='-march=armv7-a'.
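A minimal sketch of the sample build with that flag (paths and job count are illustrative):

```bash
cd /opt/intel/openvino/inference_engine/samples/cpp
mkdir -p build && cd build
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS='-march=armv7-a' ..
make -j4
```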
After having successfully built C/C++ samples, let’s enter folder /opt/intel/openvino/inference_engine/samples.
3.3 Device Query
3.3.1 C
There is NO such exe file as hello_query_device_c.
3.3.2 C++
For NCS 2
```bash
➜ samples $ ./cpp/build/armv7l/Release/hello_query_device
```
For NCS 1
```bash
➜ samples ./cpp/build/armv7l/Release/hello_query_device
```
3.3.3 Python
For NCS 2
```bash
➜ samples python ./python/hello_query_device/hello_query_device.py
```
For NCS 1
```bash
➜ samples python ./python/hello_query_device/hello_query_device.py
```
3.4 Object Detection
Please refer to Device-specific Plugin Libraries for ALL possible device types.
We then download two model files as given in the blog Install OpenVINO™ toolkit for Raspbian* OS.
Please note that:
- face-detection-adas-0001.xml and face-detection-adas-0001.bin are not working properly. Please refer to OpenVINO Issue 848966.
- For now, please use the following model files from year 2019: face-detection-adas-0001.xml and face-detection-adas-0001.bin.
Afterwards, start running Object Detection for 2 images: me.jpg and parents.jpg:
3.4.1 C
For NCS 2
```bash
➜ samples ./c/build/armv7l/Release/object_detection_sample_ssd_c -m face-detection-adas-0001.xml -d MYRIAD -i parents.jpg
➜ samples ./c/build/armv7l/Release/object_detection_sample_ssd_c -m face-detection-adas-0001.xml -d MYRIAD -i me.jpg
```
For NCS 1
```bash
➜ samples pwd
➜ samples ./c/build/armv7l/Release/object_detection_sample_ssd_c -m face-detection-adas-0001.xml -d MYRIAD -i me.jpg
```
3.4.2 C++
For NCS 2
```bash
➜ samples ./cpp/build/armv7l/Release/object_detection_sample_ssd -m face-detection-adas-0001.xml -d MYRIAD -i me.jpg
➜ samples ./cpp/build/armv7l/Release/object_detection_sample_ssd -m face-detection-adas-0001.xml -d MYRIAD -i parents.jpg
```
For NCS 1
```bash
➜ samples ./cpp/build/armv7l/Release/object_detection_sample_ssd -m face-detection-adas-0001.xml -d MYRIAD -i me.jpg
➜ samples ./cpp/build/armv7l/Release/object_detection_sample_ssd -m face-detection-adas-0001.xml -d MYRIAD -i parents.jpg
```
3.4.3 Python
For NCS 2
```bash
➜ samples python ./python/object_detection_sample_ssd/object_detection_sample_ssd.py -m face-detection-adas-0001.xml -d MYRIAD -i me.jpg
➜ samples python ./python/object_detection_sample_ssd/object_detection_sample_ssd.py -m face-detection-adas-0001.xml -d MYRIAD -i parents.jpg
```
For NCS 1
```bash
➜ samples python ./python/object_detection_sample_ssd/object_detection_sample_ssd.py -m face-detection-adas-0001.xml -d MYRIAD -i me.jpg
➜ samples python ./python/object_detection_sample_ssd/object_detection_sample_ssd.py -m face-detection-adas-0001.xml -d MYRIAD -i parents.jpg
```
3.4.4 Results
3.5 Model Optimization
In this section, we are going to test out another OpenVINO example: Image Classification C++ Sample Async. After reading this blog, it wouldn’t be hard for us to notice the MOST important thing we’re missing here is the model file alexnet_fp32.xml. Let’s just keep it in mind for now.
And let’s review our previous example, Object Detection: we downloaded the face-detection-adas-0001 model online and used it directly. So, questions:
- Are we able to download alexnet_fp32.xml from online this time again?
- Where can we download a whole bunch of open source models?
3.5.1 Open Model Zoo
It wouldn’t be hard for us to find the OpenVINO™ Toolkit - Open Model Zoo repository, under which the model face-detection-adas-0001 is just sitting there. However, face-detection-adas-0001.xml and face-detection-adas-0001.bin are missing.
Let’s check out open_model_zoo and put it under folder /opt/intel.
```bash
➜ intel pwd
```
Then, let’s enter folder face-detection-adas-0001 under open_model_zoo and take a look:
```bash
➜ face-detection-adas-0001 git:(master) pwd
```
It seems that each folder under intel and public contains the detailed info of one model. For instance, the file intel/face-detection-adas-0001/model.yml contains all the info about model face-detection-adas-0001. However, what we really need are a .xml file and a .bin file. In the following, we are going to generate these 2 files, optimized specifically for Movidius, by following Intel OpenVINO toolkit issue 798441.
3.5.2 Download Caffe Model Files
```bash
➜ downloader git:(master) pwd
```
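For reference, fetching AlexNet with the Open Model Zoo downloader is a single command (a sketch; run from the downloader directory shown above):

```bash
python3 downloader.py --name alexnet
```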
Clearly, three files, including the large model file alexnet.caffemodel, have been downloaded.
3.5.3 Model Optimization
Now, we are going to optimize the downloaded Caffe model and make it feedable to OpenVINO.
```bash
➜ model-optimizer git:(2020) ✗ pwd
```
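The optimization step itself is roughly the following (a hedged sketch: FP16 is what MYRIAD devices expect, and the input paths assume the downloader’s default layout):

```bash
python3 mo.py \
  --input_model public/alexnet/alexnet.caffemodel \
  --input_proto public/alexnet/alexnet.prototxt \
  --data_type FP16
```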
And, let’s take a look at what’s generated under folder /opt/intel/dldt/model-optimizer.
```bash
➜ model-optimizer git:(2020) ✗ pwd
```
Note: You may meet the following ERRORs during model optimization.
```
[ ERROR ]
```
Clearly, for networkx, the ERROR message is kind of ridiculous.
Anyway, if you meet the above 2 errors, please DOWNGRADE your packages as follows:
```bash
➜ ~ pip install protobuf==3.6.1 --user
```
3.6 Image Classification
3.6.1 C
For NCS 2
```bash
➜ samples ./c/build/armv7l/Release/hello_classification_c /opt/intel/dldt/model-optimizer/alexnet.xml me.jpg HETERO:MYRIAD
➜ samples ./c/build/armv7l/Release/hello_classification_c /opt/intel/dldt/model-optimizer/alexnet.xml parents.jpg HETERO:MYRIAD
```
For NCS 1
```bash
➜ samples ./c/build/armv7l/Release/hello_classification_c /opt/intel/dldt/model-optimizer/alexnet.xml me.jpg HETERO:MYRIAD
➜ samples ./c/build/armv7l/Release/hello_classification_c /opt/intel/dldt/model-optimizer/alexnet.xml parents.jpg HETERO:MYRIAD
```
3.6.2 C++
For NCS 2
```bash
➜ samples ./cpp/build/armv7l/Release/hello_classification /opt/intel/dldt/model-optimizer/alexnet.xml me.jpg HETERO:MYRIAD
➜ samples ./cpp/build/armv7l/Release/hello_classification /opt/intel/dldt/model-optimizer/alexnet.xml parents.jpg HETERO:MYRIAD
```
For NCS 1
```bash
➜ samples ./cpp/build/armv7l/Release/hello_classification /opt/intel/dldt/model-optimizer/alexnet.xml me.jpg HETERO:MYRIAD
➜ samples ./cpp/build/armv7l/Release/hello_classification /opt/intel/dldt/model-optimizer/alexnet.xml parents.jpg HETERO:MYRIAD
```
3.6.3 Python
For NCS 2
```bash
➜ samples python ./python/classification_sample/classification_sample.py -m /opt/intel/dldt/model-optimizer/alexnet.xml -i me.jpg -d HETERO:MYRIAD -nt 10
➜ samples python ./python/classification_sample/classification_sample.py -m /opt/intel/dldt/model-optimizer/alexnet.xml -i parents.jpg -d HETERO:MYRIAD -nt 10
```
For NCS 1
```bash
➜ samples python ./python/classification_sample/classification_sample.py -m /opt/intel/dldt/model-optimizer/alexnet.xml -i me.jpg -d HETERO:MYRIAD -nt 10
➜ samples python ./python/classification_sample/classification_sample.py -m /opt/intel/dldt/model-optimizer/alexnet.xml -i parents.jpg -d HETERO:MYRIAD -nt 10
```
3.6.4 Results
Classification Result for me.jpg
| Result from NCS 2 | Result from NCS 1 |
|---|---|
| 838: ‘sunscreen, sunblock, sun blocker’, | 978: ‘seashore, coast, seacoast, sea-coast’, |
| 978: ‘seashore, coast, seacoast, sea-coast’, | 977: ‘sandbar, sand bar’, |
| 977: ‘sandbar, sand bar’, | 838: ‘sunscreen, sunblock, sun blocker’, |
| 975: ‘lakeside, lakeshore’, | 975: ‘lakeside, lakeshore’, |
| 903: ‘wig’, | 903: ‘wig’, |
| 638: ‘maillot’, | 638: ‘maillot’, |
| 976: ‘promontory, headland, head, foreland’, | 433: ‘bathing cap, swimming cap’, |
| 433: ‘bathing cap, swimming cap’, | 976: ‘promontory, headland, head, foreland’, |
| 112: ‘conch’, | 112: ‘conch’, |
| 639: ‘maillot, tank suit’, | 639: ‘maillot, tank suit’, |
Classification Result for parents.jpg
| Result from NCS 2 | Result from NCS 1 |
|---|---|
| 707: ‘pay-phone, pay-station’, | 707: ‘pay-phone, pay-station’, |
| 577: ‘gong, tam-tam’, | 577: ‘gong, tam-tam’, |
| 704: ‘parking meter’, | 813: ‘spatula’, |
| 955: ‘jackfruit, jak, jack’, | 704: ‘parking meter’, |
| 813: ‘spatula’, | 910: ‘wooden spoon’, |
| 910: ‘wooden spoon’, | 515: ‘cowboy hat, ten-gallon hat’, |
| 515: ‘cowboy hat, ten-gallon hat’, | 955: ‘jackfruit, jak, jack’, |
| 605: ‘iPod’, | 523: ‘crutch’, |
| 918: ‘crossword puzzle, crossword’, | 808: ‘sombrero’, |
| 523: ‘crutch’, | 918: ‘crossword puzzle, crossword’, |
classid table:
4. OpenCV DNN with OpenVINO’s Inference Engine
For the example openvino_fd_myriad.py given in Install OpenVINO™ toolkit for Raspbian* OS, the TOUGH part is building OpenCV with OpenVINO’s Inference Engine. NEVER forget to `export InferenceEngine_DIR="/opt/intel/openvino/inference_engine/share"` and enable WITH_INF_ENGINE when rebuilding OpenCV.
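A minimal sketch of the relevant rebuild switches (run from an OpenCV build directory; all other options omitted):

```bash
export InferenceEngine_DIR="/opt/intel/openvino/inference_engine/share"
cmake -DWITH_INF_ENGINE=ON ../opencv
make -j4 && sudo make install
```

With OpenCV rebuilt, my modified version of openvino_fd_myriad.py starts as follows: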
```python
import sys, argparse
```
4.1 Intel64
On my laptop, of course, we are building OpenCV for the Intel64 architecture with NVidia GPU + CUDA support, since GPU acceleration of OpenCV’s dnn module requires either CUDA or OpenCL.
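For reference, the CUDA-enabled dnn build boils down to a couple of extra CMake switches (a sketch; CUDA paths and the rest of the configuration omitted):

```bash
cmake -DWITH_CUDA=ON -DOPENCV_DNN_CUDA=ON -DWITH_INF_ENGINE=ON ../opencv
```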
My test result of openvino_fd_myriad.py shows the performance of adopted model face-detection-adas-0001 is NOT as good as expected.
For NCS 2
```bash
➜ OpenVINO_Examples python ./openvino_fd_myriad.py -x /opt/intel/openvino/inference_engine/samples/cpp/build_18.04/intel64/Release/face-detection-adas-0001.xml -b /opt/intel/openvino/inference_engine/samples/cpp/build_18.04/intel64/Release/face-detection-adas-0001.bin -i ./parents.jpg
```
For NCS 1
```bash
➜ OpenVINO_Examples python ./openvino_fd_myriad.py -x /opt/intel/openvino/inference_engine/samples/cpp/build_18.04/intel64/Release/face-detection-adas-0001.xml -b /opt/intel/openvino/inference_engine/samples/cpp/build_18.04/intel64/Release/face-detection-adas-0001.bin -i ./parents.jpg
```
Results
*(Photos: Parents At First Starbucks; Parents Face Detection By OpenCV DNN, NCS 1 and NCS 2; Me; Me Face Detection By OpenCV DNN, NCS 1 and NCS 2)*
4.2 armv7l
For NCS 2
```bash
➜ Programs python ./openvino_fd_myriad.py -x /opt/intel/openvino/inference_engine/samples/face-detection-adas-0001.xml -b /opt/intel/openvino/inference_engine/samples/face-detection-adas-0001.bin -i ./parents.jpg
```
For NCS 1
```bash
➜ Programs python ./openvino_fd_myriad.py -x /opt/intel/openvino/inference_engine/samples/face-detection-adas-0001.xml -b /opt/intel/openvino/inference_engine/samples/face-detection-adas-0001.bin -i ./parents.jpg
```
Results
*(Photos: Parents At First Starbucks; Parents Face Detection By OpenCV DNN, NCS 1 and NCS 2; Me; Me Face Detection By OpenCV DNN, NCS 1 and NCS 2)*
5. My Built Raspbian ISO With OpenCV4 + OpenVINO
Finally, you are welcome to try my built image rpi4-raspbian20200213-opencv4.3-openvino2020.1.023-ncsdk2.10.01.01.img, which is about 9.5G and composed of:
Everything has ALREADY been updated and built successfully, EXCEPT object_detection_sample_ssd. In addition, rpi4-raspbian20200213-opencv4.3-openvino2020.1.023-ncsdk2.10.01.01.img ALSO works properly on my Raspberry Pi 3 Model B v1.2, manufactured in 2015.