Hi all, I flew back and forth between Shenzhen, China and Vancouver, Canada in January 2018, and took a holiday break in California, USA during the Chinese Spring Festival in February 2018. Now I'm back in Vancouver and writing this blog. Today, we are going to talk about how to flash the most recent Linux kernel onto a NanoPi NEO. The board looks like (cited from NanoPi NEO ):

NanoPi NEO

Different from our previous blog, where U-Boot and the Linux kernel were manually built, here we download and install the pre-built Debian image from Armbian directly.

Before starting, the following paragraph is cited to explain the relationships among Allwinner, sunxi and Linaro.

In march 2014, Allwinner joined Linaro as part of the new linaro digital home group. After this, Allwinner stopped communicating with the sunxi community, as linaro membership apparently satisfies the marketing need to be seen as an open source friendly company. Despite Linaro membership, Allwinner still violates the GPL on many counts. (cited from http://linux-sunxi.org/Allwinner .)

PART A: Install Debian Server Built By Armbian onto NanoPi NEO

1. Download Armbian Debian Server for NanoPi NEO

We FIRST visit the website https://www.armbian.com/nanopi-neo/ and click the Debian server -- mainline kernel icon; a file named Armbian_5.38_Nanopineo_Debian_stretch_next_4.14.14.7z will be downloaded automatically.

Then, we extract this .7z file by

$ 7za e Armbian_5.38_Nanopineo_Debian_stretch_next_4.14.14.7z

7-Zip (A) [64] 9.20 Copyright (c) 1999-2010 Igor Pavlov 2010-11-18
p7zip Version 9.20 (locale=en_CA.UTF-8,Utf16=on,HugeFiles=on,8 CPUs)

Processing archive: Armbian_5.38_Nanopineo_Debian_stretch_next_4.14.14.7z

Extracting Armbian_5.38_Nanopineo_Debian_stretch_next_4.14.14.img
Extracting armbian.txt
Extracting armbian.txt.asc
Extracting Armbian_5.38_Nanopineo_Debian_stretch_next_4.14.14.img.asc
Extracting sha256sum.sha

Everything is Ok

Files: 5
Size: 1396723567
Compressed: 256174798
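
Optionally, you can verify the extracted image against the sha256sum.sha file shipped in the same archive before writing it, e.g. simply by running sha256sum -c sha256sum.sha. The following Python sketch (just an illustration, not part of the original workflow) performs the same check manually, assuming the standard "checksum  filename" format:

import hashlib

# Read the expected digests shipped with the archive.
expected = {}
with open("sha256sum.sha") as f:
    for line in f:
        digest, name = line.split()
        expected[name.lstrip("*")] = digest

# Hash the extracted image in 1 MiB chunks and compare.
img = "Armbian_5.38_Nanopineo_Debian_stretch_next_4.14.14.img"
h = hashlib.sha256()
with open(img, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)

print("OK" if h.hexdigest() == expected.get(img) else "MISMATCH")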

2. Install Armbian Debian Server for NanoPi NEO

After the extracted image file is prepared, it's time to install the Armbian Debian Server onto our TF card. We FIRST format the TF card:

$ sudo mkfs.ext4 /dev/mmcblk0 
[sudo] password for jiapei:
mke2fs 1.42.13 (17-May-2015)
Found a dos partition table in /dev/mmcblk0
Proceed anyway? (y,n) y
Discarding device blocks: done
Creating filesystem with 3942144 4k blocks and 987360 inodes
Filesystem UUID: bf3f62c9-283d-48fe-a0f3-56ab357f7b94
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done

Afterwards, use dd to write the downloaded Armbian Debian Server image onto the TF card.

$ sudo dd bs=4M if=Armbian_5.38_Nanopineo_Debian_stretch_next_4.14.14.img of=/dev/mmcblk0 conv=fsync
[sudo] password for jiapei:
333+0 records in
333+0 records out
1396703232 bytes (1.4 GB, 1.3 GiB) copied, 157.712 s, 8.9 MB/s

PART B: Boot Into Armbian, Network Configuration and Armbian Upgrading

As known, the NanoPi NEO comes with NEITHER an HDMI interface for display NOR a Wi-Fi interface for wireless network connection. Therefore, we may have to find a wired cable and connect our NanoPi NEO to a router, which also connects to our host computer. Afterwards, we'll have to find our NanoPi NEO's connection via the router settings. Here in my case, I'm using a Cisco DPC3848V DOCSIS 3.0 Gateway, with the gateway IP 192.168.0.1. By visiting http://192.168.0.1 -> Setup -> LAN Setup -> Connected Devices Summary, you should be able to find out which is the newly connected network device, namely the NanoPi NEO.

1. SSH Into NanoPi NEO

Through our host computer, SSH into NanoPi NEO:

$ ssh root@192.168.0.76
The authenticity of host '192.168.0.76 (192.168.0.76)' can't be established.
ECDSA key fingerprint is SHA256:L0qZCYEGGallJo9jSpM63TWAYJ4vU/JzIbMJS80Lx8I.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.0.76' (ECDSA) to the list of known hosts.
root@192.168.0.76's password:

Then you input 1234 as the default password, and you'll be asked to change your password right away.

You are required to change your password immediately (root enforced)
_ _ ____ _ _ _
| \ | | __ _ _ __ ___ | _ \(_) | \ | | ___ ___
| \| |/ _` | '_ \ / _ \| |_) | | | \| |/ _ \/ _ \
| |\ | (_| | | | | (_) | __/| | | |\ | __/ (_) |
|_| \_|\__,_|_| |_|\___/|_| |_| |_| \_|\___|\___/


Welcome to ARMBIAN 5.38 stable Debian GNU/Linux 9 (stretch) 4.14.14-sunxi
System load: 0.07 0.04 0.07 Up time: 15 min
Memory usage: 16 % of 240MB IP: 192.168.0.76
CPU temp: 37°C
Usage of /: 7% of 15G

[ General system configuration (beta): armbian-config ]

New to Armbian? Check the documentation first: https://docs.armbian.com
Changing password for root.
(current) UNIX password:
Enter new UNIX password:
Retype new UNIX password:


Thank you for choosing Armbian! Support: www.armbian.com

And then you will be asked to create a new user account.

Creating a new user account. Press <Ctrl-C> to abort

Please provide a username (eg. your forename): nanopineo
Trying to add user nanopineo
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
LANGUAGE = "en_US.UTF-8",
LC_ALL = (unset),
LC_MEASUREMENT = "en_CA.UTF-8",
LC_PAPER = "en_CA.UTF-8",
LC_MONETARY = "en_CA.UTF-8",
LC_NAME = "en_CA.UTF-8",
LC_ADDRESS = "en_CA.UTF-8",
LC_NUMERIC = "en_CA.UTF-8",
LC_MESSAGES = "en_US.UTF-8",
LC_TELEPHONE = "en_CA.UTF-8",
LC_IDENTIFICATION = "en_CA.UTF-8",
LC_TIME = "en_CA.UTF-8",
LANG = "en_US.UTF-8"
are supported and installed on your system.
perl: warning: Falling back to a fallback locale ("en_US.UTF-8").
Adding user `nanopineo' ...
Adding new group `nanopineo' (1000) ...
Adding new user `nanopineo' (1000) with group `nanopineo' ...
Creating home directory `/home/nanopineo' ...
Copying files from `/etc/skel' ...
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
Changing the user information for nanopineo
Enter the new value, or press ENTER for the default
Full Name []:
Room Number []:
Work Phone []:
Home Phone []:
Other []:
Is the information correct? [Y/n] Y

Dear nanopineo, your account nanopineo has been created and is sudo enabled.
Please use this account for your daily work from now on.

root@nanopineo:~#

2. Network Configuration

1) Double-check the IP Address

root@nanopineo:~# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.0.76 netmask 255.255.255.0 broadcast 192.168.0.255
inet6 fe80::dc0b:3cff:fe76:8606 prefixlen 64 scopeid 0x20<link>
ether de:0b:3c:76:86:06 txqueuelen 1000 (Ethernet)
RX packets 1544 bytes 145676 (142.2 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 667 bytes 63483 (61.9 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device interrupt 33

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

2) Modify /etc/network/interfaces

I noticed that whenever you reboot the NanoPi NEO, the IP address may change. This is because the MAC address of the NanoPi NEO changes randomly after each Armbian reboot. Therefore, it's recommended to manually set a static MAC address for the NanoPi NEO in the file /etc/network/interfaces.

root@nanopineo:~# vim /etc/network/interfaces

Just make sure to uncomment the hwaddress ether line (the one marked "# if you want to set MAC manually") and write in a fixed MAC address. The final /etc/network/interfaces for my NanoPi NEO is as follows:

root@nanopineo:~# cat /etc/network/interfaces
source /etc/network/interfaces.d/*

# Wired adapter #1
allow-hotplug eth0
no-auto-down eth0
iface eth0 inet dhcp
#address 192.168.0.100
#netmask 255.255.255.0
#gateway 192.168.0.1
#dns-nameservers 8.8.8.8 8.8.4.4
hwaddress ether d2:d0:78:47:06:b3 # if you want to set MAC manually
# pre-up /sbin/ifconfig eth0 mtu 3838 # setting MTU for DHCP, static just: mtu 3838


# Wireless adapter #1
# Armbian ships with network-manager installed by default. To save you time
# and hassles consider using 'sudo nmtui' instead of configuring Wi-Fi settings
# manually. The below lines are only meant as an example how configuration could
# be done in an anachronistic way:
#
#allow-hotplug wlan0
#iface wlan0 inet dhcp
#address 192.168.0.100
#netmask 255.255.255.0
#gateway 192.168.0.1
#dns-nameservers 8.8.8.8 8.8.4.4
# wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf
# Disable power saving on compatible chipsets (prevents SSH/connection dropouts over WiFi)
#wireless-mode Managed
#wireless-power off

# Local loopback
auto lo
iface lo inet loopback

And before rebooting Armbian, we also need to reserve a FIXED IP address in the router (here in my case: 192.168.0.36) for the device with the static MAC address we just specified (here in my case: d2:d0:78:47:06:b3).


3. Reboot Armbian

Now, reboot Armbian.

root@nanopineo:~# sudo reboot

After a while, SSH into Armbian with the newly created user nanopineo:

$ ssh nanopineo@192.168.0.36
The authenticity of host '192.168.0.36 (192.168.0.36)' can't be established.
ECDSA key fingerprint is SHA256:L0qZCYEGGallJo9jSpM63TWAYJ4vU/JzIbMJS80Lx8I.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.0.36' (ECDSA) to the list of known hosts.
nanopineo@192.168.0.36's password:
_ _ ____ _ _ _
| \ | | __ _ _ __ ___ | _ \(_) | \ | | ___ ___
| \| |/ _` | '_ \ / _ \| |_) | | | \| |/ _ \/ _ \
| |\ | (_| | | | | (_) | __/| | | |\ | __/ (_) |
|_| \_|\__,_|_| |_|\___/|_| |_| |_| \_|\___|\___/


Welcome to ARMBIAN 5.38 stable Debian GNU/Linux 9 (stretch) 4.14.14-sunxi
System load: 0.99 0.30 0.10 Up time: 0 min
Memory usage: 16 % of 240MB IP: 192.168.0.36
CPU temp: 43°C
Usage of /: 8% of 15G

[ 0 security updates available, 3 updates total: apt upgrade ]
Last check: 2018-03-01 06:30

[ General system configuration (beta): armbian-config ]


nanopineo@nanopineo:~$

4. Armbian Upgrading

Then we simply upgrade Armbian.

nanopineo@nanopineo:~$ sudo apt update

We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:

#1) Respect the privacy of others.
#2) Think before you type.
#3) With great power comes great responsibility.

[sudo] password for nanopineo:
Hit:1 http://security.debian.org stretch/updates InRelease
Hit:4 http://apt.armbian.com stretch InRelease
Ign:2 http://cdn-fastly.deb.debian.org/debian stretch InRelease
Hit:3 http://cdn-fastly.deb.debian.org/debian stretch-updates InRelease
Hit:5 http://cdn-fastly.deb.debian.org/debian stretch-backports InRelease
Hit:6 http://cdn-fastly.deb.debian.org/debian stretch Release
Reading package lists... Done
Building dependency tree
Reading state information... Done
3 packages can be upgraded. Run 'apt list --upgradable' to see them.
nanopineo@nanopineo:~$ apt list --upgradable
Listing... Done
linux-dtb-next-sunxi/stretch 5.41 armhf [upgradable from: 5.38]
linux-image-next-sunxi/stretch 5.41 armhf [upgradable from: 5.38]
tzdata/stable-updates 2018c-0+deb9u1 all [upgradable from: 2017c-0+deb9u1]
nanopineo@nanopineo:~$ sudo apt upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
The following packages will be upgraded:
linux-dtb-next-sunxi linux-image-next-sunxi tzdata
3 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 20.2 MB of archives.
After this operation, 240 kB of additional disk space will be used.
Do you want to continue? [Y/n] Y
Get:1 http://cdn-fastly.deb.debian.org/debian stretch-updates/main armhf tzdata all 2018c-0+deb9u1 [264 kB]
Get:2 http://apt.armbian.com stretch/main armhf linux-dtb-next-sunxi armhf 5.41 [175 kB]
Get:3 http://apt.armbian.com stretch/main armhf linux-image-next-sunxi armhf 5.41 [19.7 MB]
Fetched 20.2 MB in 14s (1387 kB/s)
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
LANGUAGE = "en_US.UTF-8",
LC_ALL = (unset),
LC_TIME = "en_CA.UTF-8",
LC_MONETARY = "en_CA.UTF-8",
LC_ADDRESS = "en_CA.UTF-8",
LC_TELEPHONE = "en_CA.UTF-8",
LC_MESSAGES = "en_US.UTF-8",
LC_NAME = "en_CA.UTF-8",
LC_MEASUREMENT = "en_CA.UTF-8",
LC_IDENTIFICATION = "en_CA.UTF-8",
LC_NUMERIC = "en_CA.UTF-8",
LC_PAPER = "en_CA.UTF-8",
LANG = "en_US.UTF-8"
are supported and installed on your system.
perl: warning: Falling back to a fallback locale ("en_US.UTF-8").
locale: Cannot set LC_ALL to default locale: No such file or directory
Preconfiguring packages ...
(Reading database ... 32833 files and directories currently installed.)
Preparing to unpack .../tzdata_2018c-0+deb9u1_all.deb ...
Unpacking tzdata (2018c-0+deb9u1) over (2017c-0+deb9u1) ...
Preparing to unpack .../linux-dtb-next-sunxi_5.41_armhf.deb ...
Unpacking linux-dtb-next-sunxi (5.41) over (5.38) ...
Preparing to unpack .../linux-image-next-sunxi_5.41_armhf.deb ...
update-initramfs: Deleting /boot/initrd.img-4.14.14-sunxi
Removing obsolete file uInitrd-4.14.14-sunxi
Unpacking linux-image-next-sunxi (5.41) over (5.38) ...
Setting up linux-dtb-next-sunxi (5.41) ...
Setting up tzdata (2018c-0+deb9u1) ...
locale: Cannot set LC_ALL to default locale: No such file or directory

Current default time zone: 'Etc/UTC'
Local time is now: Thu Mar 1 06:34:05 UTC 2018.
Universal Time is now: Thu Mar 1 06:34:05 UTC 2018.
Run 'dpkg-reconfigure tzdata' if you wish to change it.

Setting up linux-image-next-sunxi (5.41) ...
update-initramfs: Generating /boot/initrd.img-4.14.18-sunxi
update-initramfs: Converting to u-boot format
nanopineo@nanopineo:~$

5. Kernel Double-checking

Finally, we double-check the system and the kernel. Note that uname still reports 4.14.14-sunxi below, because the newly installed 4.14.18 kernel only takes effect after the next reboot.

nanopineo@nanopineo:~$ uname -r
4.14.14-sunxi
nanopineo@nanopineo:~$ uname -a
Linux nanopineo 4.14.14-sunxi #1 SMP Thu Jan 25 12:20:57 CET 2018 armv7l GNU/Linux
nanopineo@nanopineo:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 9.3 (stretch)
Release: 9.3
Codename: stretch
nanopineo@nanopineo:~$

Happy new year, everybody. How time flies; it is already 2018. Longer Vision has been struggling on this tough road towards a successful enterprise. Algorithms ALONE are NOT enough for today's business; BSP (board support package) and IC design are a must for a successful business nowadays. Thus, from 2018 onwards, Longer Vision is going to balance both hardware and software, particularly ARM-based open source embedded development boards and computer vision algorithms.

In this blog, we are going to talk about how to build and flash U-Boot and the Linux kernel onto a Beaglebone and Beaglebone Black, both of which adopt the TI AM335x as their CPU. The board looks like (cited from Beaglebone Black ):

Beaglebone Black

PART A: Cross Compile U-Boot and Linux Kernel

1. Linaro GCC

Linaro is a popular platform providing high-quality code for both the Linux kernel and the GCC toolchain. Linaro's GCC toolchains of various versions can be directly found here.

Here, we download the GCC latest-6 binary under arm-linux-gnueabihf.

$ wget -c https://releases.linaro.org/components/toolchain/binaries/latest-6/arm-linux-gnueabihf/gcc-linaro-6.4.1-2017.11-x86_64_arm-linux-gnueabihf.tar.xz
$ tar xf gcc-linaro-6.4.1-2017.11-x86_64_arm-linux-gnueabihf.tar.xz
$ export CC=`pwd`/gcc-linaro-6.4.1-2017.11-x86_64_arm-linux-gnueabihf/bin/arm-linux-gnueabihf-

Test the cross compiler:

$ ${CC}gcc --version
arm-linux-gnueabihf-gcc (Linaro GCC 6.4-2017.11) 6.4.1 20171012
Copyright (C) 2017 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

2. U-Boot

U-Boot is a universal boot loader. For Beaglebone and Beaglebone Black, there are two patches which have been maintained by eewiki Linux on ARM on GitHub. We first download U-Boot and check out the latest release as follows:

$ git clone https://github.com/u-boot/u-boot
$ cd u-boot/
$ git checkout v2018.01

Then check out the latest u-boot-patches and patch two files under the latest release v2018.01:

$ cd ../
$ git clone https://github.com/eewiki/u-boot-patches.git
$ cd ./u-boot-patches/v2018.01
$ cp 0001-am335x_evm-uEnv.txt-bootz-n-fixes.patch ../../u-boot/
$ cp 0002-U-Boot-BeagleBone-Cape-Manager.patch ../../u-boot/
$ cd ../../u-boot
$ patch -p1 < 0001-am335x_evm-uEnv.txt-bootz-n-fixes.patch
$ patch -p1 < 0002-U-Boot-BeagleBone-Cape-Manager.patch

Finally, we configure and build U-Boot for Beaglebone and Beaglebone Black as follows:

$ make ARCH=arm CROSS_COMPILE=${CC} distclean
$ make ARCH=arm CROSS_COMPILE=${CC} am335x_evm_defconfig
$ make ARCH=arm CROSS_COMPILE=${CC}

3. Linux Kernel

According to Robert Nelson's summary at eewiki, there are two ways to build a kernel for the Beaglebone and Beaglebone Black:
- Mainline
- TI BSP

Their differences are:
- bb-kernel: based on mainline, no-smp, optimized for AM335x devices.
- ti-linux-kernel-dev: based on TI's git tree, smp, optimized for AM335x/AM43xx/AM57xx devices.

Here, we are going to use Mainline to build our own Linux Kernel. We FIRST check out Robert Nelson's bb-kernel.

$ git clone https://github.com/RobertCNelson/bb-kernel
$ cd bb-kernel/
$ git checkout origin/am33x-v4.14 -b am33x-v4.14

We then check out the stable Linux kernel tree, which can also be tracked on GitHub (Torvalds Linux). This will take you quite a while.

$ git clone https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git

Now, it's time to modify some of the configurations in the .sh files under the checked-out branch am33x-v4.14.

1) version.sh

Here, I'm using gcc_linaro_gnueabihf_6 instead of gcc_linaro_gnueabihf_7.

toolchain="gcc_linaro_gnueabihf_6"
#toolchain="gcc_linaro_gnueabihf_7"

2) system.sh

A file named system.sh.sample has been provided for us to configure accordingly. We FIRST do a copy.

$ cp system.sh.sample system.sh

Then, we manually re-specify two variables, CC and LINUX_GIT, pointing respectively to the directory containing the Linaro GCC compiler downloaded above and to the linux-stable tree checked out above. We also specify MMC for the TF/SD card as follows:

CC=.../gcc-linaro-6.4.1-2017.11-x86_64_arm-linux-gnueabihf/bin/arm-linux-gnueabihf-
LINUX_GIT=.../linux-stable
MMC=/dev/mmcblk0

After the above configurations, we start building the Linux Kernel using the following command line. This will probably take you one hour.

$ ./build_kernel.sh

When you see this Kernel Configuration page,

Kernel Configuration

Exit right away without any modification. And you will see:

After Kernel Configuration

After a while, you will see Linux Kernel has been successfully built:

The Built Kernel

4. Root File System

We then download the latest Debian root file system (for now, Debian 9.3) from Robert Nelson's minimal operating system images. Three commands are to be executed in a row for downloading, verifying, and extraction.

$ wget -c https://rcn-ee.com/rootfs/eewiki/minfs/debian-9.3-minimal-armhf-2017-12-09.tar.xz
$ sha256sum debian-9.3-minimal-armhf-2017-12-09.tar.xz
5120fcfb8ff8af013737fae52dc0a7ecc2f52563a9aa8f5aa288aff0f3943d61 debian-9.3-minimal-armhf-2017-12-09.tar.xz
$ tar xf debian-9.3-minimal-armhf-2017-12-09.tar.xz

By now, most components for booting have been downloaded and built. It is time for us to flash our own OS onto an SD card.

PART B: Install the Linux OS onto SD Card

5. Setup microSD card

1) TF Card Preparation

We first check which block device we are going to install our built system onto, using the command lsblk.

$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdd 8:48 0 931.5G 0 disk
└─sdd1 8:49 0 931.5G 0 part /media/jiapei/Data
sdb 8:16 0 238.5G 0 disk
├─sdb4 8:20 0 900M 0 part
├─sdb2 8:18 0 128M 0 part
├─sdb3 8:19 0 237.2G 0 part /media/jiapei/Win10
└─sdb1 8:17 0 300M 0 part /boot/efi
sr0 11:0 1 1024M 0 rom
sdc 8:32 0 238.5G 0 disk
├─sdc2 8:34 0 30.5G 0 part [SWAP]
├─sdc3 8:35 0 207.5G 0 part /
└─sdc1 8:33 0 512M 0 part
mmcblk0 179:0 0 15G 0 disk

We then define a shell variable DISK to specify the TF card device.

$ export DISK=/dev/mmcblk0

Afterwards, erase partition table/labels on TF card by:

$ sudo dd if=/dev/zero of=${DISK} bs=1M count=10

2) Bootloader installation

Now, we install the built U-Boot onto the SD card:

$ sudo dd if=./u-boot/MLO of=${DISK} count=1 seek=1 bs=128k
$ sudo dd if=./u-boot/u-boot.img of=${DISK} count=2 seek=1 bs=384k

3) Partition Preparation

We FIRST check which version of sfdisk we have. In my case, it is version 2.27.1.

$ sudo sfdisk --version
sfdisk from util-linux 2.27.1

Then we create the partition layout by:

$ sudo sfdisk ${DISK} <<-__EOF__
4M,,L,*
__EOF__

Afterwards, we need to format the created partition with mkfs.ext4. We also check which version of mkfs.ext4 we have. In my case, it is version 1.42.13.

$ mkfs.ext4 -V
mke2fs 1.42.13 (17-May-2015)
Using EXT2FS Library version 1.42.13

Then, we format the partition by:

$ sudo mkfs.ext4 -L rootfs ${DISK}p1

After formatting, the TF card can either be mounted automatically or mounted manually with the following commands:

$ sudo mkdir -p /media/rootfs/
$ sudo mount ${DISK}p1 /media/rootfs/

4) Backup Bootloader

$ sudo mkdir -p /media/rootfs/opt/backup/uboot/
$ sudo cp -v ./u-boot/MLO /media/rootfs/opt/backup/uboot/
$ sudo cp -v ./u-boot/u-boot.img /media/rootfs/opt/backup/uboot/

5) Dealing with old Bootloader in eMMC (Optional)

If the old bootloader in eMMC is to be kept, please copy the following file uEnv.txt to /media/rootfs/.

##This will work with: Angstrom's 2013.06.20 u-boot.

loadaddr=0x82000000
fdtaddr=0x88000000
rdaddr=0x88080000

initrd_high=0xffffffff
fdt_high=0xffffffff

#for single partitions:
mmcroot=/dev/mmcblk0p1

loadximage=load mmc 0:1 ${loadaddr} /boot/vmlinuz-${uname_r}
loadxfdt=load mmc 0:1 ${fdtaddr} /boot/dtbs/${uname_r}/${fdtfile}
loadxrd=load mmc 0:1 ${rdaddr} /boot/initrd.img-${uname_r}; setenv rdsize ${filesize}
loaduEnvtxt=load mmc 0:1 ${loadaddr} /boot/uEnv.txt ; env import -t ${loadaddr} ${filesize};
loadall=run loaduEnvtxt; run loadximage; run loadxfdt;

mmcargs=setenv bootargs console=tty0 console=${console} ${optargs} ${cape_disable} ${cape_enable} root=${mmcroot} rootfstype=${mmcrootfstype} ${cmdline}

uenvcmd=run loadall; run mmcargs; bootz ${loadaddr} - ${fdtaddr};
sudo cp -v ./uEnv.txt /media/rootfs/

6. Kernel Installation

1) Export MACRO kernel_version

Under the folder bb-kernel, a file named kernel_version was generated while building the Linux kernel. In my case, I was building the NEWEST kernel 4.14.13-bone12. For convenience, we export a new environment variable kernel_version.

$ cat kernel_version 
4.14.13-bone12
$ export kernel_version=4.14.13-bone12

2) Extract Debian Root File System onto TF Card

$ sudo tar xfvp ./debian-9.3-minimal-armhf-2017-12-09/armhf-rootfs-debian-stretch.tar -C /media/rootfs/
$ sync
$ sudo chown root:root /media/rootfs/
$ sudo chmod 755 /media/rootfs/

3) Set uname_r in /boot/uEnv.txt

$ sudo sh -c "echo 'uname_r=${kernel_version}' >> /media/rootfs/boot/uEnv.txt"

4) Copy Kernel Image

$ sudo cp -v ./bb-kernel/deploy/${kernel_version}.zImage /media/rootfs/boot/vmlinuz-${kernel_version}

5) Copy Kernel Device Tree Binaries

$ sudo mkdir -p /media/rootfs/boot/dtbs/${kernel_version}/
$ sudo tar xfv ./bb-kernel/deploy/${kernel_version}-dtbs.tar.gz -C /media/rootfs/boot/dtbs/${kernel_version}/

6) Copy Kernel Modules

$ sudo tar xfv ./bb-kernel/deploy/${kernel_version}-modules.tar.gz -C /media/rootfs/

7) Set File Systems Table (/etc/fstab)

$ sudo sh -c "echo '/dev/mmcblk0p1  /  auto  errors=remount-ro  0  1' >> /media/rootfs/etc/fstab"

8) Networking

To enable the Internet on the FIRST boot, we need to do the network configuration:

$ sudo nano /media/rootfs/etc/network/interfaces

Add the following content:

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp

In order to use a shared SD card with multiple BeagleBones, and always enable the Ethernet interface as eth0, we need to add a particular udev rule as:

$ sudo nano /media/rootfs/etc/udev/rules.d/70-persistent-net.rules

Add the following content:

# BeagleBone: net device ()
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"

9) Remove TF Card

$ sync
$ sudo umount /media/rootfs

PART C: Configurations after Booting from the SD Card

Insert the TF card into a Beaglebone Black, connect the BBB with an Ethernet cable, and connect a Micro HDMI cable on demand. After the on-board LEDs flash, you should be able to find the IP address of the BBB via your router. All our future jobs are to be done via the ssh command through this IP address. My personal preference is to set a static IP address for this particular BBB in the router settings.

Preparation

By reading two of our previous blogs Camera Calibration Using a Chessboard and Camera Posture Estimation Using Circle Grid Pattern, it wouldn’t be hard to replace the chessboard by a circle grid to calculate camera calibration parameters.

Coding

Our code can be found at OpenCV Examples.

First of all

As mentioned in Camera Calibration Using a Chessboard, for intrinsic parameter estimation, namely camera calibration, there is NO need to measure the circle unit size. And the circle grid to be adopted is exactly the same one as used in Camera Posture Estimation Using Circle Grid Pattern:

asymmetric_circle_grid

Secondly

Required packages need to be imported.

import numpy as np
import cv2
import yaml

Thirdly

Some initialization work needs to be done, including: 1) defining the termination criteria used when refining the corner locations to sub-pixel accuracy later on; 2) setting the blob detection parameters; 3) initializing the object point coordinates.

# termination criteria
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

########################################Blob Detector##############################################

# Setup SimpleBlobDetector parameters.
blobParams = cv2.SimpleBlobDetector_Params()

# Change thresholds
blobParams.minThreshold = 8
blobParams.maxThreshold = 255

# Filter by Area.
blobParams.filterByArea = True
blobParams.minArea = 64 # minArea may be adjusted to suit for your experiment
blobParams.maxArea = 2500 # maxArea may be adjusted to suit for your experiment

# Filter by Circularity
blobParams.filterByCircularity = True
blobParams.minCircularity = 0.1

# Filter by Convexity
blobParams.filterByConvexity = True
blobParams.minConvexity = 0.87

# Filter by Inertia
blobParams.filterByInertia = True
blobParams.minInertiaRatio = 0.01

# Create a detector with the parameters
blobDetector = cv2.SimpleBlobDetector_create(blobParams)

###################################################################################################

###################################################################################################

# Original blob coordinates, supposing all blobs are of z-coordinates 0
# And, the distance between every two neighbour blob circle centers is 72 centimetres
# In fact, any number can be used to replace 72.
# Namely, the real size of the circle is pointless while calculating camera calibration parameters.
objp = np.zeros((44, 3), np.float32)
objp[0] = (0 , 0 , 0)
objp[1] = (0 , 72 , 0)
objp[2] = (0 , 144, 0)
objp[3] = (0 , 216, 0)
objp[4] = (36 , 36 , 0)
objp[5] = (36 , 108, 0)
objp[6] = (36 , 180, 0)
objp[7] = (36 , 252, 0)
objp[8] = (72 , 0 , 0)
objp[9] = (72 , 72 , 0)
objp[10] = (72 , 144, 0)
objp[11] = (72 , 216, 0)
objp[12] = (108, 36, 0)
objp[13] = (108, 108, 0)
objp[14] = (108, 180, 0)
objp[15] = (108, 252, 0)
objp[16] = (144, 0 , 0)
objp[17] = (144, 72 , 0)
objp[18] = (144, 144, 0)
objp[19] = (144, 216, 0)
objp[20] = (180, 36 , 0)
objp[21] = (180, 108, 0)
objp[22] = (180, 180, 0)
objp[23] = (180, 252, 0)
objp[24] = (216, 0 , 0)
objp[25] = (216, 72 , 0)
objp[26] = (216, 144, 0)
objp[27] = (216, 216, 0)
objp[28] = (252, 36 , 0)
objp[29] = (252, 108, 0)
objp[30] = (252, 180, 0)
objp[31] = (252, 252, 0)
objp[32] = (288, 0 , 0)
objp[33] = (288, 72 , 0)
objp[34] = (288, 144, 0)
objp[35] = (288, 216, 0)
objp[36] = (324, 36 , 0)
objp[37] = (324, 108, 0)
objp[38] = (324, 180, 0)
objp[39] = (324, 252, 0)
objp[40] = (360, 0 , 0)
objp[41] = (360, 72 , 0)
objp[42] = (360, 144, 0)
objp[43] = (360, 216, 0)
###################################################################################################

# Arrays to store object points and image points from all the images.
objpoints = [] # 3d point in real world space
imgpoints = [] # 2d points in image plane.
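
By the way, the 44 coordinates above follow a regular pattern, so they do not have to be typed by hand. A short sketch that generates the same table (assuming the same 4x11 asymmetric layout and the 36/72-unit spacing used above):

objp = np.zeros((44, 3), np.float32)
for i in range(44):
    row = i // 4                                        # 11 rows of 4 circles each
    objp[i, 0] = 36 * row                               # x: each row shifts by 36
    objp[i, 1] = 72 * (i % 4) + (36 if row % 2 else 0)  # y: odd rows are offset by 36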

Fourthly

After localizing 10 frames (10 can be changed to any positive integer you wish) of the 2D circle grid, the camera matrix and distortion coefficients can be calculated by invoking the function calibrateCamera. Here, we are testing on a real-time camera stream. Similar to Camera Posture Estimation Using Circle Grid Pattern, the trick is to do blobDetector.detect() and draw the detected blobs with cv2.drawKeypoints() before invoking cv2.findCirclesGrid(), so that the entire grid is easier to find.

cap = cv2.VideoCapture(0)
found = 0
while(found < 10):  # Here, 10 can be changed to whatever number you like to choose
    ret, img = cap.read() # Capture frame-by-frame
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    keypoints = blobDetector.detect(gray) # Detect blobs.

    # Draw detected blobs as green circles. This helps cv2.findCirclesGrid() .
    im_with_keypoints = cv2.drawKeypoints(img, keypoints, np.array([]), (0,255,0), cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
    im_with_keypoints_gray = cv2.cvtColor(im_with_keypoints, cv2.COLOR_BGR2GRAY)
    ret, corners = cv2.findCirclesGrid(im_with_keypoints, (4,11), None, flags = cv2.CALIB_CB_ASYMMETRIC_GRID)   # Find the circle grid

    if ret == True:
        objpoints.append(objp)  # Certainly, every loop objp is the same, in 3D.

        corners2 = cv2.cornerSubPix(im_with_keypoints_gray, corners, (11,11), (-1,-1), criteria)    # Refines the corner locations.
        imgpoints.append(corners2)

        # Draw and display the corners.
        im_with_keypoints = cv2.drawChessboardCorners(img, (4,11), corners2, ret)
        found += 1

    cv2.imshow("img", im_with_keypoints) # display
    cv2.waitKey(2)

# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()

ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)
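
As an optional sanity check (not in the original post), the calibration quality can be quantified by re-projecting the object points with the estimated parameters and measuring the average error; a value well below one pixel usually indicates a good calibration:

mean_error = 0
for i in range(len(objpoints)):
    projected, _ = cv2.projectPoints(objpoints[i], rvecs[i], tvecs[i], mtx, dist)
    mean_error += cv2.norm(imgpoints[i], projected, cv2.NORM_L2) / len(projected)
print("mean re-projection error:", mean_error / len(objpoints))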

Finally

Write the calculated calibration parameters into a yaml file. Here, it is a bit tricky. Please bear in mind that you MUST call function tolist() to transform a numpy array to a list.

# It's very important to transform the matrix to list.
data = {'camera_matrix': np.asarray(mtx).tolist(), 'dist_coeff': np.asarray(dist).tolist()}
with open("calibration.yaml", "w") as f:
    yaml.dump(data, f)

Additionally

You may use the following piece of code to load the calibration parameters from file “calibration.yaml”.

with open('calibration.yaml') as f:
    loadeddict = yaml.load(f)
mtxloaded = loadeddict.get('camera_matrix')
distloaded = loadeddict.get('dist_coeff')
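
Note that yaml.load() returns plain Python lists. If you intend to feed these values back into OpenCV functions, you may want to convert them into numpy arrays first, for example:

camera_matrix = np.asarray(mtxloaded)
dist_coeffs = np.asarray(distloaded)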

Preparation

Traditionally, a camera is calibrated using a chessboard. Existing documentations are already out there and have discussed camera calibration in detail, for example, OpenCV-Python Tutorials.

Coding

Our code can be found at OpenCV Examples.

First of all

Unlike estimating camera postures, which deals with the extrinsic parameters, camera calibration calculates the intrinsic parameters. In such a case, there is NO need for us to measure the cell size of the chessboard. Anyway, the adopted chessboard is just an ordinary chessboard, as shown:

pattern_chessboard

Secondly

Required packages need to be imported.

import numpy as np
import cv2
import yaml

Thirdly

Some initialization work needs to be done, including: 1) defining the termination criteria used when refining the corner locations to sub-pixel accuracy later on; 2) initializing the object point coordinates.

# termination criteria
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)
objp = np.zeros((6*7,3), np.float32)
objp[:,:2] = np.mgrid[0:7,0:6].T.reshape(-1,2)

# Arrays to store object points and image points from all the images.
objpoints = [] # 3d point in real world space
imgpoints = [] # 2d points in image plane.

Fourthly

After localizing 10 frames (10 can be changed to any positive integer as you wish) of a grid of 2D chessboard cell corners, camera matrix and distortion coefficients can be calculated by invoking the function calibrateCamera. Here, we are testing on a real-time camera stream.

cap = cv2.VideoCapture(0)
found = 0
while(found < 10):  # Here, 10 can be changed to whatever number you like to choose
    ret, img = cap.read() # Capture frame-by-frame
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Find the chess board corners
    ret, corners = cv2.findChessboardCorners(gray, (7,6), None)

    # If found, add object points, image points (after refining them)
    if ret == True:
        objpoints.append(objp)  # Certainly, every loop objp is the same, in 3D.
        corners2 = cv2.cornerSubPix(gray, corners, (11,11), (-1,-1), criteria)
        imgpoints.append(corners2)

        # Draw and display the corners
        img = cv2.drawChessboardCorners(img, (7,6), corners2, ret)
        found += 1

    cv2.imshow('img', img)
    cv2.waitKey(10)

# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()

ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)
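
As a quick visual check (a sketch, not in the original code; it assumes img still holds the last captured frame), the estimated parameters can be used to undistort an image right away:

h, w = img.shape[:2]
newcameramtx, roi = cv2.getOptimalNewCameraMatrix(mtx, dist, (w, h), 1, (w, h))
undistorted = cv2.undistort(img, mtx, dist, None, newcameramtx)
cv2.imwrite("calibresult.png", undistorted)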

Finally

Write the calculated calibration parameters into a yaml file. Here, it is a bit tricky. Please bear in mind that you MUST call function tolist() to transform a numpy array to a list.

# It's very important to transform the matrix to list.

data = {'camera_matrix': np.asarray(mtx).tolist(), 'dist_coeff': np.asarray(dist).tolist()}

with open("calibration.yaml", "w") as f:
    yaml.dump(data, f)

Additionally

You may use the following piece of code to load the calibration parameters from file “calibration.yaml”.

with open('calibration.yaml') as f:
    loadeddict = yaml.load(f)

mtxloaded = loadeddict.get('camera_matrix')
distloaded = loadeddict.get('dist_coeff')

Preparation

Open a bash terminal, and type in the following commands:

$ python
Python 3.5.2 (default, Nov 17 2016, 17:05:23)
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import cv2
>>> help (cv2.aruco)

Then, you will be able to see all the doc contents for cv2.aruco:

Help on module cv2.aruco in cv2:

NAME
cv2.aruco

FUNCTIONS
Board_create(...)

Board_create(objPoints, dictionary, ids) -> retval

CharucoBoard_create(...)

CharucoBoard_create(squaresX, squaresY, squareLength, markerLength, dictionary) -> retval

DetectorParameters_create(...)

DetectorParameters_create() -> retval

Dictionary_create(...)

Dictionary_create(nMarkers, markerSize) -> retval

Dictionary_create_from(...)

Dictionary_create_from(nMarkers, markerSize, baseDictionary) -> retval

Dictionary_get(...)

Dictionary_get(dict) -> retval

GridBoard_create(...)

GridBoard_create(markersX, markersY, markerLength, markerSeparation, dictionary[, firstMarker]) -> retval

calibrateCameraAruco(...)

calibrateCameraAruco(corners, ids, counter, board, imageSize, cameraMatrix, distCoeffs[, rvecs[, tvecs[, flags[, criteria]]]]) -> retval, cameraMatrix, distCoeffs, rvecs, tvecs

calibrateCameraArucoExtended(...)

calibrateCameraArucoExtended(corners, ids, counter, board, imageSize, cameraMatrix, distCoeffs[, rvecs[, tvecs[, stdDeviationsIntrinsics[, stdDeviationsExtrinsics[, perViewErrors[, flags[, criteria]

]]]]]]) -> retval, cameraMatrix, distCoeffs, rvecs, tvecs, stdDeviationsIntrinsics, stdDeviationsExtrinsics, perViewErrors

calibrateCameraCharuco(...)

calibrateCameraCharuco(charucoCorners, charucoIds, board, imageSize, cameraMatrix, distCoeffs[, rvecs[, tvecs[, flags[, criteria]]]]) -> retval, cameraMatrix, distCoeffs, rvecs, tvecs

calibrateCameraCharucoExtended(...)

calibrateCameraCharucoExtended(charucoCorners, charucoIds, board, imageSize, cameraMatrix, distCoeffs[, rvecs[, tvecs[, stdDeviationsIntrinsics[, stdDeviationsExtrinsics[, perViewErrors[, flags[, cr

iteria]]]]]]]) -> retval, cameraMatrix, distCoeffs, rvecs, tvecs, stdDeviationsIntrinsics, stdDeviationsExtrinsics, perViewErrors

custom_dictionary(...)

custom_dictionary(nMarkers, markerSize) -> retval

custom_dictionary_from(...)

custom_dictionary_from(nMarkers, markerSize, baseDictionary) -> retval

detectCharucoDiamond(...)

detectCharucoDiamond(image, markerCorners, markerIds, squareMarkerLengthRate[, diamondCorners[, diamondIds[, cameraMatrix[, distCoeffs]]]]) -> diamondCorners, diamondIds

detectMarkers(...)

detectMarkers(image, dictionary[, corners[, ids[, parameters[, rejectedImgPoints]]]]) -> corners, ids, rejectedImgPoints

drawAxis(...)

drawAxis(image, cameraMatrix, distCoeffs, rvec, tvec, length) -> image

drawDetectedCornersCharuco(...)

drawDetectedCornersCharuco(image, charucoCorners[, charucoIds[, cornerColor]]) -> image

drawDetectedDiamonds(...)

drawDetectedDiamonds(image, diamondCorners[, diamondIds[, borderColor]]) -> image

drawDetectedMarkers(...)

drawDetectedMarkers(image, corners[, ids[, borderColor]]) -> image

drawMarker(...)

drawMarker(dictionary, id, sidePixels[, img[, borderBits]]) -> img

drawPlanarBoard(...)

drawPlanarBoard(board, outSize[, img[, marginSize[, borderBits]]]) -> img

estimatePoseBoard(...)

estimatePoseBoard(corners, ids, board, cameraMatrix, distCoeffs[, rvec[, tvec[, useExtrinsicGuess]]]) -> retval, rvec, tvec

estimatePoseCharucoBoard(...)

estimatePoseCharucoBoard(charucoCorners, charucoIds, board, cameraMatrix, distCoeffs[, rvec[, tvec[, useExtrinsicGuess]]]) -> retval, rvec, tvec

estimatePoseSingleMarkers(...)

estimatePoseSingleMarkers(corners, markerLength, cameraMatrix, distCoeffs[, rvecs[, tvecs]]) -> rvecs, tvecs

getPredefinedDictionary(...)

getPredefinedDictionary(dict) -> retval

interpolateCornersCharuco(...)

interpolateCornersCharuco(markerCorners, markerIds, image, board[, charucoCorners[, charucoIds[, cameraMatrix[, distCoeffs[, minMarkers]]]]]) -> retval, charucoCorners, charucoIds

refineDetectedMarkers(...)

refineDetectedMarkers(image, board, detectedCorners, detectedIds, rejectedCorners[, cameraMatrix[, distCoeffs[, minRepDistance[, errorCorrectionRate[, checkAllOrders[, recoveredIdxs[, parameters]]]]]]]) -> detectedCorners, detectedIds, rejectedCorners, recoveredIdxs

DATA
DICT_4X4_100 = 1
DICT_4X4_1000 = 3
DICT_4X4_250 = 2
DICT_4X4_50 = 0
DICT_5X5_100 = 5
DICT_5X5_1000 = 7
DICT_5X5_250 = 6
DICT_5X5_50 = 4
DICT_6X6_100 = 9
DICT_6X6_1000 = 11
DICT_6X6_250 = 10
DICT_6X6_50 = 8
DICT_7X7_100 = 13
DICT_7X7_1000 = 15
DICT_7X7_250 = 14
DICT_7X7_50 = 12
DICT_ARUCO_ORIGINAL = 16

FILE
(built-in)
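
If you only need the list of predefined dictionaries rather than the full help text, a quicker way (in the same interactive session) is:

>>> sorted(name for name in dir(cv2.aruco) if name.startswith('DICT_'))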

Preparation

A widely used asymmetric circle grid pattern can be found in doc of OpenCV 2.4. Same as previous blogs, the camera needs to be calibrated beforehand. For this asymmetric circle grid example, a sequence of images (instead of a video stream) is tested.

Coding

The code can be found at OpenCV Examples.

First of all

We need to ensure cv2.so is under our system path. cv2.so is specifically for OpenCV Python.

import sys
sys.path.append('/usr/local/python/3.5')

Then, we import some packages to be used (NO ArUco).

import os
import cv2
import numpy as np

Secondly

We now load all camera calibration parameters, including: cameraMatrix, distCoeffs, etc. For example, your code might look like the following:

calibrationFile = "calibrationFileName.xml"
calibrationParams = cv2.FileStorage(calibrationFile, cv2.FILE_STORAGE_READ)
camera_matrix = calibrationParams.getNode("cameraMatrix").mat()
dist_coeffs = calibrationParams.getNode("distCoeffs").mat()

Since we are testing a calibrated fisheye camera, two extra parameters are to be loaded from the calibration file.

r = calibrationParams.getNode("R").mat()
new_camera_matrix = calibrationParams.getNode("newCameraMatrix").mat()

Afterwards, two mapping matrices are pre-calculated by calling function cv2.fisheye.initUndistortRectifyMap() as (supposing the images to be processed are of 1080P):

image_size = (1920, 1080)
map1, map2 = cv2.fisheye.initUndistortRectifyMap(camera_matrix, dist_coeffs, r, new_camera_matrix, image_size, cv2.CV_16SC2)

Thirdly

The circle pattern is to be loaded.

asymmetric_circle_grid

Here in our case, this asymmetric circle grid pattern is manually loaded as follows:

# Original blob coordinates
objectPoints = np.zeros((44, 3)) # In this asymmetric circle grid, 44 circles are adopted.
objectPoints[0] = (0 , 0 , 0)
objectPoints[1] = (0 , 72 , 0)
objectPoints[2] = (0 , 144, 0)
objectPoints[3] = (0 , 216, 0)
objectPoints[4] = (36 , 36 , 0)
objectPoints[5] = (36 , 108, 0)
objectPoints[6] = (36 , 180, 0)
objectPoints[7] = (36 , 252, 0)
objectPoints[8] = (72 , 0 , 0)
objectPoints[9] = (72 , 72 , 0)
objectPoints[10] = (72 , 144, 0)
objectPoints[11] = (72 , 216, 0)
objectPoints[12] = (108, 36, 0)
objectPoints[13] = (108, 108, 0)
objectPoints[14] = (108, 180, 0)
objectPoints[15] = (108, 252, 0)
objectPoints[16] = (144, 0 , 0)
objectPoints[17] = (144, 72 , 0)
objectPoints[18] = (144, 144, 0)
objectPoints[19] = (144, 216, 0)
objectPoints[20] = (180, 36 , 0)
objectPoints[21] = (180, 108, 0)
objectPoints[22] = (180, 180, 0)
objectPoints[23] = (180, 252, 0)
objectPoints[24] = (216, 0 , 0)
objectPoints[25] = (216, 72 , 0)
objectPoints[26] = (216, 144, 0)
objectPoints[27] = (216, 216, 0)
objectPoints[28] = (252, 36 , 0)
objectPoints[29] = (252, 108, 0)
objectPoints[30] = (252, 180, 0)
objectPoints[31] = (252, 252, 0)
objectPoints[32] = (288, 0 , 0)
objectPoints[33] = (288, 72 , 0)
objectPoints[34] = (288, 144, 0)
objectPoints[35] = (288, 216, 0)
objectPoints[36] = (324, 36 , 0)
objectPoints[37] = (324, 108, 0)
objectPoints[38] = (324, 180, 0)
objectPoints[39] = (324, 252, 0)
objectPoints[40] = (360, 0 , 0)
objectPoints[41] = (360, 72 , 0)
objectPoints[42] = (360, 144, 0)
objectPoints[43] = (360, 216, 0)

In our case, the distance between two neighbour circle centres (in the same column) is measured as 72 centimetres. Meanwhile, the axis at the origin is loaded as well, with respective lengths of 360, 240 and 120 centimetres.

axis = np.float32([[360,0,0], [0,240,0], [0,0,-120]]).reshape(-1,3)

Fourthly

Since we are going to use OpenCV's SimpleBlobDetector for the blob detection, the SimpleBlobDetector's parameters are to be created beforehand. The parameter values can be adjusted according to your own testing environment. The iteration criteria (used later by cornerSubPix to refine the detected centres) are also created at the same time.

# Setup SimpleBlobDetector parameters.
blobParams = cv2.SimpleBlobDetector_Params()

# Change thresholds
blobParams.minThreshold = 8
blobParams.maxThreshold = 255

# Filter by Area.
blobParams.filterByArea = True
blobParams.minArea = 64 # minArea may be adjusted to suit for your experiment
blobParams.maxArea = 2500 # maxArea may be adjusted to suit for your experiment

# Filter by Circularity
blobParams.filterByCircularity = True
blobParams.minCircularity = 0.1

# Filter by Convexity
blobParams.filterByConvexity = True
blobParams.minConvexity = 0.87

# Filter by Inertia
blobParams.filterByInertia = True
blobParams.minInertiaRatio = 0.01

# Create a detector with the parameters
blobDetector = cv2.SimpleBlobDetector_create(blobParams)

# Create the iteration criteria
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
###################################################################################################

Finally

Estimate camera postures. Here, we are testing a sequence of images, rather than video streams. We first list all file names in sequence.

imgDir = "imgSequence"  # Specify the image directory
imgFileNames = [os.path.join(imgDir, fn) for fn in next(os.walk(imgDir))[2]]
nbOfImgs = len(imgFileNames)
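
Note that os.walk() returns file names in arbitrary order. If the frames must be processed in their original sequence, it is worth sorting the list first, e.g.:

imgFileNames.sort()    # process frames in file-name order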

Then, we calculate the camera posture frame by frame:

for i in range(0, nbOfImgs-1):
    img = cv2.imread(imgFileNames[i], cv2.IMREAD_COLOR)
    imgRemapped = cv2.remap(img, map1, map2, cv2.INTER_LINEAR, cv2.BORDER_CONSTANT)    # for fisheye remapping
    imgRemapped_gray = cv2.cvtColor(imgRemapped, cv2.COLOR_BGR2GRAY)   # blobDetector.detect() requires gray image

    keypoints = blobDetector.detect(imgRemapped_gray)  # Detect blobs.

    # Draw detected blobs as green circles. This helps cv2.findCirclesGrid() .
    im_with_keypoints = cv2.drawKeypoints(imgRemapped, keypoints, np.array([]), (0,255,0), cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
    im_with_keypoints_gray = cv2.cvtColor(im_with_keypoints, cv2.COLOR_BGR2GRAY)
    ret, corners = cv2.findCirclesGrid(im_with_keypoints, (4,11), None, flags = cv2.CALIB_CB_ASYMMETRIC_GRID)   # Find the circle grid

    if ret == True:
        corners2 = cv2.cornerSubPix(im_with_keypoints_gray, corners, (11,11), (-1,-1), criteria)    # Refines the corner locations.

        # Draw and display the corners.
        im_with_keypoints = cv2.drawChessboardCorners(imgRemapped, (4,11), corners2, ret)

        # 3D posture
        if len(corners2) == len(objectPoints):
            retval, rvec, tvec = cv2.solvePnP(objectPoints, corners2, camera_matrix, dist_coeffs)

            if retval:
                projectedPoints, jac = cv2.projectPoints(objectPoints, rvec, tvec, camera_matrix, dist_coeffs)  # project 3D points to image plane
                projectedAxis, jacAxis = cv2.projectPoints(axis, rvec, tvec, camera_matrix, dist_coeffs)        # project axis to image plane
                for p in projectedPoints:
                    p = np.int32(p).reshape(-1,2)
                    cv2.circle(im_with_keypoints, (p[0][0], p[0][1]), 3, (0,0,255))
                origin = tuple(corners2[0].ravel())
                im_with_keypoints = cv2.line(im_with_keypoints, origin, tuple(projectedAxis[0].ravel()), (255,0,0), 2)
                im_with_keypoints = cv2.line(im_with_keypoints, origin, tuple(projectedAxis[1].ravel()), (0,255,0), 2)
                im_with_keypoints = cv2.line(im_with_keypoints, origin, tuple(projectedAxis[2].ravel()), (0,0,255), 2)

    cv2.imshow("circlegrid", im_with_keypoints) # display

    cv2.waitKey(2)

The drawn axes are just the world coordinate axes and orientations estimated from the images taken by the testing camera.
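
If you also need the camera's position in the grid's world coordinates, it can be recovered from the estimated pose (a small sketch, not in the original post; it reuses the last successful rvec and tvec from the loop above):

R, _ = cv2.Rodrigues(rvec)          # rotation vector -> 3x3 rotation matrix
camera_position = -R.T.dot(tvec)    # camera centre expressed in the grid's units
print(camera_position.ravel())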

Preparation

ChAruco is an integrated marker, which combines a chessboard with an aruco marker. The code is also very similar to the code in our previous blog aruco board.

Coding

The code can be found at OpenCV Examples. The code in the first two subsections is exactly the same as what's written in our previous blogs, so we'll omit those two subsections from now on.

First of all

Exactly the same as in previous blogs.

Secondly

Exactly the same as in previous blogs.

Thirdly

The dictionary aruco.DICT_6X6_1000 is combined with a chessboard to construct a ChArUco board. The board used in this experiment is of size 5X7, which looks like:

charuco.DICT_6X6_1000.board57
aruco_dict = aruco.Dictionary_get( aruco.DICT_6X6_1000 )

After having this aruco board marker printed, the edge lengths of this chessboard and aruco marker (displayed in the white cell of the chessboard) are to be measured and stored in two variables squareLength and markerLength, which are used to create the 5X7 grid board.

squareLength = 40   # Here, our measurement unit is centimetre.
markerLength = 30 # Here, our measurement unit is centimetre.
board = aruco.CharucoBoard_create(5, 7, squareLength, markerLength, aruco_dict)
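
In case you still need to print the board, the board object itself can render a printable image (a sketch, assuming your OpenCV build exposes CharucoBoard's draw() method; the pixel size below is arbitrary):

board_img = board.draw((600, 840))              # width x height in pixels
cv2.imwrite("charuco_board_57.png", board_img)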

Meanwhile, create aruco detector with default parameters.

arucoParams = aruco.DetectorParameters_create()

Finally

Now, let’s test on a video stream, a .mp4 file.

videoFile = "charuco_board_57.mp4"
cap = cv2.VideoCapture(videoFile)

Then, we calculate the camera posture frame by frame:

while(True):
    ret, frame = cap.read()    # Capture frame-by-frame
    if ret == True:
        frame_remapped = cv2.remap(frame, map1, map2, cv2.INTER_LINEAR, cv2.BORDER_CONSTANT)   # for fisheye remapping
        frame_remapped_gray = cv2.cvtColor(frame_remapped, cv2.COLOR_BGR2GRAY)

        corners, ids, rejectedImgPoints = aruco.detectMarkers(frame_remapped_gray, aruco_dict, parameters=arucoParams)  # First, detect markers
        aruco.refineDetectedMarkers(frame_remapped_gray, board, corners, ids, rejectedImgPoints)

        if ids is not None: # if there is at least one marker detected
            charucoretval, charucoCorners, charucoIds = aruco.interpolateCornersCharuco(corners, ids, frame_remapped_gray, board)
            im_with_charuco_board = aruco.drawDetectedCornersCharuco(frame_remapped, charucoCorners, charucoIds, (0,255,0))
            retval, rvec, tvec = aruco.estimatePoseCharucoBoard(charucoCorners, charucoIds, board, camera_matrix, dist_coeffs)  # posture estimation from a charuco board
            if retval == True:
                im_with_charuco_board = aruco.drawAxis(im_with_charuco_board, camera_matrix, dist_coeffs, rvec, tvec, 100)  # axis length 100 can be changed according to your requirement
        else:
            im_with_charuco_board = frame_remapped

        cv2.imshow("charucoboard", im_with_charuco_board)

        if cv2.waitKey(2) & 0xFF == ord('q'):
            break
    else:
        break

The drawn axes are just the world coordinate axes and orientations estimated from the images taken by the testing camera. At the end of the code, we release the video capture handle and destroy all open windows.

cap.release()   # When everything done, release the capture
cv2.destroyAllWindows()

Preparation

Today, let’s test on an aruco board, instead of a single marker or a diamond marker. Again, you need to make sure your camera has already been calibrated. In the coding section, it’s assumed that you can successfully load the camera calibration parameters.

Coding

The code can be found at OpenCV Examples.

First of all

We need to ensure cv2.so is under our system path. cv2.so is specifically for OpenCV Python.

import sys
sys.path.append('/usr/local/python/3.5')

Then, we import some packages to be used.

import os
import cv2
from cv2 import aruco
import numpy as np

Secondly

Again, we need to load all camera calibration parameters, including: cameraMatrix, distCoeffs, etc. :

calibrationFile = "calibrationFileName.xml"
calibrationParams = cv2.FileStorage(calibrationFile, cv2.FILE_STORAGE_READ)
camera_matrix = calibrationParams.getNode("cameraMatrix").mat()
dist_coeffs = calibrationParams.getNode("distCoeffs").mat()

If you are using a calibrated fisheye camera like us, two extra parameters are to be loaded from the calibration file.

r = calibrationParams.getNode("R").mat()
new_camera_matrix = calibrationParams.getNode("newCameraMatrix").mat()

Afterwards, two mapping matrices are pre-calculated by calling function cv2.fisheye.initUndistortRectifyMap() as (supposing the images to be processed are of 1080P):

image_size = (1920, 1080)
map1, map2 = cv2.fisheye.initUndistortRectifyMap(camera_matrix, dist_coeffs, r, new_camera_matrix, image_size, cv2.CV_16SC2)

Thirdly

In our test, the dictionary aruco.DICT_6X6_1000 is adopted as the unit pattern to construct a grid board. The board is of size 5X7, which looks like:

aruco.DICT_6X6_1000.board57
aruco_dict = aruco.Dictionary_get( aruco.DICT_6X6_1000 )

After having this aruco board marker printed, the edge lengths of this particular aruco marker and the distance between two neighbour markers are to be measured and stored in two variables markerLength and markerSeparation, which are used to create the 5X7 grid board.

markerLength = 40   # Here, our measurement unit is centimetre.
markerSeparation = 8 # Here, our measurement unit is centimetre.
board = aruco.GridBoard_create(5, 7, markerLength, markerSeparation, aruco_dict)
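
If you need to generate the printable board image yourself, aruco.drawPlanarBoard() (listed in the help dump earlier) can render it; a small sketch with an arbitrary output size:

board_img = aruco.drawPlanarBoard(board, (1000, 1400))  # width x height in pixels
cv2.imwrite("aruco_board_57.png", board_img)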

Meanwhile, create aruco detector with default parameters.

arucoParams = aruco.DetectorParameters_create()

Finally

Now, let’s test on a video stream, a .mp4 file. We first load the video file and initialize a video capture handle.

videoFile = "aruco_board_57.mp4"
cap = cv2.VideoCapture(videoFile)

Then, we calculate the camera posture frame by frame:

while(True):
    ret, frame = cap.read()    # Capture frame-by-frame
    if ret == True:
        frame_remapped = cv2.remap(frame, map1, map2, cv2.INTER_LINEAR, cv2.BORDER_CONSTANT)   # for fisheye remapping
        frame_remapped_gray = cv2.cvtColor(frame_remapped, cv2.COLOR_BGR2GRAY)

        corners, ids, rejectedImgPoints = aruco.detectMarkers(frame_remapped_gray, aruco_dict, parameters=arucoParams)  # First, detect markers
        aruco.refineDetectedMarkers(frame_remapped_gray, board, corners, ids, rejectedImgPoints)

        if ids is not None: # if there is at least one marker detected
            im_with_aruco_board = aruco.drawDetectedMarkers(frame_remapped, corners, ids, (0,255,0))
            retval, rvec, tvec = aruco.estimatePoseBoard(corners, ids, board, camera_matrix, dist_coeffs)   # posture estimation from the aruco board
            if retval != 0:
                im_with_aruco_board = aruco.drawAxis(im_with_aruco_board, camera_matrix, dist_coeffs, rvec, tvec, 100)  # axis length 100 can be changed according to your requirement
        else:
            im_with_aruco_board = frame_remapped

        cv2.imshow("arucoboard", im_with_aruco_board)

        if cv2.waitKey(2) & 0xFF == ord('q'):
            break
    else:
        break

The drawn axes are just the world coordinate axes and orientations estimated from the images taken by the testing camera. At the end of the code, we release the video capture handle and destroy all open windows.

cap.release()   # When everything done, release the capture
cv2.destroyAllWindows()

Preparation

Very similar to our previous post Camera Posture Estimation Using A Single aruco Marker, you need to make sure your camera has already been calibrated. In the coding section, it’s assumed that you can successfully load the camera calibration parameters.

Coding

The code can be found at OpenCV Examples.

First of all

We need to ensure cv2.so is under our system path. cv2.so is specifically for OpenCV Python.

import sys
sys.path.append('/usr/local/python/3.5')

Then, we import some packages to be used.

import os
import cv2
from cv2 import aruco
import numpy as np

Secondly

Again, we need to load all camera calibration parameters, including: cameraMatrix, distCoeffs, etc. :

calibrationFile = "calibrationFileName.xml"
calibrationParams = cv2.FileStorage(calibrationFile, cv2.FILE_STORAGE_READ)
camera_matrix = calibrationParams.getNode("cameraMatrix").mat()
dist_coeffs = calibrationParams.getNode("distCoeffs").mat()

If you are using a calibrated fisheye camera like us, two extra parameters are to be loaded from the calibration file.

r = calibrationParams.getNode("R").mat()
new_camera_matrix = calibrationParams.getNode("newCameraMatrix").mat()

Afterwards, two mapping matrices are pre-calculated by calling function cv2.fisheye.initUndistortRectifyMap() as (supposing the images to be processed are of 1080P):

image_size = (1920, 1080)
map1, map2 = cv2.fisheye.initUndistortRectifyMap(camera_matrix, dist_coeffs, r, new_camera_matrix, image_size, cv2.CV_16SC2)

Thirdly

The dictionary aruco.DICT_6X6_250 is to be loaded. Although current OpenCV provides four groups of aruco patterns, 4X4, 5X5, 6X6, 7X7, etc., it seems OpenCV Python does NOT provide a function named drawCharucoDiamond(). Therefore, we have to refer to the C++ tutorial Detection of Diamond Markers. And, we directly use this particular diamond marker in the tutorial:

aruco.DICT_6X6_250.diamond
aruco_dict = aruco.Dictionary_get( aruco.DICT_6X6_250 )

After having this aruco diamond marker printed, the edge lengths of this particular diamond marker are to be measured and stored in two variables squareLength and markerLength.

squareLength = 40   # Here, our measurement unit is centimetre.
markerLength = 25 # Here, our measurement unit is centimetre.

Meanwhile, create aruco detector with default parameters.

arucoParams = aruco.DetectorParameters_create()

Finally

This time, let’s test on a video stream, a .mp4 file. We first load the video file and initialize a video capture handle.

videoFile = "aruco_diamond.mp4"
cap = cv2.VideoCapture(videoFile)

Then, we calculate the camera posture frame by frame:

while(True):
    ret, frame = cap.read() # Capture frame-by-frame
    if ret == True:
        frame_remapped = cv2.remap(frame, map1, map2, cv2.INTER_LINEAR, cv2.BORDER_CONSTANT)   # for fisheye remapping
        frame_remapped_gray = cv2.cvtColor(frame_remapped, cv2.COLOR_BGR2GRAY)

        corners, ids, rejectedImgPoints = aruco.detectMarkers(frame_remapped_gray, aruco_dict, parameters=arucoParams)  # First, detect markers

        if ids is not None: # if there is at least one marker detected
            diamondCorners, diamondIds = aruco.detectCharucoDiamond(frame_remapped_gray, corners, ids, squareLength/markerLength)   # Second, detect diamond markers
            if len(diamondCorners) >= 1:    # if there is at least one diamond detected
                im_with_diamond = aruco.drawDetectedDiamonds(frame_remapped, diamondCorners, diamondIds, (0,255,0))
                rvec, tvec = aruco.estimatePoseSingleMarkers(diamondCorners, squareLength, camera_matrix, dist_coeffs)  # posture estimation from a diamond
                im_with_diamond = aruco.drawAxis(im_with_diamond, camera_matrix, dist_coeffs, rvec, tvec, 100)  # axis length 100 can be changed according to your requirement
        else:
            im_with_diamond = frame_remapped

        cv2.imshow("diamondLeft", im_with_diamond) # display

        if cv2.waitKey(2) & 0xFF == ord('q'):   # press 'q' to quit
            break
    else:
        break

The drawn axes are just the world coordinate axes and orientations estimated from the images taken by the testing camera. At the end of the code, we release the video capture handle and destroy all open windows.

cap.release()   # When everything done, release the capture
cv2.destroyAllWindows()

Preparation

Before start coding, you need to ensure your camera has already been calibrated. (Camera calibration is covered in our blog as well.) In the coding section, it’s assumed that you can successfully load the camera calibration parameters.

Coding

The code can be found at OpenCV Examples.

First of all

We need to ensure cv2.so is under our system path. cv2.so is specifically for OpenCV Python.

import sys
sys.path.append('/usr/local/python/3.5')

Then, we import some packages to be used.

import os
import cv2
from cv2 import aruco
import numpy as np

Secondly

We now load all camera calibration parameters, including: cameraMatrix, distCoeffs, etc. For example, your code might look like the following:

calibrationFile = "calibrationFileName.xml"
calibrationParams = cv2.FileStorage(calibrationFile, cv2.FILE_STORAGE_READ)
camera_matrix = calibrationParams.getNode("cameraMatrix").mat()
dist_coeffs = calibrationParams.getNode("distCoeffs").mat()

Since we are testing a calibrated fisheye camera, two extra parameters are to be loaded from the calibration file.

r = calibrationParams.getNode("R").mat()
new_camera_matrix = calibrationParams.getNode("newCameraMatrix").mat()

Afterwards, two mapping matrices are pre-calculated by calling function cv2.fisheye.initUndistortRectifyMap() as (supposing the images to be processed are of 1080P):

image_size = (1920, 1080)
map1, map2 = cv2.fisheye.initUndistortRectifyMap(camera_matrix, dist_coeffs, r, new_camera_matrix, image_size, cv2.CV_16SC2)

Thirdly

A dictionary is to be loaded. Current OpenCV provides four groups of aruco patterns, 4X4, 5X5, 6X6, 7X7, etc. Here, aruco.DICT_6X6_1000 is randomly selected as our example, which looks like:

aruco.DICT_6X6_1000
aruco_dict = aruco.Dictionary_get( aruco.DICT_6X6_1000 )
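
If you still need to print a marker, one can be generated directly from the dictionary with aruco.drawMarker() (shown in the help dump earlier); the marker id 23 and the 600-pixel side length below are arbitrary choices:

marker_img = aruco.drawMarker(aruco_dict, 23, 600)
cv2.imwrite("marker_6x6_1000_id23.png", marker_img)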

After having this aruco square marker printed, the edge length of this particular marker is to be measured and stored in a variable markerLength.

markerLength = 20 # Here, our measurement unit is centimetre.

Meanwhile, create aruco detector with default parameters.

arucoParams = aruco.DetectorParameters_create()

Finally

Estimate camera postures. Here, we are testing a sequence of images, rather than video streams. We first list all file names in sequence.

imgDir = "imgSequence"  # Specify the image directory
imgFileNames = [os.path.join(imgDir, fn) for fn in next(os.walk(imgDir))[2]]
nbOfImgs = len(imgFileNames)

Then, we calculate the camera posture frame by frame:

for i in range(0, nbOfImgs):
    img = cv2.imread(imgFileNames[i], cv2.IMREAD_COLOR)
    imgRemapped = cv2.remap(img, map1, map2, cv2.INTER_LINEAR, cv2.BORDER_CONSTANT)    # for fisheye remapping
    imgRemapped_gray = cv2.cvtColor(imgRemapped, cv2.COLOR_BGR2GRAY)   # aruco.detectMarkers() requires gray image
    corners, ids, rejectedImgPoints = aruco.detectMarkers(imgRemapped_gray, aruco_dict, parameters=arucoParams)    # Detect aruco
    if ids is not None: # if aruco marker detected
        rvec, tvec = aruco.estimatePoseSingleMarkers(corners, markerLength, camera_matrix, dist_coeffs)    # For a single marker
        imgWithAruco = aruco.drawDetectedMarkers(imgRemapped, corners, ids, (0,255,0))
        imgWithAruco = aruco.drawAxis(imgWithAruco, camera_matrix, dist_coeffs, rvec, tvec, 100)    # axis length 100 can be changed according to your requirement
    else:   # if aruco marker is NOT detected
        imgWithAruco = imgRemapped  # assign imgRemapped to imgWithAruco directly

    cv2.imshow("aruco", imgWithAruco)   # display

    if cv2.waitKey(2) & 0xFF == ord('q'):   # if 'q' is pressed, quit.
        break

The drawn axes are just the world coordinate axes and orientations estimated from the images taken by the testing camera.
