TensorFlow is always problematic to build from source, particularly for people like me...

1. Configuration

Linux Kernel 4.17.3 + Ubuntu 18.04 + GCC 7.3.0 + Python 3.6.5

$ uname -r
4.17.3-041703-generic
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04 LTS
Release: 18.04
Codename: bionic
$ gcc --version
gcc (Ubuntu 7.3.0-16ubuntu3) 7.3.0
Copyright (C) 2017 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
$ python --version
Python 3.6.5
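As a quick cross-check, the same environment information can be gathered from Python itself. A minimal sketch using only the standard library:

```python
# Collect the toolchain/environment versions from the standard library.
import platform

print("kernel  :", platform.release())          # e.g. 4.17.3-041703-generic
print("platform:", platform.platform())         # distro/arch summary
print("python  :", platform.python_version())   # e.g. 3.6.5
print("compiler:", platform.python_compiler())  # e.g. GCC 7.3.0
```

This is handy for pasting into bug reports when a build fails.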

2. Install Bazel

From Bazel's official website Installing Bazel on Ubuntu:

$ echo "deb [arch=amd64] http://storage.googleapis.com/bazel-apt stable jdk1.8" | sudo tee /etc/apt/sources.list.d/bazel.list
$ curl https://bazel.build/bazel-release.pub.gpg | sudo apt-key add -
$ sudo apt update && sudo apt install bazel

At the time of writing, this installs Bazel 0.15.0.

3. Let's Build TensorFlow from Source

3.1 Configuration

$ ./configure
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.google.protobuf.UnsafeUtil (file:~/.cache/bazel/_bazel_jiapei/install/ce085f519b017357185750fe457b4648/_embedded_binaries/A-server.jar) to field java.nio.Buffer.address
WARNING: Please consider reporting this to the maintainers of com.google.protobuf.UnsafeUtil
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
WARNING: --batch mode is deprecated. Please instead explicitly shut down your Bazel server using the command "bazel shutdown".
You have bazel 0.15.0 installed.
Please specify the location of python. [Default is /usr/bin/python]:


Found possible Python library paths:
/usr/local/lib/python3.6/dist-packages
/usr/lib/python3/dist-packages
Please input the desired Python library path to use. Default is [/usr/local/lib/python3.6/dist-packages]

Do you wish to build TensorFlow with jemalloc as malloc support? [Y/n]: Y
jemalloc as malloc support will be enabled for TensorFlow.

Do you wish to build TensorFlow with Google Cloud Platform support? [Y/n]: Y
Google Cloud Platform support will be enabled for TensorFlow.

Do you wish to build TensorFlow with Hadoop File System support? [Y/n]: Y
Hadoop File System support will be enabled for TensorFlow.

Do you wish to build TensorFlow with Amazon AWS Platform support? [Y/n]: Y
Amazon AWS Platform support will be enabled for TensorFlow.

Do you wish to build TensorFlow with Apache Kafka Platform support? [Y/n]: Y
Apache Kafka Platform support will be enabled for TensorFlow.

Do you wish to build TensorFlow with XLA JIT support? [y/N]: N
No XLA JIT support will be enabled for TensorFlow.

Do you wish to build TensorFlow with GDR support? [y/N]: N
No GDR support will be enabled for TensorFlow.

Do you wish to build TensorFlow with VERBS support? [y/N]: N
No VERBS support will be enabled for TensorFlow.

Do you wish to build TensorFlow with OpenCL SYCL support? [y/N]: N
No OpenCL SYCL support will be enabled for TensorFlow.

Do you wish to build TensorFlow with CUDA support? [y/N]: y
CUDA support will be enabled for TensorFlow.

Please specify the CUDA SDK version you want to use. [Leave empty to default to CUDA 9.0]: 9.2


Please specify the location where CUDA 9.2 toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:


Please specify the cuDNN version you want to use. [Leave empty to default to cuDNN 7.0]: 7.1.4


Please specify the location where cuDNN 7 library is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:


Do you wish to build TensorFlow with TensorRT support? [y/N]: N
No TensorRT support will be enabled for TensorFlow.

Please specify the NCCL version you want to use. [Leave empty to default to NCCL 1.3]:


Please specify a list of comma-separated Cuda compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.
Please note that each additional compute capability significantly increases your build time and binary size. [Default is: 5.2]


Do you want to use clang as CUDA compiler? [y/N]: N
nvcc will be used as CUDA compiler.

Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]:


Do you wish to build TensorFlow with MPI support? [y/N]: N
No MPI support will be enabled for TensorFlow.

Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native]:


Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]: N
Not configuring the WORKSPACE for Android builds.

Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See tools/bazel.rc for more details.
--config=mkl # Build with MKL support.
--config=monolithic # Config for mostly static monolithic build.
Configuration finished
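For reference, ./configure persists the answers above into a .tf_configure.bazelrc file at the repository root, which Bazel picks up on subsequent builds. A rough sketch of what it may contain for the choices above (the exact keys and entries are illustrative and vary by TensorFlow version):

```
build --action_env PYTHON_BIN_PATH="/usr/bin/python"
build --action_env PYTHON_LIB_PATH="/usr/local/lib/python3.6/dist-packages"
build --action_env TF_NEED_CUDA="1"
build --action_env TF_CUDA_VERSION="9.2"
build --action_env TF_CUDNN_VERSION="7.1.4"
build --action_env CUDA_TOOLKIT_PATH="/usr/local/cuda"
```

If you need to change an answer later, it is usually easiest to re-run ./configure rather than edit this file by hand.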

3.2 FIRST Bazel Build

Follow Installing TensorFlow from Sources on TensorFlow's official website, prepare all the required packages, and run:

bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package

You will hit several ERROR messages, which require the following modifications.

3.3 Modifications

  • File **~/.cache/bazel/_bazel_jiapei/051cd94cedc722db8c69e42ce51064b5/external/jpeg/BUILD**:
config_setting(
name = "armeabi-v7a",
- values = {"android_cpu": "armeabi-v7a"},
+ values = {"cpu": "armeabi-v7a"},
)
  • Create two symbolic links:
$ sudo ln -s /usr/local/cuda/include/crt/math_functions.hpp /usr/local/cuda/include/math_functions.hpp
$ sudo ln -s /usr/local/cuda/nvvm/libdevice/libdevice.10.bc /usr/local/cuda/nvvm/lib64/libdevice.10.bc

3.4 Bazel Build Again

bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package

It takes a while to have TensorFlow successfully built — on my machine, roughly 80 minutes, per the elapsed time in the log:

......
Target //tensorflow/tools/pip_package:build_pip_package up-to-date:
bazel-bin/tensorflow/tools/pip_package/build_pip_package
INFO: Elapsed time: 4924.451s, Critical Path: 197.91s
INFO: 8864 processes: 8864 local.
INFO: Build completed successfully, 11002 total actions

4. TensorFlow Installation

4.1 Build the pip Package

$ bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
Wed Jul 11 00:09:09 PDT 2018 : === Preparing sources in dir: /tmp/tmp.5X0zsqfxYo
/media/jiapei/Data/Downloads/machinelearning/deeplearning/tensorflow/tensorflow /media/jiapei/Data/Downloads/machinelearning/deeplearning/tensorflow/tensorflow
/media/jiapei/Data/Downloads/machinelearning/deeplearning/tensorflow/tensorflow
Wed Jul 11 00:09:35 PDT 2018 : === Building wheel
warning: no files found matching '*.dll' under directory '*'
warning: no files found matching '*.lib' under directory '*'
warning: no files found matching '*.h' under directory 'tensorflow/include/tensorflow'
warning: no files found matching '*' under directory 'tensorflow/include/Eigen'
warning: no files found matching '*.h' under directory 'tensorflow/include/google'
warning: no files found matching '*' under directory 'tensorflow/include/third_party'
warning: no files found matching '*' under directory 'tensorflow/include/unsupported'
Wed Jul 11 00:10:58 PDT 2018 : === Output wheel file is in: /tmp/tensorflow_pkg

Let's have a look at what's been built:

$ ls /tmp/tensorflow_pkg/
tensorflow-1.9.0rc0-cp36-cp36m-linux_x86_64.whl
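Before installing, it is worth confirming that the wheel's tags match your interpreter. A small sketch that parses the filename (wheel names follow the standard {name}-{version}-{python tag}-{abi tag}-{platform tag}.whl convention):

```python
import sys

# Wheel filenames encode compatibility tags:
# {distribution}-{version}-{python tag}-{abi tag}-{platform tag}.whl
wheel = "tensorflow-1.9.0rc0-cp36-cp36m-linux_x86_64.whl"
name, version, py_tag, abi_tag, plat_tag = wheel[:-len(".whl")].split("-")

# The tag the local interpreter would expect, e.g. "cp36" for CPython 3.6.
expected = "cp{}{}".format(sys.version_info[0], sys.version_info[1])

print(name, version)                  # tensorflow 1.9.0rc0
print(py_tag, "expected:", expected)  # pip will refuse a mismatched tag
```

If the python tag does not match your interpreter (here, cp36 for Python 3.6.5), pip will reject the wheel as unsupported.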

4.2 Pip Installation

Now, let's install tensorflow-1.9.0rc0-cp36-cp36m-linux_x86_64.whl.

$ pip3 install /tmp/tensorflow_pkg/tensorflow-1.9.0rc0-cp36-cp36m-linux_x86_64.whl
Processing /tmp/tensorflow_pkg/tensorflow-1.9.0rc0-cp36-cp36m-linux_x86_64.whl
Requirement already satisfied: wheel>=0.26 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.9.0rc0) (0.31.1)
Requirement already satisfied: gast>=0.2.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.9.0rc0) (0.2.0)
Requirement already satisfied: numpy>=1.13.3 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.9.0rc0) (1.14.5)
Requirement already satisfied: astor>=0.6.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.9.0rc0) (0.7.1)
Requirement already satisfied: six>=1.10.0 in ./.local/lib/python3.6/site-packages (from tensorflow==1.9.0rc0) (1.11.0)
Requirement already satisfied: grpcio>=1.8.6 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.9.0rc0) (1.13.0)
Requirement already satisfied: absl-py>=0.1.6 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.9.0rc0) (0.2.2)
Requirement already satisfied: tensorboard<1.9.0,>=1.8.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.9.0rc0) (1.8.0)
Requirement already satisfied: termcolor>=1.1.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.9.0rc0) (1.1.0)
Requirement already satisfied: setuptools<=39.1.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.9.0rc0) (39.1.0)
Requirement already satisfied: protobuf>=3.6.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.9.0rc0) (3.6.0)
Requirement already satisfied: bleach==1.5.0 in /usr/local/lib/python3.6/dist-packages (from tensorboard<1.9.0,>=1.8.0->tensorflow==1.9.0rc0) (1.5.0)
Requirement already satisfied: html5lib==0.9999999 in /usr/local/lib/python3.6/dist-packages (from tensorboard<1.9.0,>=1.8.0->tensorflow==1.9.0rc0) (0.9999999)
Requirement already satisfied: werkzeug>=0.11.10 in ./.local/lib/python3.6/site-packages (from tensorboard<1.9.0,>=1.8.0->tensorflow==1.9.0rc0) (0.14.1)
Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.6/dist-packages (from tensorboard<1.9.0,>=1.8.0->tensorflow==1.9.0rc0) (2.6.11)
Successfully installed tensorflow-1.9.0rc0

Let's test whether TensorFlow has been successfully installed.

4.3 Check Tensorflow

$ python
Python 3.6.5 (default, Apr 1 2018, 05:46:30)
[GCC 7.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> tf.__version__
'1.9.0-rc0'

5. Keras Installation

After checking out the Keras source code, we can easily install Keras with the command python setup.py install.

$ python setup.py install
running install
running bdist_egg
running egg_info
creating Keras.egg-info
writing Keras.egg-info/PKG-INFO
writing dependency_links to Keras.egg-info/dependency_links.txt
writing requirements to Keras.egg-info/requires.txt
writing top-level names to Keras.egg-info/top_level.txt
writing manifest file 'Keras.egg-info/SOURCES.txt'
reading manifest file 'Keras.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
writing manifest file 'Keras.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_py
creating build
creating build/lib
creating build/lib/docs
copying docs/autogen.py -> build/lib/docs
copying docs/__init__.py -> build/lib/docs
creating build/lib/keras
copying keras/activations.py -> build/lib/keras
copying keras/callbacks.py -> build/lib/keras
copying keras/constraints.py -> build/lib/keras
copying keras/initializers.py -> build/lib/keras
copying keras/losses.py -> build/lib/keras
copying keras/metrics.py -> build/lib/keras
copying keras/models.py -> build/lib/keras
copying keras/objectives.py -> build/lib/keras
copying keras/optimizers.py -> build/lib/keras
copying keras/regularizers.py -> build/lib/keras
copying keras/__init__.py -> build/lib/keras
creating build/lib/keras/applications
copying keras/applications/densenet.py -> build/lib/keras/applications
copying keras/applications/imagenet_utils.py -> build/lib/keras/applications
copying keras/applications/inception_resnet_v2.py -> build/lib/keras/applications
copying keras/applications/inception_v3.py -> build/lib/keras/applications
copying keras/applications/mobilenet.py -> build/lib/keras/applications
copying keras/applications/mobilenetv2.py -> build/lib/keras/applications
copying keras/applications/nasnet.py -> build/lib/keras/applications
copying keras/applications/resnet50.py -> build/lib/keras/applications
copying keras/applications/vgg16.py -> build/lib/keras/applications
copying keras/applications/vgg19.py -> build/lib/keras/applications
copying keras/applications/xception.py -> build/lib/keras/applications
copying keras/applications/__init__.py -> build/lib/keras/applications
creating build/lib/keras/backend
copying keras/backend/cntk_backend.py -> build/lib/keras/backend
copying keras/backend/common.py -> build/lib/keras/backend
copying keras/backend/tensorflow_backend.py -> build/lib/keras/backend
copying keras/backend/theano_backend.py -> build/lib/keras/backend
copying keras/backend/__init__.py -> build/lib/keras/backend
creating build/lib/keras/datasets
copying keras/datasets/boston_housing.py -> build/lib/keras/datasets
copying keras/datasets/cifar.py -> build/lib/keras/datasets
copying keras/datasets/cifar10.py -> build/lib/keras/datasets
copying keras/datasets/cifar100.py -> build/lib/keras/datasets
copying keras/datasets/fashion_mnist.py -> build/lib/keras/datasets
copying keras/datasets/imdb.py -> build/lib/keras/datasets
copying keras/datasets/mnist.py -> build/lib/keras/datasets
copying keras/datasets/reuters.py -> build/lib/keras/datasets
copying keras/datasets/__init__.py -> build/lib/keras/datasets
creating build/lib/keras/engine
copying keras/engine/base_layer.py -> build/lib/keras/engine
copying keras/engine/input_layer.py -> build/lib/keras/engine
copying keras/engine/network.py -> build/lib/keras/engine
copying keras/engine/saving.py -> build/lib/keras/engine
copying keras/engine/sequential.py -> build/lib/keras/engine
copying keras/engine/topology.py -> build/lib/keras/engine
copying keras/engine/training.py -> build/lib/keras/engine
copying keras/engine/training_arrays.py -> build/lib/keras/engine
copying keras/engine/training_generator.py -> build/lib/keras/engine
copying keras/engine/training_utils.py -> build/lib/keras/engine
copying keras/engine/__init__.py -> build/lib/keras/engine
creating build/lib/keras/layers
copying keras/layers/advanced_activations.py -> build/lib/keras/layers
copying keras/layers/convolutional.py -> build/lib/keras/layers
copying keras/layers/convolutional_recurrent.py -> build/lib/keras/layers
copying keras/layers/core.py -> build/lib/keras/layers
copying keras/layers/cudnn_recurrent.py -> build/lib/keras/layers
copying keras/layers/embeddings.py -> build/lib/keras/layers
copying keras/layers/local.py -> build/lib/keras/layers
copying keras/layers/merge.py -> build/lib/keras/layers
copying keras/layers/noise.py -> build/lib/keras/layers
copying keras/layers/normalization.py -> build/lib/keras/layers
copying keras/layers/pooling.py -> build/lib/keras/layers
copying keras/layers/recurrent.py -> build/lib/keras/layers
copying keras/layers/wrappers.py -> build/lib/keras/layers
copying keras/layers/__init__.py -> build/lib/keras/layers
creating build/lib/keras/legacy
copying keras/legacy/interfaces.py -> build/lib/keras/legacy
copying keras/legacy/layers.py -> build/lib/keras/legacy
copying keras/legacy/__init__.py -> build/lib/keras/legacy
creating build/lib/keras/preprocessing
copying keras/preprocessing/image.py -> build/lib/keras/preprocessing
copying keras/preprocessing/sequence.py -> build/lib/keras/preprocessing
copying keras/preprocessing/text.py -> build/lib/keras/preprocessing
copying keras/preprocessing/__init__.py -> build/lib/keras/preprocessing
creating build/lib/keras/utils
copying keras/utils/conv_utils.py -> build/lib/keras/utils
copying keras/utils/data_utils.py -> build/lib/keras/utils
copying keras/utils/generic_utils.py -> build/lib/keras/utils
copying keras/utils/io_utils.py -> build/lib/keras/utils
copying keras/utils/layer_utils.py -> build/lib/keras/utils
copying keras/utils/multi_gpu_utils.py -> build/lib/keras/utils
copying keras/utils/np_utils.py -> build/lib/keras/utils
copying keras/utils/test_utils.py -> build/lib/keras/utils
copying keras/utils/vis_utils.py -> build/lib/keras/utils
copying keras/utils/__init__.py -> build/lib/keras/utils
creating build/lib/keras/wrappers
copying keras/wrappers/scikit_learn.py -> build/lib/keras/wrappers
copying keras/wrappers/__init__.py -> build/lib/keras/wrappers
creating build/bdist.linux-x86_64
creating build/bdist.linux-x86_64/egg
creating build/bdist.linux-x86_64/egg/docs
copying build/lib/docs/autogen.py -> build/bdist.linux-x86_64/egg/docs
copying build/lib/docs/__init__.py -> build/bdist.linux-x86_64/egg/docs
creating build/bdist.linux-x86_64/egg/keras
copying build/lib/keras/activations.py -> build/bdist.linux-x86_64/egg/keras
creating build/bdist.linux-x86_64/egg/keras/applications
copying build/lib/keras/applications/densenet.py -> build/bdist.linux-x86_64/egg/keras/applications
copying build/lib/keras/applications/imagenet_utils.py -> build/bdist.linux-x86_64/egg/keras/applications
copying build/lib/keras/applications/inception_resnet_v2.py -> build/bdist.linux-x86_64/egg/keras/applications
copying build/lib/keras/applications/inception_v3.py -> build/bdist.linux-x86_64/egg/keras/applications
copying build/lib/keras/applications/mobilenet.py -> build/bdist.linux-x86_64/egg/keras/applications
copying build/lib/keras/applications/mobilenetv2.py -> build/bdist.linux-x86_64/egg/keras/applications
copying build/lib/keras/applications/nasnet.py -> build/bdist.linux-x86_64/egg/keras/applications
copying build/lib/keras/applications/resnet50.py -> build/bdist.linux-x86_64/egg/keras/applications
copying build/lib/keras/applications/vgg16.py -> build/bdist.linux-x86_64/egg/keras/applications
copying build/lib/keras/applications/vgg19.py -> build/bdist.linux-x86_64/egg/keras/applications
copying build/lib/keras/applications/xception.py -> build/bdist.linux-x86_64/egg/keras/applications
copying build/lib/keras/applications/__init__.py -> build/bdist.linux-x86_64/egg/keras/applications
creating build/bdist.linux-x86_64/egg/keras/backend
copying build/lib/keras/backend/cntk_backend.py -> build/bdist.linux-x86_64/egg/keras/backend
copying build/lib/keras/backend/common.py -> build/bdist.linux-x86_64/egg/keras/backend
copying build/lib/keras/backend/tensorflow_backend.py -> build/bdist.linux-x86_64/egg/keras/backend
copying build/lib/keras/backend/theano_backend.py -> build/bdist.linux-x86_64/egg/keras/backend
copying build/lib/keras/backend/__init__.py -> build/bdist.linux-x86_64/egg/keras/backend
copying build/lib/keras/callbacks.py -> build/bdist.linux-x86_64/egg/keras
copying build/lib/keras/constraints.py -> build/bdist.linux-x86_64/egg/keras
creating build/bdist.linux-x86_64/egg/keras/datasets
copying build/lib/keras/datasets/boston_housing.py -> build/bdist.linux-x86_64/egg/keras/datasets
copying build/lib/keras/datasets/cifar.py -> build/bdist.linux-x86_64/egg/keras/datasets
copying build/lib/keras/datasets/cifar10.py -> build/bdist.linux-x86_64/egg/keras/datasets
copying build/lib/keras/datasets/cifar100.py -> build/bdist.linux-x86_64/egg/keras/datasets
copying build/lib/keras/datasets/fashion_mnist.py -> build/bdist.linux-x86_64/egg/keras/datasets
copying build/lib/keras/datasets/imdb.py -> build/bdist.linux-x86_64/egg/keras/datasets
copying build/lib/keras/datasets/mnist.py -> build/bdist.linux-x86_64/egg/keras/datasets
copying build/lib/keras/datasets/reuters.py -> build/bdist.linux-x86_64/egg/keras/datasets
copying build/lib/keras/datasets/__init__.py -> build/bdist.linux-x86_64/egg/keras/datasets
creating build/bdist.linux-x86_64/egg/keras/engine
copying build/lib/keras/engine/base_layer.py -> build/bdist.linux-x86_64/egg/keras/engine
copying build/lib/keras/engine/input_layer.py -> build/bdist.linux-x86_64/egg/keras/engine
copying build/lib/keras/engine/network.py -> build/bdist.linux-x86_64/egg/keras/engine
copying build/lib/keras/engine/saving.py -> build/bdist.linux-x86_64/egg/keras/engine
copying build/lib/keras/engine/sequential.py -> build/bdist.linux-x86_64/egg/keras/engine
copying build/lib/keras/engine/topology.py -> build/bdist.linux-x86_64/egg/keras/engine
copying build/lib/keras/engine/training.py -> build/bdist.linux-x86_64/egg/keras/engine
copying build/lib/keras/engine/training_arrays.py -> build/bdist.linux-x86_64/egg/keras/engine
copying build/lib/keras/engine/training_generator.py -> build/bdist.linux-x86_64/egg/keras/engine
copying build/lib/keras/engine/training_utils.py -> build/bdist.linux-x86_64/egg/keras/engine
copying build/lib/keras/engine/__init__.py -> build/bdist.linux-x86_64/egg/keras/engine
copying build/lib/keras/initializers.py -> build/bdist.linux-x86_64/egg/keras
creating build/bdist.linux-x86_64/egg/keras/layers
copying build/lib/keras/layers/advanced_activations.py -> build/bdist.linux-x86_64/egg/keras/layers
copying build/lib/keras/layers/convolutional.py -> build/bdist.linux-x86_64/egg/keras/layers
copying build/lib/keras/layers/convolutional_recurrent.py -> build/bdist.linux-x86_64/egg/keras/layers
copying build/lib/keras/layers/core.py -> build/bdist.linux-x86_64/egg/keras/layers
copying build/lib/keras/layers/cudnn_recurrent.py -> build/bdist.linux-x86_64/egg/keras/layers
copying build/lib/keras/layers/embeddings.py -> build/bdist.linux-x86_64/egg/keras/layers
copying build/lib/keras/layers/local.py -> build/bdist.linux-x86_64/egg/keras/layers
copying build/lib/keras/layers/merge.py -> build/bdist.linux-x86_64/egg/keras/layers
copying build/lib/keras/layers/noise.py -> build/bdist.linux-x86_64/egg/keras/layers
copying build/lib/keras/layers/normalization.py -> build/bdist.linux-x86_64/egg/keras/layers
copying build/lib/keras/layers/pooling.py -> build/bdist.linux-x86_64/egg/keras/layers
copying build/lib/keras/layers/recurrent.py -> build/bdist.linux-x86_64/egg/keras/layers
copying build/lib/keras/layers/wrappers.py -> build/bdist.linux-x86_64/egg/keras/layers
copying build/lib/keras/layers/__init__.py -> build/bdist.linux-x86_64/egg/keras/layers
creating build/bdist.linux-x86_64/egg/keras/legacy
copying build/lib/keras/legacy/interfaces.py -> build/bdist.linux-x86_64/egg/keras/legacy
copying build/lib/keras/legacy/layers.py -> build/bdist.linux-x86_64/egg/keras/legacy
copying build/lib/keras/legacy/__init__.py -> build/bdist.linux-x86_64/egg/keras/legacy
copying build/lib/keras/losses.py -> build/bdist.linux-x86_64/egg/keras
copying build/lib/keras/metrics.py -> build/bdist.linux-x86_64/egg/keras
copying build/lib/keras/models.py -> build/bdist.linux-x86_64/egg/keras
copying build/lib/keras/objectives.py -> build/bdist.linux-x86_64/egg/keras
copying build/lib/keras/optimizers.py -> build/bdist.linux-x86_64/egg/keras
creating build/bdist.linux-x86_64/egg/keras/preprocessing
copying build/lib/keras/preprocessing/image.py -> build/bdist.linux-x86_64/egg/keras/preprocessing
copying build/lib/keras/preprocessing/sequence.py -> build/bdist.linux-x86_64/egg/keras/preprocessing
copying build/lib/keras/preprocessing/text.py -> build/bdist.linux-x86_64/egg/keras/preprocessing
copying build/lib/keras/preprocessing/__init__.py -> build/bdist.linux-x86_64/egg/keras/preprocessing
copying build/lib/keras/regularizers.py -> build/bdist.linux-x86_64/egg/keras
creating build/bdist.linux-x86_64/egg/keras/utils
copying build/lib/keras/utils/conv_utils.py -> build/bdist.linux-x86_64/egg/keras/utils
copying build/lib/keras/utils/data_utils.py -> build/bdist.linux-x86_64/egg/keras/utils
copying build/lib/keras/utils/generic_utils.py -> build/bdist.linux-x86_64/egg/keras/utils
copying build/lib/keras/utils/io_utils.py -> build/bdist.linux-x86_64/egg/keras/utils
copying build/lib/keras/utils/layer_utils.py -> build/bdist.linux-x86_64/egg/keras/utils
copying build/lib/keras/utils/multi_gpu_utils.py -> build/bdist.linux-x86_64/egg/keras/utils
copying build/lib/keras/utils/np_utils.py -> build/bdist.linux-x86_64/egg/keras/utils
copying build/lib/keras/utils/test_utils.py -> build/bdist.linux-x86_64/egg/keras/utils
copying build/lib/keras/utils/vis_utils.py -> build/bdist.linux-x86_64/egg/keras/utils
copying build/lib/keras/utils/__init__.py -> build/bdist.linux-x86_64/egg/keras/utils
creating build/bdist.linux-x86_64/egg/keras/wrappers
copying build/lib/keras/wrappers/scikit_learn.py -> build/bdist.linux-x86_64/egg/keras/wrappers
copying build/lib/keras/wrappers/__init__.py -> build/bdist.linux-x86_64/egg/keras/wrappers
copying build/lib/keras/__init__.py -> build/bdist.linux-x86_64/egg/keras
byte-compiling build/bdist.linux-x86_64/egg/docs/autogen.py to autogen.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/docs/__init__.py to __init__.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/activations.py to activations.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/applications/densenet.py to densenet.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/applications/imagenet_utils.py to imagenet_utils.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/applications/inception_resnet_v2.py to inception_resnet_v2.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/applications/inception_v3.py to inception_v3.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/applications/mobilenet.py to mobilenet.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/applications/mobilenetv2.py to mobilenetv2.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/applications/nasnet.py to nasnet.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/applications/resnet50.py to resnet50.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/applications/vgg16.py to vgg16.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/applications/vgg19.py to vgg19.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/applications/xception.py to xception.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/applications/__init__.py to __init__.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/backend/cntk_backend.py to cntk_backend.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/backend/common.py to common.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/backend/tensorflow_backend.py to tensorflow_backend.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/backend/theano_backend.py to theano_backend.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/backend/__init__.py to __init__.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/callbacks.py to callbacks.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/constraints.py to constraints.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/datasets/boston_housing.py to boston_housing.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/datasets/cifar.py to cifar.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/datasets/cifar10.py to cifar10.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/datasets/cifar100.py to cifar100.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/datasets/fashion_mnist.py to fashion_mnist.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/datasets/imdb.py to imdb.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/datasets/mnist.py to mnist.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/datasets/reuters.py to reuters.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/datasets/__init__.py to __init__.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/engine/base_layer.py to base_layer.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/engine/input_layer.py to input_layer.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/engine/network.py to network.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/engine/saving.py to saving.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/engine/sequential.py to sequential.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/engine/topology.py to topology.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/engine/training.py to training.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/engine/training_arrays.py to training_arrays.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/engine/training_generator.py to training_generator.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/engine/training_utils.py to training_utils.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/engine/__init__.py to __init__.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/initializers.py to initializers.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/layers/advanced_activations.py to advanced_activations.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/layers/convolutional.py to convolutional.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/layers/convolutional_recurrent.py to convolutional_recurrent.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/layers/core.py to core.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/layers/cudnn_recurrent.py to cudnn_recurrent.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/layers/embeddings.py to embeddings.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/layers/local.py to local.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/layers/merge.py to merge.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/layers/noise.py to noise.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/layers/normalization.py to normalization.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/layers/pooling.py to pooling.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/layers/recurrent.py to recurrent.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/layers/wrappers.py to wrappers.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/layers/__init__.py to __init__.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/legacy/interfaces.py to interfaces.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/legacy/layers.py to layers.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/legacy/__init__.py to __init__.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/losses.py to losses.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/metrics.py to metrics.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/models.py to models.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/objectives.py to objectives.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/optimizers.py to optimizers.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/preprocessing/image.py to image.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/preprocessing/sequence.py to sequence.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/preprocessing/text.py to text.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/preprocessing/__init__.py to __init__.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/regularizers.py to regularizers.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/utils/conv_utils.py to conv_utils.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/utils/data_utils.py to data_utils.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/utils/generic_utils.py to generic_utils.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/utils/io_utils.py to io_utils.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/utils/layer_utils.py to layer_utils.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/utils/multi_gpu_utils.py to multi_gpu_utils.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/utils/np_utils.py to np_utils.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/utils/test_utils.py to test_utils.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/utils/vis_utils.py to vis_utils.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/utils/__init__.py to __init__.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/wrappers/scikit_learn.py to scikit_learn.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/wrappers/__init__.py to __init__.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/__init__.py to __init__.cpython-36.pyc
creating build/bdist.linux-x86_64/egg/EGG-INFO
copying Keras.egg-info/PKG-INFO -> build/bdist.linux-x86_64/egg/EGG-INFO
copying Keras.egg-info/SOURCES.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying Keras.egg-info/dependency_links.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying Keras.egg-info/requires.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying Keras.egg-info/top_level.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
zip_safe flag not set; analyzing archive contents...
creating dist
creating 'dist/Keras-2.2.0-py3.6.egg' and adding 'build/bdist.linux-x86_64/egg' to it
removing 'build/bdist.linux-x86_64/egg' (and everything under it)
Processing Keras-2.2.0-py3.6.egg
Copying Keras-2.2.0-py3.6.egg to /usr/local/lib/python3.6/dist-packages
Adding Keras 2.2.0 to easy-install.pth file

Installed /usr/local/lib/python3.6/dist-packages/Keras-2.2.0-py3.6.egg
Processing dependencies for Keras==2.2.0
Searching for Keras-Preprocessing==1.0.1
Best match: Keras-Preprocessing 1.0.1
Adding Keras-Preprocessing 1.0.1 to easy-install.pth file

Using /usr/local/lib/python3.6/dist-packages
Searching for Keras-Applications==1.0.2
Best match: Keras-Applications 1.0.2
Adding Keras-Applications 1.0.2 to easy-install.pth file

Using /usr/local/lib/python3.6/dist-packages
Searching for h5py==2.8.0
Best match: h5py 2.8.0
Adding h5py 2.8.0 to easy-install.pth file

Using /usr/local/lib/python3.6/dist-packages
Searching for PyYAML==3.13
Best match: PyYAML 3.13
Adding PyYAML 3.13 to easy-install.pth file

Using /usr/local/lib/python3.6/dist-packages
Searching for six==1.11.0
Best match: six 1.11.0
Adding six 1.11.0 to easy-install.pth file

Using /usr/lib/python3/dist-packages
Searching for scipy==1.1.0
Best match: scipy 1.1.0
Adding scipy 1.1.0 to easy-install.pth file

Using /usr/local/lib/python3.6/dist-packages
Searching for numpy==1.14.5
Best match: numpy 1.14.5
Adding numpy 1.14.5 to easy-install.pth file

Using /usr/local/lib/python3.6/dist-packages
Finished processing dependencies for Keras==2.2.0

That's all for today. Python is seriously cool and handy indeed. I would still recommend PyTorch myself, but TensorFlow and Keras are clearly also very popular in North America.

I'm quite busy today, so I'll just post some videos to show the performance of... jiu bu gao su ni (Chinese for "I'm just not telling you") 😆 Click on the pictures to open an uploaded Facebook video.

1. Key Point Localization

1.1 Nobody - Yeah, it's ME, 10 YEARS ago. How time flies...

Nobody - Yeah, it's ME

1.2 FRANCK - What a Canonical Annotated Face Dataset

FRANCK

2. Orientation

01_al_pacino 02_alanis_morissette 03_anderson_cooper
01_al_pacino.mp4 02_alanis_morissette.mp4 03_anderson_cooper.mp4
04_angelina_jolie 05_bill_clinton 06_bill_gates
04_angelina_jolie.mp4 05_bill_clinton.mp4 06_bill_gates.mp4
07_gloria_estefan 08_jet_li 09_julia_roberts
07_gloria_estefan.mp4 08_jet_li.mp4 09_julia_roberts.mp4
10_noam_chomsky 11_sylvester_stallone 12_tony_blair
10_noam_chomsky.mp4 11_sylvester_stallone.mp4 12_tony_blair.mp4
13_victoria_beckham 14_vladimir_putin
13_victoria_beckham.mp4 14_vladimir_putin.mp4

Yesterday is Canada Day... ^_^ Happy Canada Day everybody...

Happy Canada Day

I just noticed the news about ImageAI today, so I tested it for fun. I don't want to talk about ImageAI at length; you can follow the author's GitHub, and it shouldn't be hard to have everything done in minutes.

1. Preparation

1.1 Prerequisite Dependencies

As described on ImageAI's GitHub, multiple Python dependencies need to be installed first:

  • Tensorflow
  • Numpy
  • SciPy
  • OpenCV
  • Pillow
  • Matplotlib
  • h5py
  • Keras

All of these packages can be installed easily with:

pip3 install PackageName

Afterwards, ImageAI can be installed by a single command:

pip3 install https://github.com/OlafenwaMoses/ImageAI/releases/download/2.0.1/imageai-2.0.1-py3-none-any.whl 

1.2 CNN Models

Two pretrained models are used, one in each of the examples below: prediction and detection.

2. Examples

2.1 Prediction

Simple examples are given at https://github.com/OlafenwaMoses/ImageAI/tree/master/imageai/Prediction.

I modified FirstPrediction.py a bit as follows:

from imageai.Prediction import ImagePrediction
import os
import numpy as np

prediction = ImagePrediction()
prediction.setModelTypeAsResNet()
prediction.setModelPath("/media/jiapei/Data/Programs/Python/ImageAI/resnet50_weights_tf_dim_ordering_tf_kernels.h5")
prediction.loadModel()

predictions, probabilities = prediction.predictImage("./01.jpg", result_count=5)
for eachPrediction, eachProbability in zip(predictions, probabilities):
    print(eachPrediction + " : " + np.format_float_positional(eachProbability, precision=3))

For the 1st image:

ImageAI Test Image 1

From bash, you will get:

$ python FirstPrediction.py 
2018-07-02 18:12:09.275412: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
convertible : 52.45954394340515
sports_car : 37.61279881000519
pickup : 3.1751133501529694
car_wheel : 1.817503571510315
minivan : 1.748703233897686

2.2 Detection

Simple examples are given at https://github.com/OlafenwaMoses/ImageAI/tree/master/imageai/Detection.

A trivial modification is also made to FirstObjectDetection.py:

from imageai.Detection import ObjectDetection
import os
import numpy as np

detector = ObjectDetection()
detector.setModelTypeAsRetinaNet()
detector.setModelPath(os.path.join("/media/jiapei/Data/Programs/Python/ImageAI/resnet50_coco_best_v2.1.0.h5"))
detector.loadModel()

input_image = "./03.jpg"
output_image = "./imageai_output_03.jpg"
detections = detector.detectObjectsFromImage(input_image, output_image)

for eachObject in detections:
    print(eachObject["name"] + " : " + np.format_float_positional(eachObject["percentage_probability"], precision=3))

2.2.1 For the 2nd image:

ImageAI Test Image 2

From bash, you will get

$ python FirstDetection.py 
Using TensorFlow backend.
2018-07-02 18:23:09.634037: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2018-07-02 18:23:11.744790: W tensorflow/core/framework/allocator.cc:101] Allocation of 68300800 exceeds 10% of system memory.
2018-07-02 18:23:11.958081: W tensorflow/core/framework/allocator.cc:101] Allocation of 68403200 exceeds 10% of system memory.
2018-07-02 18:23:12.174739: W tensorflow/core/framework/allocator.cc:101] Allocation of 68403200 exceeds 10% of system memory.
2018-07-02 18:23:12.433540: W tensorflow/core/framework/allocator.cc:101] Allocation of 68403200 exceeds 10% of system memory.
2018-07-02 18:23:12.694631: W tensorflow/core/framework/allocator.cc:101] Allocation of 68403200 exceeds 10% of system memory.
2018-07-02 18:23:16.267111: W tensorflow/core/framework/allocator.cc:101] Allocation of 64224000 exceeds 10% of system memory.
2018-07-02 18:23:16.370939: W tensorflow/core/framework/allocator.cc:101] Allocation of 64224000 exceeds 10% of system memory.
2018-07-02 18:23:16.403353: W tensorflow/core/framework/allocator.cc:101] Allocation of 67435200 exceeds 10% of system memory.
person : 55.596935749053955
person : 66.90954566001892
person : 67.96322464942932
person : 50.80411434173584
bicycle : 64.87574577331543
bicycle : 72.0929205417633
person : 80.02063035964966
person : 85.82872748374939
truck : 59.56767797470093
person : 66.69963002204895
person : 79.37889695167542
person : 64.81361389160156
bus : 65.35580158233643
bus : 97.16107249259949
bus : 68.20474863052368
truck : 67.65954494476318
truck : 77.73774266242981
bus : 69.96590495109558
truck : 69.54039335250854
car : 61.26518249511719
car : 59.965676069259644

And, under the program folder, you will get an output image:

ImageAI Output Image for Test Image 2

2.2.2 For the 3rd image:

ImageAI Test Image 3

From bash, you will get

$ python FirstDetection.py 
Using TensorFlow backend.
2018-07-02 18:25:24.919351: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
person : 53.27633619308472
person : 52.71329879760742
person : 63.67729902267456
person : 55.00321388244629
person : 74.53054189682007
person : 51.54905915260315
motorcycle : 59.057921171188354
bus : 93.79504919052124
bus : 86.21828556060791
bus : 77.143394947052
person : 59.69809293746948
car : 71.79147601127625
car : 60.15858054161072
person : 62.758803367614746
person : 58.786213397979736
person : 76.49624943733215
car : 56.977421045303345
person : 67.86248683929443
person : 50.977784395217896
person : 52.3215651512146
motorcycle : 52.81376242637634
person : 76.79281234741211
motorcycle : 74.65972304344177
person : 55.96961975097656
person : 68.15704107284546
motorcycle : 56.21282458305359
bicycle : 71.78951501846313
motorcycle : 69.68616843223572
bicycle : 91.09067916870117
motorcycle : 83.16765427589417
motorcycle : 61.57424449920654

And, under the program folder, you will get an output image:

ImageAI Output Image for Test Image 3

It has been quite a while since my VOSM was last updated. My bad for sure. But today, it is updated, and VOSM-0.3.5 is released. Just refer to the following 3 pages on GitHub:

We'll still explain a bit about how to use VOSM in the following:

1. Building

1.1 Building Commands

Currently, there are 7 types of models that can be built, i.e., 7 choices for the parameter "-t":

  • SM
  • TM
  • AM
  • IA
  • FM
  • SMLTC
  • SMNDPROFILE
$ testsmbuilding -o "./output" -a "./annotations/training/" -i "./images/training/" -s "../VOSM/shapeinfo/IMM/ShapeInfo.txt" -d "IMM" -c 1 -t "SM" -l 4 -p 0.95
$ testsmbuilding -o "./output" -a "./annotations/training/" -i "./images/training/" -s "../VOSM/shapeinfo/IMM/ShapeInfo.txt" -d "IMM" -c 1 -t "TM" -l 4 -p 0.95
$ testsmbuilding -o "./output" -a "./annotations/training/" -i "./images/training/" -s "../VOSM/shapeinfo/IMM/ShapeInfo.txt" -d "IMM" -c 1 -t "AM" -l 4 -p 0.95
$ testsmbuilding -o "./output" -a "./annotations/training/" -i "./images/training/" -s "../VOSM/shapeinfo/IMM/ShapeInfo.txt" -d "IMM" -c 1 -t "IA" -l 4 -p 0.95
$ testsmbuilding -o "./output" -a "./annotations/training/" -i "./images/training/" -s "../VOSM/shapeinfo/IMM/ShapeInfo.txt" -d "IMM" -c 1 -t "FM" -l 4 -p 0.95
$ testsmbuilding -o "./output" -a "./annotations/training/" -i "./images/training/" -s "../VOSM/shapeinfo/IMM/ShapeInfo.txt" -d "IMM" -c 1 -t "SMLTC" -l 4 -p 0.95
$ testsmbuilding -o "./output" -a "./annotations/training/" -i "./images/training/" -s "../VOSM/shapeinfo/IMM/ShapeInfo.txt" -d "IMM" -c 1 -t "SMNDPROFILE" -l 4 -p 0.95
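Since the seven commands above differ only in the "-t" value, they can be generated with a small shell loop (a sketch using the same example paths as above; the leading echo makes it a dry run, so remove it to actually execute the builds):

```shell
# Dry-run: print one testsmbuilding command per model type.
# Remove the leading "echo" to actually run the builds.
for t in SM TM AM IA FM SMLTC SMNDPROFILE; do
    echo testsmbuilding -o "./output" -a "./annotations/training/" -i "./images/training/" \
        -s "../VOSM/shapeinfo/IMM/ShapeInfo.txt" -d "IMM" -c 1 -t "$t" -l 4 -p 0.95
done
```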

1.2 Output Folders

After these 7 commands, 9 folders will be generated:

  • Point2DDistributionModel
  • ShapeModel
  • TextureModel
  • AppearanceModel
  • AAMICIA
  • AFM
  • AXM
  • ASMLTCs
  • ASMNDProfiles

1.3 Output Images

Under folder TextureModel, 3 key images are generated:

Reference.jpg edges.jpg ellipses.jpg
Reference.jpg edges.jpg ellipses.jpg

Under folder AAMICIA, another 3 key images are generated:

m_IplImageTempFace.jpg m_IplImageTempFaceX.jpg m_IplImageTempFaceY.jpg
m_IplImageTempFace.jpg - same as Reference.jpg m_IplImageTempFaceX.jpg m_IplImageTempFaceY.jpg

2. Fitting

2.1 Fitting Commands

Current VOSM supports 5 fitting methods.

  • ASM_PROFILEND
  • ASM_LTC
  • AAM_BASIC
  • AAM_CMUICIA
  • AAM_IAIA
$ testsmfitting -o "./output/" -t "ASM_PROFILEND" -i "./images/testing/" -a "./annotations/testing/" -d "IMM" -s true -r true
$ testsmfitting -o "./output/" -t "ASM_LTC" -i "./images/testing/" -a "./annotations/testing/" -d "IMM" -s true -r true
$ testsmfitting -o "./output/" -t "AAM_BASIC" -i "./images/testing/" -a "./annotations/testing/" -d "IMM" -s true -r true
$ testsmfitting -o "./output/" -t "AAM_CMUICIA" -i "./images/testing/" -a "./annotations/testing/" -d "IMM" -s true -r true
$ testsmfitting -o "./output/" -t "AAM_IAIA" -i "./images/testing/" -a "./annotations/testing/" -d "IMM" -s true -r true
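Likewise, the five fitting commands can be generated in one loop (dry run; remove the echo to actually run the fittings):

```shell
# Dry-run: print one testsmfitting command per fitting method.
# Remove the leading "echo" to actually run the fittings.
for m in ASM_PROFILEND ASM_LTC AAM_BASIC AAM_CMUICIA AAM_IAIA; do
    echo testsmfitting -o "./output/" -t "$m" -i "./images/testing/" \
        -a "./annotations/testing/" -d "IMM" -s true -r true
done
```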

2.2 Fitting Results

Let's just take ASM_PROFILEND as an example.

$ testsmfitting -o "./output/" -t "ASM_PROFILEND" -i "./images/testing/" -a "./annotations/testing/" -d "IMM" -s true -r true

All fitted images are generated under current folder, some are well fitted:

11-1m.jpg 33-4m.jpg 40-6m.jpg
11-1m.jpg 33-4m.jpg 40-6m.jpg

others are NOT well fitted:

12-3f.jpg 20-6m.jpg 23-4m.jpg
12-3f.jpg 20-6m.jpg 23-4m.jpg

2.3 Process of Fitting

The fitting process can also be recorded for each image when the parameter "-r" is enabled with -r true. Let's take a look at what's in the folder 40-6m.

00.jpg 01.jpg 02.jpg
00.jpg 01.jpg 02.jpg
03.jpg 04.jpg 05.jpg
03.jpg 04.jpg 05.jpg
06.jpg 07.jpg 08.jpg
06.jpg 07.jpg 08.jpg
09.jpg 10.jpg 11.jpg
09.jpg 10.jpg 11.jpg
12.jpg 13.jpg 14.jpg
15.jpg 16.jpg
15.jpg 16.jpg

Clearly, an image pyramid is used during the fitting process.

Downloading videos from YouTube is sometimes necessary. Some Chrome plugins, such as YouTube Downloader, can be used to download YouTube videos. Other methods can also be found in various resources, such as WikiHow.

In this blog, I'm going to cite (copy and paste) from WikiHow about how to download YouTube videos using VLC.

STEP 1: Copy the YouTube URL

Find the YouTube video that you would like to download, and copy its URL.

STEP 2: Broadcast the YouTube Video in VLC

Paste the URL under VLC Media Player->Media->Open Network Stream->Network Tab->Please enter a network URL:, and Play:

Video Broadcasting

STEP 3: Get the Real Location of the YouTube Video

Then, copy the URL under Tools->Codec Information->Codec->Location:

Current Media Information

STEP 4: Save Video As

Finally, paste the URL in Chrome:

Save Video As

and Save video as....

1. IRC and Freenode

People have been using IRC for decades.

Freenode is an IRC network used to discuss peer-directed projects.

cited from Wikipedia.

It's recommended to have some background about IRC and Freenode on Wikipedia:

2. Popular IRC Clients

Lots of IRC clients can be found on Google. Here, we just summarize some from IRC Wikipedia:

3. 7 Best IRC Clients for Linux

Please just refer to https://www.tecmint.com/best-irc-clients-for-linux/.

4. WeeChat in Linux

4.1 Installation

We can of course directly download the package from WeeChat Download and install it from source. However, installing WeeChat from the repository is recommended.

Step 1: Add the WeeChat repository

$ sudo sh -c 'echo "deb https://weechat.org/ubuntu $(lsb_release -cs) main" >> /etc/apt/sources.list.d/WeeChat.list'

Step 2: Add WeeChat repository key

$ sudo apt-key adv --keyserver keys.gnupg.net --recv-keys 11E9DE8848F2B65222AA75B8D1820DB22A11534E

Step 3: WeeChat Installation

$ sudo apt update
$ sudo apt install weechat

4.2 How to use WeeChat

Refer to WeeChat Official Documentation.

Step 1: Run WeeChat

$ weechat

will display:

WeeChat

And we FIRST type in /help

WeeChat

Step 2: Connect Freenode

/connect freenode
WeeChat

Step 3: Join a Channel

Here, we select channel OpenCV as an example:

/join #opencv
WeeChat

Step 4: Close a Channel

/close

5. Pidgin (Optional)

We're NOT going to talk about it in this blog.

DIYing a home security camera is comparatively simple using a low-cost embedded board with Linux installed. There are ONLY a few steps in total.

STEP 1: Install Motion from the Repository

$ sudo apt install motion

STEP 2: start_motion_daemon=yes

$ sudo vim /etc/default/motion

Change start_motion_daemon from no to yes.

STEP 3: stream_localhost off

$ sudo vim /etc/motion/motion.conf

Change stream_localhost on to stream_localhost off.
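The two edits in STEP 2 and STEP 3 can also be made non-interactively with sed. The substitutions are demonstrated below on sample lines; applied with sudo sed -i to /etc/default/motion and /etc/motion/motion.conf respectively, they would perform the same edits (assuming the stock option names shown above):

```shell
# The substitution for /etc/default/motion, shown on a sample line:
echo "start_motion_daemon=no" | sed 's/^start_motion_daemon=no/start_motion_daemon=yes/'
# prints: start_motion_daemon=yes

# The substitution for /etc/motion/motion.conf, shown on a sample line:
echo "stream_localhost on" | sed 's/^stream_localhost on/stream_localhost off/'
# prints: stream_localhost off
```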

STEP 4: Restart Motion Service

Run the following command:

$ sudo /etc/init.d/motion restart

STEP 5: Run Motion

$ sudo motion

STEP 6: Video Surveillance Remotely

Open up Chrome and type in the board's IP address followed by :8081 to show the captured video at 1 FPS. In my case, that's 192.168.0.86:8081, and the video looks like:

Motion Video Surveillance

Finally, I’ve got some time to write something about PyTorch, a popular deep learning tool. We suppose you have had fundamental understanding of Anaconda Python, created Anaconda virtual environment (in my case, it’s named condaenv), and had PyTorch installed successfully under this Anaconda virtual environment condaenv.

Since I’m using Visual Studio Code to test my Python code (of course, you can use whichever coding tool you like), I suppose you’ve already had your own coding tool configured. Now, you are ready to go!

In my case, I’m giving a tutorial, instead of coding by myself. Therefore, Jupyter Notebook is selected as my presentation tool. So, I’ll demonstrate everything both in .py files, as well as .ipynb files. All codes can be found at Longer Vision PyTorch_Examples. However, ONLY Jupyter Notebook presentation is given in my blogs. Therefore, I suppose you’ve already successfully installed Jupyter Notebook, as well as any other required packages under your Anaconda virtual environment condaenv.

Now, let’s pop up Jupyter Notebook server.

Anaconda Python Virtual Environment

Clearly, Anaconda comes with the NEWEST Python version, which is currently 3.6.4.

PART A: Hello PyTorch

Our very FIRST test snippet is only 6 lines long, including an empty line:

After popping up the Jupyter Notebook server and clicking Run, you will see this view:

Jupyter Notebook Hello PyTorch

Clearly, the current torch is of version 0.3.1, and torchvision is of version 0.2.0.

PART B: Convolutional Neural Network in PyTorch

1. General Concepts

1) Online Resource

We are NOT going to discuss the background details of Convolutional Neural Networks. A lot of online frameworks/courses are available for you to catch up:

Several fabulous blogs are strongly recommended, including:

2) Architecture

One picture for all (cited from Convolutional Neural Network )

Typical Convolutional Neural Network Architecture

3) Back Propagation (BP)

The ONLY CNN concept we want to emphasize here is Back Propagation, which has long been used in traditional neural networks and applies in the same way to the final Fully Connected Layers of a CNN. You are welcome to get more details from https://brilliant.org/wiki/backpropagation/.

Pre-defined Variables

Training Database
  • \(X={(\vec{x_1},\vec{y_1}),(\vec{x_2},\vec{y_2}),…,(\vec{x_N},\vec{y_N})}\): the training dataset \(X\) is composed of \(N\) pairs of training samples, where \((\vec{x_i},\vec{y_i}),1 \le i \le N\)
  • \((\vec{x_i},\vec{y_i}),1 \le i \le N\): the \(i\)th training sample pair
  • \(\vec{x_i}\): the \(i\)th input vector (can be an original image, can also be a vector of extracted features, etc.)
  • \(\vec{y_i}\): the \(i\)th desired output vector (can be a one-hot vector, can also be a scalar, which is a 1-element vector)
  • \(\hat{\vec{y_i} }\): the \(i\)th output vector from the neural network by using the \(i\)th input vector \(\vec{x_i}\)
  • \(N\): size of dataset, namely, how many training samples
  • \(w_{ij}^k\): in the neural network’s architecture, at level \(k\), the weight of the node connecting the \(i\)th input and the \(j\)th output
  • \(\theta\): a generic notation for any parameter inside the neural network, which can be viewed as any element from the set of \(w_{ij}^k\).
Evaluation Function
  • Mean squared error (MSE): \[E(X, \theta) = \frac{1}{2N} \sum_{i=1}^N \left(||\hat{\vec{y_i} } - \vec{y_i}||\right)^2\]
  • Cross entropy: \[E(X, prob) = - \frac{1}{N} \sum_{i=1}^N log_2\left({prob(\vec{y_i})}\right)\]
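As a plain-Python illustration of the two criteria above (standard library only; the function names and toy inputs are my own):

```python
import math

def mse(y_hat, y):
    # (1/2N) * sum_i ||y_hat_i - y_i||^2, with each sample a vector
    n = len(y)
    return sum(sum((a - b) ** 2 for a, b in zip(yh, yt))
               for yh, yt in zip(y_hat, y)) / (2 * n)

def cross_entropy(probs_of_true):
    # -(1/N) * sum_i log2(prob(y_i)), given each sample's probability
    # assigned to its true label
    n = len(probs_of_true)
    return -sum(math.log2(p) for p in probs_of_true) / n

print(mse([[1.0, 0.0]], [[0.0, 0.0]]))  # 0.5
print(cross_entropy([0.5, 0.25]))       # 1.5
```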

Particularly, for binary classification, logistic regression is often adopted. A logistic function is defined as:

\[f(x)=\frac{1}{(1+e^{-x})}\]

In such a case, the loss function can easily be derived as:

\[E(X,W) = - \frac{1}{N} \sum_{i=1}^N [y_i log({\hat{y_i} })+(1-y_i)log(1-\hat{y_i})]\]

where

\[y_i=0/1\]

\[\hat{y_i} \equiv g(\vec{w} \cdot \vec{x_i}) = \frac{1}{(1+e^{-\vec{w} \cdot \vec{x_i} })}\]
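A direct transcription of the logistic function and the binary cross-entropy loss above, in plain Python (an illustrative sketch; the names are my own):

```python
import math

def logistic(x):
    # f(x) = 1 / (1 + e^(-x))
    return 1.0 / (1.0 + math.exp(-x))

def binary_cross_entropy(y, y_hat):
    # E = -(1/N) * sum_i [ y_i*log(y_hat_i) + (1 - y_i)*log(1 - y_hat_i) ]
    n = len(y)
    return -sum(yi * math.log(yh) + (1 - yi) * math.log(1 - yh)
                for yi, yh in zip(y, y_hat)) / n

print(logistic(0.0))  # 0.5
print(round(binary_cross_entropy([1, 0], [0.9, 0.1]), 4))  # 0.1054
```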

Some PyTorch explanation can be found at torch.nn.CrossEntropyLoss.

BP Deduction Conclusions

Only MSE is considered here (Please refer to https://brilliant.org/wiki/backpropagation/):

\[ \frac {\partial {E(X,\theta)} } {\partial w_{ij}^k} = \frac {1}{N} \sum_{d=1}^N \frac {\partial}{\partial w_{ij}^k} \Big( \frac{1}{2} ||\hat{\vec{y_d} } - \vec{y_d}||^2 \Big) = \frac{1}{N} \sum_{d=1}^N {\frac{\partial E_d}{\partial w_{ij}^k} } \]

The weight update is then determined as:

\[ \Delta w_{ij}^k = - \alpha \frac {\partial {E(X,\theta)} } {\partial w_{ij}^k} \]
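The two formulas above can be exercised with a toy example: gradient descent on the MSE for a one-weight linear model (pure Python; the data and learning rate are made up for illustration):

```python
# Fit y_hat = w * x by repeatedly applying Delta w = -alpha * dE/dw.
X = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (x_i, y_i) pairs; true w is 2
w = 0.0
alpha = 0.1  # learning rate

for _ in range(100):
    # For E = (1/2N) * sum (w*x - y)^2, dE/dw = (1/N) * sum (w*x - y) * x
    grad = sum((w * x - y) * x for x, y in X) / len(X)
    w += -alpha * grad  # Delta w = -alpha * dE/dw

print(round(w, 3))  # converges to 2.0
```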

2. PyTorch Implementation of Canonical CNN

Hi, today I'm going to revisit a very old topic: setting up a repeater bridge using a DD-WRT router. This reminds me of my FIRST startup, Misc Vision, which used to resell Foscam home security IP cameras. That is a long story. Anyway, this blog is going to heavily cite the contents on this page.

PART A: Flash A Supported Router with DD-WRT Firmware

Before we start to tackle the problem, flashing your router with the open-source firmware DD-WRT is strongly recommended.

1. DD-WRT and OpenWRT

Two widely used open-source router firmwares have been available for quite some time: DD-WRT and OpenWRT. In fact, either of the two is fine for router firmware flashing.

2. DD-WRT Supported Routers

To check if your router is supported by DD-WRT, you need to check this page.

3. DD-WRT SL-R7202

In my case, I have a lot of such routers on hand and am selling them on Craigslist (if anybody is interested, please let me know). You can also find such products, whether continued or discontinued, on marketplaces such as Gearbest.

DD-WRT SL-R7202 looks good:

DD-WRT SL-R7202 Top
DD-WRT SL-R7202 Size

In a word, find a DD-WRT-supported router and have it successfully flashed FIRST. As a lazy man, I'm NOT going to flash anything, but will just use a router that comes with DD-WRT firmware preinstalled, namely the DD-WRT SL-R7202.

PART B: The Problem

The problem that we are going to deal with is how to connect multiple routers so that the network coverage can be expanded. For routers with dd-wrt firmware, there are multiple ways to connect routers; please refer to https://www.dd-wrt.com/wiki/index.php/Linking_Routers.

1. Linking Routers

In the following, most contents are CITED directly from Linking Routers and its extended webpages.

  • Access Point / Switch - Extend the Wireless access area using more routers, with WIRED connections between routers, or turn a WIRED port on an existing network into a Wireless Access Point. All computers will be on the same network segment, and will be able to see one another in Windows Network. This works with all devices with LAN ports, and does not require dd-wrt to be installed.

    • Wireless Access Point - Extend Wi-Fi & LAN (Requires physical(just WIRED) ethernet connection between routers)
    • Switch - Similar config as WAP, but radio disabled (accepts only WIRED connections)
DD-WRT Access Point Mode
  • Repeater / Repeater Bridge - Extend the Wireless access area using a second router WIRELESSLY connected to the primary. The secondary router must have dd-wrt installed; the primary does not need dd-wrt.

    • Repeater Bridge - A wireless repeater with DHCP & NAT disabled, clients on same subnet as host AP (primary router). That is, all computers can see one another in Windows Network.
    • Repeater - A wireless repeater with DHCP & NAT enabled, clients on different subnet from host AP (primary router). Computers connected to one router can not see computers connected to other routers in Windows Network.
    • Universal Wireless Repeater - Uses a program/script called AutoAP to keep a connection to the nearest/best host AP.
DD-WRT Repeater Bridge Mode
  • Client / Client Bridge - Connect two wired networks using a WiFi link (WIRELESS connection between two routers). The secondary router must have dd-wrt installed; the primary router does not need to have dd-wrt.

    • Client Bridged - Join two wired networks by two Wireless routers building a bridge. All computers can see one another in Windows Network.
    • Client Mode - Join two wired networks by two Wireless routers (unbridged). Computers on one wired network can not see computers on other wired network in Windows Network.
DD-WRT Client Bridge Mode
  • WDS - Extend the Wireless access area using more routers connected WIRELESSLY. WDS is a mesh network. Routers must almost always have the SAME chipset type for WDS to work, and any non dd-wrt routers must be WDS compatible. Using identical routers is best, but not always necessary if all devices have the same chipset types. (All Broadcom or all Atheros etc)

  • OLSR - Extend the Wireless access area using more routers. Extra routers do not need any wired connections to each other. Use several ISP (Internet) connections. OLSR is a mesh network.

2. Down-to-earth Case

In my case, my laptop and some of my IoT devices can easily connect to the ShawCable Modem/Router combo, which is in my living room on the 2nd floor and directly connected to the Internet. However, my R&D office is located in my garage on the 1st floor, where the dd-wrt router sits. In this case, my dd-wrt router must connect to the ShawCable Modem/Router WIRELESSLY.

The reason is explained in the section Difference between Client Bridge and Repeater Bridge: a standard wireless bridge (Client Bridge) connects wired clients to a secondary router as if they were connected to your main router with a cable. Secondary clients share the bandwidth of a wireless connection back to your main router. Of course, you can still connect clients to your main router using either a cable connection or a wireless connection.

The limitation with standard bridging is that it only allows WIRED clients to connect to your secondary router. WIRELESS clients cannot connect to a secondary router configured as a standard bridge. Repeater Bridge allows both WIRELESS AND WIRED clients to connect to the Repeater Bridge router, and through that device WIRELESSLY to a primary router. You can still use this mode if you only need to bridge wired clients; the extra wireless repeater capability comes along for free, but you are not required to use it.

Therefore, we select Repeater Bridge as our solution, with the DD-WRT SL-R7202 acting as the Repeater Bridge. The difficulty is how to set up the DD-WRT SL-R7202 so that it works as a wireless Repeater Bridge.

PART C: About Firmware (Optional)

1. Do We Need Firmware Upgrading?

Actually, before I came to this step, I had already:

  • strictly followed the configuration process written on https://www.dd-wrt.com/wiki/index.php/Repeater_Bridge, but FAILED many times;
  • afterwards, found a video solution at https://www.youtube.com/watch?v=ByB8vVGBjh4, which seemed suitable for my case. However, it finally turned out that my DD-WRT SL-R7202 router comes with firmware DD-WRT (build 13064), while the tutorial on YouTube uses a router with firmware DD-WRT (build 21061).

It seemed to me that a firmware upgrade was a must before setting up the Wireless Repeater Bridge mode. But how?

2. Reset to Factory Defaults on DD-WRT Router (The 2nd Router)

First of all, we need to reset DD-WRT. There is a black button on my DD-WRT SL-R7202, as shown:

Reset Button at DD-WRT's Back

In fact, how to reset DD-WRT deserves particular attention:

> Hold the reset button until the lights flash (10-30 sec), or use a 30-30-30 reset if appropriate for your router. (DO NOT 30-30-30 ARM routers.)

3. Visit http://192.168.1.1:

After successfully resetting the DD-WRT router to factory defaults, connect it to your host computer via a WIRED cable for better stability. Then, we switch our network connection from the FIRST router to the SECOND router, namely the dd-wrt router. Afterwards, let's visit http://192.168.1.1.

  • NOTE: My host computer's IP address on the FIRST router, namely the ShawCable Modem/Router, is statically set to 192.168.0.8 (which is able to connect to the Internet). After I switch to the dd-wrt Wifi, my host's IP is DHCP-allocated to 192.168.1.103. The gateway on ShawCable is 192.168.0.1, which differs from the default gateway on dd-wrt, 192.168.1.1.

1) FIRST Page

DD-WRT home page looks like:

DD-WRT First Login

And it's suggested that we input a new router username root and password admin. After the button Change Password is clicked, we enter the very first page of the DD-WRT Web GUI.

DD-WRT First Page
  • We can easily tell the firmware's info on the top right corner.
Firmware: DD-WRT v24-sp2 (10/10/09) std
Time: 00:01:13 up 1 min, load average: 0.12, 0.04, 0.00
WAN IP: 0.0.0.0
  • And, we can also easily tell from the bottom of this page that for now, only 1 DHCP client is connected to this router, which is just our host computer: jiapei-GT72-6QE.

2) System Info on Status Page

DD-WRT Status page tells a lot about what's in this router:

DD-WRT Status - Before Upgrading

Clearly, our router model is Buffalo WHR-G300N, and our firmware build version is 13064.

3) Upgradable?

We then visit https://www.dd-wrt.com/site/support/router-database and input WHR-G300N; two routers will be listed:

DD-WRT WHR-G300N

By clicking the FIRST one, namely v1, we get:

DD-WRT WHR-G300N v1: from 13064 to 14896

Clearly, we should be able to upgrade from our current firmware version 13064 to the newer one 14896.

With the existing firmware version 13064 installed on the SL-R7202, we can directly upgrade the firmware through web flash by downloading WHR-G300N-webflash.bin, ignoring all the TFTP and openvpn files.

4) Firmware Upgrading

Just click Administration -> Firmware Upgrade, and then choose WHR-G300N-webflash.bin.

DD-WRT Administration - Firmware Upgrade

After choosing the file and starting the upgrade, the system is automatically upgraded within several minutes.

NOTE: Do NOT touch anything during the upgrading process.

DD-WRT Upgrade Successful

After the firmware upgrade, we can easily tell some differences: on the top-left corner of the Setup page, we can see the build version is now 14896, and on the top-right corner, the current firmware info is:

Firmware: DD-WRT v24-sp2 (08/07/10) std
Time: 00:08:20 up 8 min, load average: 0.00, 0.02, 0.00
WAN IP: 0.0.0.0
DD-WRT Setup - Basic Setup

From the Status page, we can double-check that the firmware version is now 14896.

DD-WRT Status - After Upgrading

5) Firmware Upgrading is Problematic !

However, upgrading the firmware from 13064 to the newer 14896 turns out to be problematic; please refer to https://jgiam.com/2012/07/06/setting-up-a-repeater-bridge-with-dd-wrt-and-d-link-dir-600. After carefully reading that article, and based on my own later experiments, I summarize the conclusions here first:

  • NOTE:
    • KEEP using firmware version 13064, instead of 14896
    • NEVER add the Virtual Interfaces

PART D: Wireless Repeater Bridge (Same Subnet)

1. Wireless -> Basic Settings

We modify the parameters accordingly (Cited from the blog Setting up a repeater bridge with DD-WRT and D-Link DIR-600 ):

  • Physical Interface Section
    • Wireless Mode: Repeater Bridge
    • Wireless Network Mode: Must Be Exactly The Same As Primary Router
    • Wireless Network Name (SSID): Must Be Exactly The Same As Primary Router. Be Careful: DD-WRT is ONLY a 2.4G router. If your primary router has two bands, namely, 2.4G and 5G, make sure you connect to the 2.4G network.
    • Wireless Channel: Must Be Exactly The Same As Primary Router
    • Channel Width: Must Be Exactly The Same As Primary Router
    • Wireless SSID Broadcast: Enabled
    • Network Configuration: Bridged

Then, click Save without Apply; we get:

DD-WRT Wireless - Basic Settings

2. Wireless -> Wireless Security

  • Physical Interface section
    • Security Mode: Must Be WPA2 Personal, and The Primary Router Must Be Exactly The Same.
    • WPA Algorithms: Must Be AES, and The Primary Router Must Be Exactly The Same.
    • WPA Shared Key: Must Be Exactly The Same As Primary Router
    • Key Renewal Interval (in seconds): Leave as it is, normally, 3600.

Then, click Save without Apply; we get:

DD-WRT Wireless - Wireless Security

3. Setup -> Basic Setup

  • WAN Connection Type Section
    • Connection Type: Disabled
    • STP: Disabled
  • Optional Settings
    • Router Name: DD-WRT, just let it be
  • Router IP
    • IP Address: 192.168.0.2 (in my case, my Primary Router IP is 192.168.0.1, and 192.168.0.2 has NOT been used yet.)
    • Subnet Mask: 255.255.255.0
    • Gateway: 192.168.0.1 (Must Be Primary Router IP)
    • Local DNS: 192.168.0.1 (Must Be Primary Router IP)
  • WAN Port
    • Assign WAN Port to Switch: tick
  • Time Settings
    • Time Zone: set correspondingly

Then, click Save without Apply; we get:

DD-WRT Setup - Basic Setup

4. Setup -> Advanced Routing

  • Operating Mode Section
    • Operating Mode: Router

Then, click Save without Apply; we get:

DD-WRT Setup - Advanced Routing

5. Services -> Services

  • DNSMasq Section
    • DNSMasq: Disable
  • Secure Shell Section
    • SSHd: Enable
  • Telnet Section
    • Telnet: Enable

Then, click Save without Apply; we get:

DD-WRT Services - Services

6. Security -> Firewall

Note: Process in Sequence.

  • Block WAN Requests
    • Block Anonymous WAN Requests (Ping): untick
    • Filter Multicast: tick
    • Filter WAN NAT Redirection: untick
    • Filter IDENT (Port 113): untick
  • Firewall Protection
    • SPI Firewall: Disable

Then, click Save without Apply; we get:

DD-WRT Security - Firewall

7. Administration Management

  • Remote Access Section
    • Web GUI Management: Enable
    • SSH Management: Enable
    • Telnet Remote Port: Enable
    • Allow Any Remote IP: Enable

Then, click Save without Apply; we get:

DD-WRT Administration - Management

Now, it's time to Apply Settings

DD-WRT Administration - Management - Apply Settings

And finally, click Reboot Router

DD-WRT Administration - Management - Reboot Router

PART E: Double-check Repeater Bridge

1. Ping

Let's see the ping results directly:

1) PC Wired Connecting to DD-WRT (Router 2)

DD-WRT Wired Connection Between PC and Router 2, Namely, DD-WRT

2) PC Wirelessly Connecting to Router 1 - 5G

DD-WRT Wifi Connection Between PC and Primary Router 1's 5G Network

3) PC Wirelessly Connecting to Router 1 - 2.4G

DD-WRT Wifi Connection Between PC and Primary Router 1's 2.4G Network
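In each of the three connection scenarios above, the same quick checks apply: ping both routers and an outside host. A minimal sketch (`reachable` is an illustrative helper; the addresses are the ones used in this setup):

```shell
# Return success if the host answers a single ping within 2 seconds.
reachable() {
    ping -c 1 -W 2 "$1" > /dev/null 2>&1
}

# Primary router, DD-WRT bridge, and an external host, respectively.
for host in 192.168.0.1 192.168.0.2 8.8.8.8; do
    if reachable "$host"; then
        echo "$host: OK"
    else
        echo "$host: UNREACHABLE"
    fi
done
```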

2. Visit http://192.168.0.1 and http://192.168.0.2

Most of the time, in order to get the fastest Internet speed, I use Primary Router 1's 5G network. Therefore, for my final demonstration, my PC is wirelessly connected to the 5G network. And, I can successfully visit http://192.168.0.1 and http://192.168.0.2.

1) http://192.168.0.1

http://192.168.0.1

2) http://192.168.0.2

http://192.168.0.2

Today, we are going to install a runnable Armbian Linux OS and then flash the most recent supported Linux kernel onto an Orange Pi Plus 2, which uses the Allwinner H3 as its CPU. The board looks like this (cited from Orange Pi Plus 2 ):

Orange Pi Plus 2

Similar to our previous blog Install Armbian Debian Server onto NanoPi NEO, to build the mainline Linux for Orange Pi Plus 2, we use the open-source embedded Linux build framework Armbian. However, this time, we'll finally get the most recently supported Linux kernel.

PART A: Install Ubuntu Desktop Built By Armbian onto Orange Pi Plus 2

1. Download Armbian Ubuntu Desktop for Orange Pi Plus 2

We FIRST visit the website https://www.armbian.com/orange-pi-plus-2/ and click the Ubuntu desktop - legacy kernel icon; a file named Armbian_5.38_Orangepiplus_Ubuntu_xenial_default_3.4.113_desktop.7z will be downloaded automatically.

Then, we extract this .7z file by

$ 7z x Armbian_5.38_Orangepiplus_Ubuntu_xenial_default_3.4.113_desktop.7z

7-Zip (A) [64] 9.20 Copyright (c) 1999-2010 Igor Pavlov 2010-11-18
p7zip Version 9.20 (locale=en_CA.UTF-8,Utf16=on,HugeFiles=on,8 CPUs)

Processing archive: Armbian_5.38_Orangepiplus_Ubuntu_xenial_default_3.4.113_desktop.7z

Extracting Armbian_5.38_Orangepiplus_Ubuntu_xenial_default_3.4.113_desktop.img
Extracting armbian.txt
Extracting armbian.txt.asc
Extracting Armbian_5.38_Orangepiplus_Ubuntu_xenial_default_3.4.113_desktop.img.asc
Extracting sha256sum.sha

Everything is Ok

Files: 5
Size: 3061862273
Compressed: 606477386
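Since the archive ships a sha256sum.sha alongside the image, it is worth verifying the extracted image before flashing it. A sketch, run in the directory the files were extracted to (guarded so it is a no-op elsewhere):

```shell
# Check only the .img entry from the shipped checksum list;
# sha256sum -c exits non-zero if the digest does not match.
if [ -f sha256sum.sha ]; then
    grep '\.img$' sha256sum.sha | sha256sum -c -
fi
```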

2. Install Armbian Ubuntu Desktop for Orange Pi Plus 2

With the extracted image file prepared, it's time to install the Armbian Ubuntu Desktop onto our TF card. We FIRST format the TF card:

$ sudo mkfs.ext4 /dev/mmcblk0 
[sudo] password for jiapei:
mke2fs 1.42.13 (17-May-2015)
Found a dos partition table in /dev/mmcblk0
Proceed anyway? (y,n) y
Discarding device blocks: done
Creating filesystem with 7791744 4k blocks and 1949696 inodes
Filesystem UUID: 8873533f-4a66-4e8c-8633-844eaa90116d
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

Afterwards, use dd to install the downloaded Armbian Ubuntu Desktop image.

$ sudo dd bs=4M if=Armbian_5.38_Orangepiplus_Ubuntu_xenial_default_3.4.113_desktop.img of=/dev/mmcblk0 conv=fsync
[sudo] password for jiapei:
730+0 records in
730+0 records out
3061841920 bytes (3.1 GB, 2.9 GiB) copied, 381.133 s, 8.0 MB/s
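After dd finishes, it is prudent to flush the caches and double-check that the card really holds the image, by hashing the first image-sized chunk of the device. A sketch (`verify_flash` is an illustrative helper; IMG and DEV match the command above):

```shell
IMG=Armbian_5.38_Orangepiplus_Ubuntu_xenial_default_3.4.113_desktop.img
DEV=/dev/mmcblk0

verify_flash() {
    # Hash the image, then hash the same number of bytes from the device;
    # the two digests should be identical.
    img=$1; dev=$2
    bytes=$(stat -c %s "$img")
    img_sum=$(sha256sum "$img" | cut -d' ' -f1)
    dev_sum=$(head -c "$bytes" "$dev" | sha256sum | cut -d' ' -f1)
    [ "$img_sum" = "$dev_sum" ] && echo "verified" || echo "MISMATCH"
}

# Run as root so the raw device is readable, e.g.:
#   sync && verify_flash "$IMG" "$DEV"
```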

3. Boot Into Orange Pi Plus 2

We now unplug the TF card from the host and insert it into the Orange Pi Plus 2 board; Armbian Ubuntu Desktop boots successfully. The default username and password are root and 1234 respectively. And you will notice that

You are required to change your password immediately (root enforced)
Changing password for root.
(current) UNIX password:
Enter new UNIX password:
Retype new UNIX password:
...

And, for the FIRST boot, we'll be asked to create a NEW user besides root:

Creating a new user account. Press <Ctrl-C> to abort
Desktop environment will not be enabled if you abort the new user creation

Please provide a username (eg. your forename): orangepiplus2
Trying to add user orangepiplus2
Adding user 'orangepiplus2' ...
Adding new group 'orangepiplus2' (1000) ...
Adding new user 'orangepiplus2' (1000) with group 'orangepiplus2' ...
Creating home directory '/home/orangepiplus2' ...
Copying files from '/etc/skel' ...
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
Changing the user information for orangepiplus2
Enter the new value, or press ENTER for the default
Full Name []:
Room Number []:
Work Phone []:
Home Phone []:
Other []:
Is the information correct? [Y/n] Y
...

Then, Orange Pi Plus 2 will boot into GUI as follows.

Armbian Ubuntu Desktop Overview

We are now able to see that the kernel version is 3.4.113, which is a very old Linux kernel.

Armbian Kernel Version

We then update/upgrade all upgradable packages and reboot the system.

Armbian Apt Update
Armbian Apt Upgrade

After the upgrade finishes, Ubuntu has been successfully upgraded from 16.04.3 to 16.04.4, but the Linux kernel is still version 3.4.113.

Armbian All Upgraded

PART B: Build The Newest Armbian U-Boot and Linux Kernel for Orange Pi Plus 2

As shown in PART A, the current Linux kernel on our Orange Pi Plus 2 is an old version, 3.4.113. Are we able to upgrade the kernel to the most recent one? The ANSWER is of course YES.

1. Download Armbian Source Code

$ git clone git@github.com:armbian/build.git armbian
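A note on the clone URL: the SSH form above requires a GitHub SSH key; without one, the build script's self-update step prints "Permission denied (publickey)" messages, as can be seen in the build logs later in this post. Cloning over HTTPS avoids the issue. A sketch (`to_https` is just an illustrative helper):

```shell
# Rewrite a GitHub SSH URL into its HTTPS equivalent.
to_https() {
    echo "$1" | sed 's#^git@github\.com:#https://github.com/#'
}

url=$(to_https "git@github.com:armbian/build.git")
echo "$url"
# git clone "$url" armbian
```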

2. Test Compiling

We now try to compile Armbian for our very FIRST attempt. This will download a lot of required packages, including all the Linaro toolchains, U-Boot, etc., and save them under the folder cache:

$ cd ./armbian
$ ./compile.sh
[ o.k. ] Using config file [ config-default.conf ]
[ warn ] This script requires root privileges, trying to use sudo
[sudo] password for jiapei:
[ o.k. ] Using config file [ config-default.conf ]
[ o.k. ] This script will try to update
Permission denied (publickey).
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.
Already on 'master'
Your branch is up-to-date with 'origin/master'.
[ o.k. ] Preparing [ host ]
[ o.k. ] Build host OS release [ xenial ]
[ o.k. ] Syncing clock [ host ]
[ .... ] Downloading [ gcc-linaro-arm-linux-gnueabihf-4.8-2014.04_linux ]
######################################################################## 100.0%
######################################################################## 100.0%
[ .... ] Verifying
gpg: keyring `/home/jiapei/Downloads/OperatingSystems/linux/distros/armbian/armbian/cache/.gpg/pubring.gpg' created
gpg: /home/jiapei/Downloads/OperatingSystems/linux/distros/armbian/armbian/cache/.gpg/trustdb.gpg: trustdb created
gpg: error reading key: public key not found
gpg: keyring `/home/jiapei/Downloads/OperatingSystems/linux/distros/armbian/armbian/cache/.gpg/secring.gpg' created
gpg: requesting key 8F427EAF from hkp server keyserver.ubuntu.com
gpg: key 8F427EAF: public key "Linaro Toolchain Builder <michael.hope+cbuild@linaro.org>" imported
gpg: Total number processed: 1
gpg: imported: 1 (RSA: 1)
gpg: assuming signed data in `gcc-linaro-arm-linux-gnueabihf-4.8-2014.04_linux.tar.xz'
gpg: Signature made Wed 16 Apr 2014 11:29:14 AM PDT using RSA key ID 8F427EAF
gpg: Good signature from "Linaro Toolchain Builder <michael.hope+cbuild@linaro.org>"
[ .... ] Extracting
[ o.k. ] Download complete
[ .... ] Downloading [ gcc-linaro-4.9.4-2017.01-x86_64_aarch64-linux-gnu ]
######################################################################## 100.0%
######################################################################## 100.0%
[ .... ] Verifying
[ .... ] Extracting
[ o.k. ] Download complete
[ .... ] Downloading [ gcc-linaro-4.9.4-2017.01-x86_64_arm-linux-gnueabi ]
######################################################################## 100.0%
######################################################################## 100.0%
[ .... ] Verifying
[ .... ] Extracting
[ o.k. ] Download complete
[ .... ] Downloading [ gcc-linaro-4.9.4-2017.01-x86_64_arm-linux-gnueabihf ]
######################################################################## 100.0%
######################################################################## 100.0%
[ .... ] Verifying
[ .... ] Extracting
[ o.k. ] Download complete
[ .... ] Downloading [ gcc-linaro-5.5.0-2017.10-x86_64_aarch64-linux-gnu ]
######################################################################## 100.0%
######################################################################## 100.0%
[ .... ] Verifying
[ .... ] Extracting
[ o.k. ] Download complete
[ .... ] Downloading [ gcc-linaro-5.5.0-2017.10-x86_64_arm-linux-gnueabi ]
######################################################################## 100.0%
######################################################################## 100.0%
[ .... ] Verifying
[ .... ] Extracting
[ o.k. ] Download complete
[ .... ] Downloading [ gcc-linaro-5.5.0-2017.10-x86_64_arm-linux-gnueabihf ]
######################################################################## 100.0%
######################################################################## 100.0%
[ .... ] Verifying
[ .... ] Extracting
[ o.k. ] Download complete
[ .... ] Downloading [ gcc-linaro-6.4.1-2017.11-x86_64_arm-linux-gnueabihf ]
######################################################################## 100.0%
######################################################################## 100.0%
[ .... ] Verifying
[ .... ] Extracting
[ o.k. ] Download complete
[ .... ] Downloading [ gcc-linaro-6.4.1-2017.11-x86_64_aarch64-linux-gnu ]
######################################################################## 100.0%
######################################################################## 100.0%
[ .... ] Verifying
[ .... ] Extracting
[ o.k. ] Download complete
[ .... ] Downloading [ gcc-linaro-7.2.1-2017.11-x86_64_aarch64-linux-gnu ]
######################################################################## 100.0%
######################################################################## 100.0%
[ .... ] Verifying
[ .... ] Extracting
[ o.k. ] Download complete
[ .... ] Downloading [ gcc-linaro-7.2.1-2017.11-x86_64_arm-linux-gnueabihf ]
######################################################################## 100.0%
######################################################################## 100.0%
[ .... ] Verifying
[ .... ] Extracting
[ o.k. ] Download complete

Then, a text-mode configuration GUI will pop up on the screen for you to make selections:

U-boot and kernel packages
Show a kernel configuration menu before compilation
Officially supported boards

Clearly, our board Orange Pi Plus 2 is NOT in the list. Therefore, we select Cancel here for now, and some ERROR messages will be generated as follows.

[ error ] ERROR in function source [ main.sh:198 ]
[ error ] No kernel branch selected
[ o.k. ] Process terminated

3. Prepare The Board Configuration File

We double-check the officially supported boards:

$ cd ./config/boards
$ ls -1 *orangepi*
orangepi2.conf
orangepi2g-iot.csc
orangepi.eos
orangepilite.conf
orangepimini.eos
orangepione.conf
orangepipc2.conf
orangepipc.conf
orangepipcplus.conf
orangepiplus2e.conf
orangepiplus.conf
orangepiprime.conf
orangepi-r1.conf
orangepiwin.conf
orangepizero.conf
orangepizeroplus2-h3.conf
orangepizeroplus2-h5.conf
orangepizeroplus.conf

As we can see, our board Orange Pi Plus 2 is definitely NOT officially supported. Because the differences between the Orange Pi Plus 2 and the Orange Pi Plus 2e are trivial, we configure our Orange Pi Plus 2 board as an Orange Pi Plus 2e board:

$ cp orangepiplus2e.conf orangepiplus2.conf

Afterwards, we modify the file content in orangepiplus2.conf manually:

$ vim orangepiplus2.conf

and the file content is modified correspondingly to:

# H3 quad core 2GB RAM WiFi eMMC
BOARD_NAME="Orange Pi+ 2"
BOARDFAMILY="sun8i"
BOOTCONFIG="orangepi_plus2_defconfig"
#
MODULES="8189fs #w1-sunxi #w1-gpio #w1-therm #gc2035 #vfe_v4l2 #sunxi-cir"
MODULES_NEXT="8189fs"
CPUMIN="408000"
CPUMAX="1296000"
#
KERNEL_TARGET="default,next,dev"
CLI_TARGET="stretch,xenial:next"
DESKTOP_TARGET="xenial:default"
#
CLI_BETA_TARGET=""
DESKTOP_BETA_TARGET=""
#
RECOMMENDED="Ubuntu_xenial_default_desktop:90,Debian_stretch_next:75"
#
BOARDRATING=""
CHIP="http://docs.armbian.com/Hardware_Allwinner-H3/"
HARDWARE="http://www.orangepi.org/orangepiplus2/"
FORUMS="http://forum.armbian.com/index.php/forum/13-allwinner-h3/"
BUY="http://s.click.aliexpress.com/e/VbA6AEq"

4. Prepare Board Configuration File for U-Boot

Then, we recompile Armbian with some particular options (please refer to Armbian Build Options for parameter details):

$ cd ../../
$ ./compile.sh BOARD="orangepiplus2" BRANCH="next" KERNEL_ONLY="yes" KERNEL_CONFIGURE="no"
[ o.k. ] Using config file [ config-default.conf ]
[ warn ] This script requires root privileges, trying to use sudo
[sudo] password for jiapei:
[ o.k. ] Using config file [ config-default.conf ]
[ o.k. ] This script will try to update
Permission denied (publickey).
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.
Already on 'master'
Your branch is up-to-date with 'origin/master'.
[ o.k. ] Command line: setting BOARD to [ orangepiplus2 ]
[ o.k. ] Command line: setting BRANCH to [ next ]
[ o.k. ] Command line: setting KERNEL_ONLY to [ no ]
[ o.k. ] Command line: setting KERNEL_CONFIGURE to [ no ]
[ o.k. ] Preparing [ host ]
[ o.k. ] Build host OS release [ xenial ]
[ o.k. ] Syncing clock [ host ]

Again, the text-mode GUI will pop up on the screen for you to make selections; here we select Ubuntu Xenial 16.04 LTS and Image with console interface (server), respectively.

Target OS release
Target image type
[ o.k. ] Downloading sources 
[ o.k. ] Checking git sources [ u-boot v2017.11 ]
[ .... ] Creating local copy
[ .... ] Fetching updates
remote: Counting objects: 13853, done.
remote: Compressing objects: 100% (12500/12500), done.
remote: Total 13853 (delta 2493), reused 5244 (delta 1104)
Receiving objects: 100% (13853/13853), 17.27 MiB | 1.00 MiB/s, done.
Resolving deltas: 100% (2493/2493), done.
From git://git.denx.de/u-boot
* tag v2017.11 -> FETCH_HEAD
[ .... ] Checking out
[ o.k. ] Checking git sources [ linux-mainline linux-4.14.y ]
[ .... ] Creating local copy
[ .... ] Fetching updates
remote: Counting objects: 65193, done.
remote: Compressing objects: 100% (60945/60945), done.
remote: Total 65193 (delta 5919), reused 26531 (delta 3272)
Receiving objects: 100% (65193/65193), 172.40 MiB | 3.26 MiB/s, done.
Resolving deltas: 100% (5919/5919), done.
From git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable
* branch linux-4.14.y -> FETCH_HEAD
* [new branch] linux-4.14.y -> origin/linux-4.14.y
[ .... ] Checking out
[ o.k. ] Checking git sources [ sunxi-tools master ]
[ .... ] Creating local copy
[ .... ] Fetching updates
remote: Counting objects: 83, done.
remote: Compressing objects: 100% (78/78), done.
remote: Total 83 (delta 13), reused 33 (delta 4), pack-reused 0
Unpacking objects: 100% (83/83), done.
From https://github.com/linux-sunxi/sunxi-tools
* branch master -> FETCH_HEAD
* [new branch] master -> origin/master
[ .... ] Checking out
[ o.k. ] Checking git sources [ rkbin-tools master ]
[ .... ] Creating local copy
[ .... ] Fetching updates
remote: Counting objects: 232, done.
remote: Compressing objects: 100% (182/182), done.
remote: Total 232 (delta 57), reused 209 (delta 50), pack-reused 0
Receiving objects: 100% (232/232), 22.60 MiB | 8.32 MiB/s, done.
Resolving deltas: 100% (57/57), done.
From https://github.com/rockchip-linux/rkbin
* branch master -> FETCH_HEAD
* [new branch] master -> origin/master
[ .... ] Checking out
[ o.k. ] Checking git sources [ marvell-tools A3700_utils-armada-17.10 ]
[ .... ] Creating local copy
[ .... ] Fetching updates
remote: Counting objects: 294, done.
remote: Compressing objects: 100% (150/150), done.
remote: Total 294 (delta 151), reused 229 (delta 140), pack-reused 0
Receiving objects: 100% (294/294), 5.80 MiB | 4.40 MiB/s, done.
Resolving deltas: 100% (151/151), done.
From https://github.com/MarvellEmbeddedProcessors/A3700-utils-marvell
* branch A3700_utils-armada-17.10 -> FETCH_HEAD
* [new branch] A3700_utils-armada-17.10 -> origin/A3700_utils-armada-17.10
[ .... ] Checking out
[ o.k. ] Checking git sources [ odroidc2-blobs master ]
[ .... ] Creating local copy
[ .... ] Fetching updates
remote: Counting objects: 13, done.
remote: Compressing objects: 100% (13/13), done.
remote: Total 13 (delta 0), reused 11 (delta 0), pack-reused 0
Unpacking objects: 100% (13/13), done.
From https://github.com/armbian/odroidc2-blobs
* branch master -> FETCH_HEAD
* [new branch] master -> origin/master
[ .... ] Checking out
[ o.k. ] Compiling [ sunxi-tools ]
Setting version information: 5c19710

[ o.k. ] Installing [ rkbin-tools ]
[ o.k. ] Cleaning output/debs for [ orangepiplus2 next ]
[ o.k. ] Cleaning [ u-boot/v2017.11 ]
[ o.k. ] Compiling u-boot [ 2017.11 ]
[ o.k. ] Compiler version [ arm-linux-gnueabihf-gcc 7.2.1 ]
[ .... ] Checking out sources
[ o.k. ] Cleaning [ u-boot/v2017.11 ]
[ o.k. ] Started patching process for [ u-boot sunxi-orangepiplus2-next ]
[ o.k. ] Looking for user patches in [ userpatches/u-boot/u-boot-sunxi ]
[ o.k. ] * [l][c] 0020-sunxi-call-fdt_fixup_ethernet-again-to-set-macaddr-f.patch
[ o.k. ] * [l][c] 4kfix-limit-screen-to-full-hd.patch
[ o.k. ] * [l][c] Add-A20-Olimex-SOM204-EVB-board.patch
[ o.k. ] * [l][c] add-a20-olinuxino-micro-emmc-support.patch
[ o.k. ] * [l][c] add-a20-optional-eMMC.patch
[ o.k. ] * [l][c] add-bananapi-bpi-m2-zero.patch
[ o.k. ] * [l][c] add-beelink-x2.patch
[ o.k. ] * [l][c] add-nanopi-air-emmc.patch
[ o.k. ] * [l][c] add-nanopi-duo.patch
[ o.k. ] * [l][c] add-nanopi-m1-plus2-emmc.patch
[ o.k. ] * [l][c] add-nanopineoplus2.patch
[ o.k. ] * [l][c] add-orangepi-plus2-emmc.patch
[ o.k. ] * [l][c] add-orangepi-zeroplus.patch
[ o.k. ] * [l][c] add-orangepi-zeroplus2_h3.patch
[ o.k. ] * [l][c] add-sunvell-r69.patch
[ o.k. ] * [l][c] add-tritium.patch
[ o.k. ] * [l][c] add_emmc_olinuxino_a64.patch
[ o.k. ] * [l][c] add_emmc_orangepiwin.patch
[ o.k. ] * [l][c] adjust-default-dram-clockspeeds.patch
[ o.k. ] * [l][c] adjust-small-boards-cpufreq.patch
[ o.k. ] * [l][c] enable-DT-overlays-support.patch
[ o.k. ] * [l][c] enable-autoboot-keyed.patch
[ o.k. ] * [l][c] fdt-setprop-fix-unaligned-access.patch
[ o.k. ] * [l][c] fix-sdcard-detect-bpi-m2z.patch
[ o.k. ] * [l][c] fix-sunxi-gpio-driver.patch
[ o.k. ] * [l][c] fix-usb1-vbus-opiwin.patch
[ o.k. ] * [l][c] h3-Fix-PLL1-setup-to-never-use-dividers.patch
[ o.k. ] * [l][c] h3-enable-power-led.patch
[ o.k. ] * [l][c] h3-set-safe-axi_apb-clock-dividers.patch
[ o.k. ] * [l][c] lower-default-DRAM-freq-A64-H5.patch
[ o.k. ] * [l][c] lower-default-cpufreq-H5.patch
[ o.k. ] * [l][c] sun8i-set-machid.patch
[ o.k. ] * [l][c] video-fix-vsync-polarity-bits.patch
[ o.k. ] * [l][b] workaround-reboot-is-poweroff-olimex-a20.patch
HOSTCC scripts/basic/fixdep
HOSTCC scripts/kconfig/conf.o
SHIPPED scripts/kconfig/zconf.tab.c
SHIPPED scripts/kconfig/zconf.lex.c
SHIPPED scripts/kconfig/zconf.hash.c
HOSTCC scripts/kconfig/zconf.tab.o
HOSTLD scripts/kconfig/conf
***
*** Can't find default configuration "arch/../configs/orangepi_plus2_defconfig"!
***
scripts/kconfig/Makefile:121: recipe for target 'orangepi_plus2_defconfig' failed
make[1]: *** [orangepi_plus2_defconfig] Error 1
Makefile:479: recipe for target 'orangepi_plus2_defconfig' failed
make: *** [orangepi_plus2_defconfig] Error 2
sed: can't read .config: No such file or directory
scripts/kconfig/conf --silentoldconfig Kconfig
***
*** Configuration file ".config" not found!
***
*** Please run some configurator (e.g. "make oldconfig" or
*** "make menuconfig" or "make xconfig").
***
scripts/kconfig/Makefile:46: recipe for target 'silentoldconfig' failed
make[2]: *** [silentoldconfig] Error 1
Makefile:479: recipe for target 'silentoldconfig' failed
make[1]: *** [silentoldconfig] Error 2
make: *** No rule to make target 'include/config/auto.conf', needed by 'include/config/uboot.release'. Stop.
[ error ] ERROR in function compile_uboot [ compilation.sh:156 ]
[ error ] U-boot compilation failed
[ o.k. ] Process terminated

We end up with ERROR messages again. Clearly, this is because U-Boot does NOT provide orangepi_plus2_defconfig. Therefore, we apply the same trick to the U-Boot board configuration, in two steps:

1) STEP 1:

$ cd ./cache/sources/u-boot/v2017.11/configs
$ ls -1 *orangepi*
orangepi_2_defconfig
orangepi_2_defconfig.orig
orangepi_lite_defconfig
orangepi_one_defconfig
orangepi_pc2_defconfig
orangepi_pc2_defconfig.orig
orangepi_pc_defconfig
orangepi_pc_plus_defconfig
orangepi_plus2e_defconfig
orangepi_plus2e_defconfig.orig
orangepi_plus_defconfig
orangepi_plus_defconfig.orig
orangepi_prime_defconfig
orangepi_win_defconfig
orangepi_win_defconfig.orig
orangepi_zero_defconfig
orangepi_zero_defconfig.orig
orangepi_zero_plus2_defconfig
orangepi_zero_plus2_h3_defconfig
orangepizero_plus_defconfig

Clearly, orangepi_plus2_defconfig is NOT in the list. Therefore, we do:

$ sudo cp orangepi_plus2e_defconfig orangepi_plus2_defconfig
$ sudo vim orangepi_plus2_defconfig

and ensure any plus2e is now plus2.
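If you prefer not to edit in vim, the same renaming can be done with a single sed command. A sketch, run in the U-Boot configs directory (guarded so it is a no-op elsewhere; in this file only the literal string plus2e needs replacing):

```shell
# Replace every occurrence of "plus2e" with "plus2" in the copied defconfig.
if [ -f orangepi_plus2_defconfig ]; then
    sudo sed -i 's/plus2e/plus2/g' orangepi_plus2_defconfig
fi
```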

$ cat orangepi_plus2_defconfig
CONFIG_ARM=y
CONFIG_ARCH_SUNXI=y
CONFIG_MACH_SUN8I_H3=y
CONFIG_DRAM_CLK=624
CONFIG_DRAM_ZQ=3881979
CONFIG_DRAM_ODT_EN=y
CONFIG_MACPWR="PD6"
CONFIG_MMC_SUNXI_SLOT_EXTRA=2
CONFIG_DEFAULT_DEVICE_TREE="sun8i-h3-orangepi-plus2"
# CONFIG_SYS_MALLOC_CLEAR_ON_INIT is not set
CONFIG_SPL=y
CONFIG_SPL_I2C_SUPPORT=y
# CONFIG_CMD_FLASH is not set
# CONFIG_CMD_FPGA is not set
# CONFIG_SPL_DOS_PARTITION is not set
# CONFIG_SPL_ISO_PARTITION is not set
# CONFIG_SPL_EFI_PARTITION is not set
CONFIG_SUN8I_EMAC=y
CONFIG_SY8106A_POWER=y
CONFIG_USB_EHCI_HCD=y
CONFIG_SYS_USB_EVENT_POLL_VIA_INT_QUEUE=y

2) STEP 2:

$ cd ./cache/sources/u-boot/v2017.11/arch/arm/dts
$ ls -1 *sun8i-h3-orangepi*
sun8i-h3-orangepi-2.dts
sun8i-h3-orangepi-lite.dts
sun8i-h3-orangepi-one.dts
sun8i-h3-orangepi-pc.dts
sun8i-h3-orangepi-pc-plus.dts
sun8i-h3-orangepi-plus2e.dts
sun8i-h3-orangepi-plus.dts
sun8i-h3-orangepi-zeroplus2.dts

Clearly, sun8i-h3-orangepi-plus2.dts is NOT in the list. Therefore, we do:

$ sudo cp sun8i-h3-orangepi-plus2e.dts sun8i-h3-orangepi-plus2.dts
$ sudo vim sun8i-h3-orangepi-plus2.dts

and ensure any 2e/2E is now 2.
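A quick way to make sure no 2e/2E string survived the edit is to grep the new file. A sketch (`check_renamed` is an illustrative helper; in this particular .dts, any remaining case-insensitive "2e" is a leftover):

```shell
check_renamed() {
    # Report any line still containing "2e"/"2E"; print "clean" if none.
    if grep -niE '2e' "$1"; then
        echo "leftover 2e/2E strings found, edit again"
    else
        echo "clean"
    fi
}

# check_renamed sun8i-h3-orangepi-plus2.dts
```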

$ cat sun8i-h3-orangepi-plus2.dts
...... // ignore
/*
* The Orange Pi Plus 2 is an extended version of the Orange Pi PC Plus 2,
* with 2G RAM and an external gbit ethernet phy.
*/

#include "sun8i-h3-orangepi-pc-plus.dts"

/ {
model = "Xunlong Orange Pi Plus 2";
compatible = "xunlong,orangepi-plus2", "allwinner,sun8i-h3";

reg_gmac_3v3: gmac-3v3 {
compatible = "regulator-fixed";
pinctrl-names = "default";
pinctrl-0 = <&gmac_power_pin_orangepi>;
regulator-name = "gmac-3v3";
regulator-min-microvolt = <3300000>;
regulator-max-microvolt = <3300000>;
startup-delay-us = <100000>;
enable-active-high;
gpio = <&pio 3 6 GPIO_ACTIVE_HIGH>;
};
};

&emac {
/* The Orange Pi Plus 2 uses an external gbit phy */
pinctrl-names = "default";
pinctrl-0 = <&emac_rgmii_pins>;
phy-supply = <&reg_gmac_3v3>;
phy-mode = "rgmii";
/delete-property/allwinner,use-internal-phy;
};

&pio {
gmac_power_pin_orangepi: gmac_power_pin@0 {
allwinner,pins = "PD6";
allwinner,function = "gpio_out";
allwinner,drive = <SUN4I_PINCTRL_10_MA>;
allwinner,pull = <SUN4I_PINCTRL_NO_PULL>;
};
};

5. Build the Newest Supported Linux Kernels

Before we build the system again, we just need to ensure that our default python is version 2 instead of 3; otherwise, you'll get an ERROR message like:

ImportError: No module named _libfdt
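A quick check, plus one non-invasive way to point python at Python 2 for the build shell only. A sketch (the `py_major` helper and the ~/bin location are illustrative assumptions):

```shell
# Print the major version of whatever interpreter the given name resolves to.
py_major() {
    "$1" -c 'import sys; print(sys.version_info[0])' 2>/dev/null
}

if command -v python >/dev/null; then
    py_major python    # should report 2 before running ./compile.sh
fi

# If it reports 3, shadow it with a python -> python2 symlink for this shell:
#   mkdir -p ~/bin && ln -sf "$(command -v python2)" ~/bin/python
#   export PATH=~/bin:$PATH
```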

NOW, we build the system AGAIN:

$ ./compile.sh BOARD="orangepiplus2" BRANCH="next" KERNEL_ONLY="yes" KERNEL_CONFIGURE="no"
[ o.k. ] Using config file [ config-default.conf ]
[ warn ] This script requires root privileges, trying to use sudo
[ o.k. ] Using config file [ config-default.conf ]
[ o.k. ] This script will try to update
Permission denied (publickey).
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.
Already on 'master'
Your branch is up-to-date with 'origin/master'.
[ o.k. ] Command line: setting BOARD to [ orangepiplus2 ]
[ o.k. ] Command line: setting BRANCH to [ next ]
[ o.k. ] Command line: setting KERNEL_ONLY to [ yes ]
[ o.k. ] Command line: setting KERNEL_CONFIGURE to [ no ]
[ o.k. ] Preparing [ host ]
[ o.k. ] Build host OS release [ xenial ]
[ o.k. ] Syncing clock [ host ]
[ o.k. ] Downloading sources
[ o.k. ] Checking git sources [ u-boot v2017.11 ]
[ .... ] Checking out
[ o.k. ] Checking git sources [ linux-mainline linux-4.14.y ]
[ .... ] Up to date
[ o.k. ] Checking git sources [ sunxi-tools master ]
[ .... ] Up to date
[ o.k. ] Checking git sources [ rkbin-tools master ]
[ .... ] Up to date
[ o.k. ] Checking git sources [ marvell-tools A3700_utils-armada-17.10 ]
[ .... ] Up to date
[ o.k. ] Checking git sources [ odroidc2-blobs master ]
[ .... ] Up to date
[ o.k. ] Cleaning output/debs for [ orangepiplus2 next ]
[ o.k. ] Cleaning [ u-boot/v2017.11 ]
[ o.k. ] Compiling u-boot [ 2017.11 ]
[ o.k. ] Compiler version [ arm-linux-gnueabihf-gcc 7.2.1 ]
[ .... ] Checking out sources
[ o.k. ] Cleaning [ u-boot/v2017.11 ]
[ o.k. ] Started patching process for [ u-boot sunxi-orangepiplus2-next ]
[ o.k. ] Looking for user patches in [ userpatches/u-boot/u-boot-sunxi ]
[ o.k. ] * [l][c] 0020-sunxi-call-fdt_fixup_ethernet-again-to-set-macaddr-f.patch
[ o.k. ] * [l][c] 4kfix-limit-screen-to-full-hd.patch
[ o.k. ] * [l][c] Add-A20-Olimex-SOM204-EVB-board.patch
[ o.k. ] * [l][c] add-a20-olinuxino-micro-emmc-support.patch
[ o.k. ] * [l][c] add-a20-optional-eMMC.patch
[ o.k. ] * [l][c] add-bananapi-bpi-m2-zero.patch
[ o.k. ] * [l][c] add-beelink-x2.patch
[ o.k. ] * [l][c] add-nanopi-air-emmc.patch
[ o.k. ] * [l][c] add-nanopi-duo.patch
[ o.k. ] * [l][c] add-nanopi-m1-plus2-emmc.patch
[ o.k. ] * [l][c] add-nanopineoplus2.patch
[ o.k. ] * [l][c] add-orangepi-plus2-emmc.patch
[ o.k. ] * [l][c] add-orangepi-zeroplus.patch
[ o.k. ] * [l][c] add-orangepi-zeroplus2_h3.patch
[ o.k. ] * [l][c] add-sunvell-r69.patch
[ o.k. ] * [l][c] add-tritium.patch
[ o.k. ] * [l][c] add_emmc_olinuxino_a64.patch
[ o.k. ] * [l][c] add_emmc_orangepiwin.patch
[ o.k. ] * [l][c] adjust-default-dram-clockspeeds.patch
[ o.k. ] * [l][c] adjust-small-boards-cpufreq.patch
[ o.k. ] * [l][c] enable-DT-overlays-support.patch
[ o.k. ] * [l][c] enable-autoboot-keyed.patch
[ o.k. ] * [l][c] fdt-setprop-fix-unaligned-access.patch
[ o.k. ] * [l][c] fix-sdcard-detect-bpi-m2z.patch
[ o.k. ] * [l][c] fix-sunxi-gpio-driver.patch
[ o.k. ] * [l][c] fix-usb1-vbus-opiwin.patch
[ o.k. ] * [l][c] h3-Fix-PLL1-setup-to-never-use-dividers.patch
[ o.k. ] * [l][c] h3-enable-power-led.patch
[ o.k. ] * [l][c] h3-set-safe-axi_apb-clock-dividers.patch
[ o.k. ] * [l][c] lower-default-DRAM-freq-A64-H5.patch
[ o.k. ] * [l][c] lower-default-cpufreq-H5.patch
[ o.k. ] * [l][c] sun8i-set-machid.patch
[ o.k. ] * [l][c] video-fix-vsync-polarity-bits.patch
[ o.k. ] * [l][b] workaround-reboot-is-poweroff-olimex-a20.patch
HOSTCC scripts/basic/fixdep
HOSTCC scripts/kconfig/conf.o
HOSTCC scripts/kconfig/zconf.tab.o
HOSTLD scripts/kconfig/conf
#
# configuration written to .config
#
scripts/kconfig/conf --silentoldconfig Kconfig
.config:1120:warning: override: reassigning to symbol BOOTDELAY
#
# configuration written to .config
#
CHK include/config.h
UPD include/config.h
GEN include/autoconf.mk.dep
CFG u-boot.cfg
CFG spl/u-boot.cfg
GEN include/autoconf.mk
GEN spl/include/autoconf.mk
CHK include/config/uboot.release
CHK include/generated/timestamp_autogenerated.h
UPD include/generated/timestamp_autogenerated.h
UPD include/config/uboot.release
CHK include/generated/version_autogenerated.h
HOSTCC scripts/dtc/dtc.o
......

It took about 5 minutes to build everything.

......
patching file tools/include/tools/be_byteshift.h
patching file tools/include/tools/le_byteshift.h
CLEAN scripts/basic
CLEAN scripts/dtc
CLEAN scripts/kconfig
CLEAN scripts/mod
CLEAN scripts/selinux/genheaders
CLEAN scripts/selinux/mdp
CLEAN scripts
dpkg-deb: building package 'linux-dtb-next-sunxi' in '../linux-dtb-next-sunxi_5.41_armhf.deb'.
dpkg-deb: building package 'linux-headers-next-sunxi' in '../linux-headers-next-sunxi_5.41_armhf.deb'.
dpkg-deb: building package 'linux-image-next-sunxi' in '../linux-image-next-sunxi_5.41_armhf.deb'.
dpkg-genchanges: warning: package linux-libc-dev-next-sunxi in control file but not in files list
dpkg-genchanges: binary-only upload (no source code included)
dpkg-deb: building package 'linux-source-4.14.22-next-sunxi' in '/home/jiapei/Downloads/OperatingSystems/linux/distros/armbian/armbian/.tmp/linux-source-next-sunxi_5.41_all.deb'.
[ o.k. ] Kernel build done [ @host ]
[ o.k. ] Target directory [ /home/jiapei/Downloads/OperatingSystems/linux/distros/armbian/armbian/output/debs/ ]
[ o.k. ] File name [ linux-image-next-sunxi_5.41_armhf.deb ]
[ o.k. ] Runtime [ 5 min ]

PART C: Copy Linux Kernel DEBs for Orange Pi Plus 2

$ cd ./output/debs
$ ls -1 *
linux-dtb-next-sunxi_5.41_armhf.deb
linux-headers-next-sunxi_5.41_armhf.deb
linux-image-next-sunxi_5.41_armhf.deb
linux-source-next-sunxi_5.41_all.deb
linux-u-boot-next-orangepiplus2_5.41_armhf.deb

extra:

Five .deb files have been generated successfully, and the extra folder is empty.

2. Copy Built DEBs onto TF Card

Since Armbian Ubuntu Desktop has already been installed on our TF card, plugging the TF card back into my host computer automatically mounts it as /media/jiapei/ab9545b9-0d2d-4927-83f3-fae97ced83a9. We then copy all five .deb files onto the TF card:

$ cp *.deb /media/jiapei/ab9545b9-0d2d-4927-83f3-fae97ced83a9/home/orangepiplus2/
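Copies to flash media can fail silently, so it is worth verifying before unplugging the card. This is a sketch I'm adding rather than part of the original workflow: `verify_copy` is a hypothetical helper that flushes pending writes with `sync` and compares checksums between the source directory and the card.

```shell
# verify_copy SRC_DIR DEST_DIR
# Flush buffered writes, then compare md5 checksums of every .deb file
# in SRC_DIR against its copy in DEST_DIR. Returns non-zero on mismatch.
verify_copy() {
    src_dir=$1
    dest_dir=$2
    sync                                      # flush buffered writes to the card
    status=0
    for deb in "$src_dir"/*.deb; do
        name=$(basename "$deb")
        a=$(md5sum "$deb" | cut -d' ' -f1)
        b=$(md5sum "$dest_dir/$name" | cut -d' ' -f1)
        if [ "$a" = "$b" ]; then
            echo "OK   $name"
        else
            echo "FAIL $name"
            status=1
        fi
    done
    return $status
}

# Usage (mount point from above):
# verify_copy . /media/jiapei/ab9545b9-0d2d-4927-83f3-fae97ced83a9/home/orangepiplus2
```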

PART D: Install the Built DEBs, Remove the Old Kernel(s), and Wifi Configuration

Now we plug the TF card back into the Orange Pi Plus 2 board and boot into Armbian Ubuntu Desktop with kernel 3.4.113.

1. Install NEW Linux Kernels

A single command will do:

$ sudo dpkg -i *.deb
Install Built DEBs

It's OK for us NOT to upgrade u-boot.
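If you'd rather skip the u-boot package entirely instead of installing all five .deb files, a simple glob filter is enough. The sketch below (my addition, not from the original post) selects packages by the `u-boot` substring in the file names listed earlier:

```shell
# Build the list of packages to install, excluding the u-boot one.
# File names are the five .deb files generated above.
to_install=""
for deb in linux-dtb-next-sunxi_5.41_armhf.deb \
           linux-headers-next-sunxi_5.41_armhf.deb \
           linux-image-next-sunxi_5.41_armhf.deb \
           linux-source-next-sunxi_5.41_all.deb \
           linux-u-boot-next-orangepiplus2_5.41_armhf.deb; do
    case "$deb" in
        *u-boot*) ;;                              # leave the bootloader alone
        *) to_install="$to_install $deb" ;;
    esac
done
echo "will install:$to_install"
# then: sudo dpkg -i $to_install
```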

2. Remove OLD Linux Kernel(s)

Reboot the Orange Pi Plus 2 board, and you'll see that the NEW kernel 4.14.22 now boots successfully. Removing the old kernel(s) 3.4.113 is optional. Two commands will do.

dpkg --list | grep linux-image

lists all installed Linux kernels. Then remove any unwanted ones, for instance:

dpkg --purge linux-image-sun8i
Remove Old Kernels
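The two steps above can be combined into a small filter. This is a sketch of my own, assuming the package names shown in this post: `old_kernels` is a hypothetical helper that reads `dpkg --list` output and prints every linux-image package other than the new next-sunxi one.

```shell
# old_kernels: read `dpkg --list` output on stdin and print the names of
# installed linux-image packages that are NOT the new next-sunxi image.
old_kernels() {
    awk '/^ii[ \t]+linux-image/ && $2 !~ /next-sunxi/ { print $2 }'
}

# Usage (prints removal candidates, then purges them):
# dpkg --list | old_kernels
# dpkg --list | old_kernels | xargs -r sudo dpkg --purge
```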

3. Wifi Configuration

The Orange Pi Plus 2 board ships with 2.4 GHz Wifi support. To enable Wifi, make sure there are ONLY 3 effective lines in the file /etc/network/interfaces:

source /etc/network/interfaces.d/*
auto lo
iface lo inet loopback
Network Interfaces

Finally, any remaining Ubuntu Desktop issues can usually be resolved with a quick Google search.
