It seems TensorFlow evolves pretty fast. Today we are testing object tracking based on TensorFlow.

1. Environment

➜  ~ python
Python 3.6.6 (default, Sep 12 2018, 18:26:19)
[GCC 8.0.1 20180414 (experimental) [trunk revision 259383]] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> tf.__version__
'1.12.0-rc0'
>>> import cv2
>>> cv2.__version__
'3.4.3'

2. Object Tracking

2.1 Concepts

There are several fundamental concepts to be re-emphasized (here, we take a single object of interest as our example; there might be multiple objects of interest):

  • detection: You don't know whether there is an object of interest in the field of view; you only know after detection, and if such an object is in view, its location is given as well.
  • tracking: You know where the object of interest was. Based on that prior knowledge, you determine where the object is going to be next.
  • location: Both detection and tracking can be looked on as locating the object of interest.
  • recognition: Only after the object of interest has been located can more detailed information be recognized.
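To make these concepts concrete, here is a minimal, generic detect-then-track loop in Python. This is only a sketch, not the ADNet code tested below: detect() is a hypothetical stand-in for a real detector, input.mp4 is a hypothetical file, and cv2.TrackerKCF_create requires the opencv-contrib build of OpenCV.

import cv2

def detect(frame):
    # Hypothetical stand-in for a real detector (e.g. a CNN): here we simply
    # pretend the object occupies a fixed region so the sketch stays runnable.
    h, w = frame.shape[:2]
    return (w // 4, h // 4, w // 2, h // 2)   # (x, y, width, height)

cap = cv2.VideoCapture("input.mp4")           # hypothetical input video
tracker, box = None, None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if tracker is None:
        box = detect(frame)                   # detection: is the object there, and where?
        tracker = cv2.TrackerKCF_create()     # requires opencv-contrib-python
        tracker.init(frame, box)
    else:
        ok, box = tracker.update(frame)       # tracking: given where it was, where is it now?
        if not ok:                            # lost the object -> fall back to re-detection
            tracker = None
cap.release()

Recognition (e.g. identity or attributes) would only run after the object has been located by one of the two steps above.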

2.2 Testing

Now, let's test out the Deep Object Tracking Implementation in TensorFlow.

➜  tf-adnet-tracking git:(master) ✗ python runner.py by_dataset  --vid-path=./data/freeman1/
VIDEOIO ERROR: V4L: can't open camera by index 0
Test: 0.0
Ratio: 0.0
Frame Rate: 0.0
Width: 0.0
Height: 0.0
Brightness: 0.0
Contrast: 0.0
Saturation: 0.0
Hue: 0.0
Gain: 0.0
Exposure: 0.0
2018-11-04 19:20:54.997017: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:993] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2018-11-04 19:20:54.997514: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties:
name: GeForce GTX 980M major: 5 minor: 2 memoryClockRate(GHz): 1.1265
pciBusID: 0000:01:00.0
totalMemory: 3.94GiB freeMemory: 3.14GiB
2018-11-04 19:20:54.997565: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
2018-11-04 19:20:55.222053: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-11-04 19:20:55.222092: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988] 0
2018-11-04 19:20:55.222100: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0: N
2018-11-04 19:20:55.222278: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 2854 MB memory) -> physical GPU (device: 0, name: GeForce GTX 980M, pci bus id: 0000:01:00.0, compute capability: 5.2)
[2018-11-04 19:20:55,738] [networks] [INFO] all global variables initialized
[2018-11-04 19:20:55,865] [networks] [INFO] conv1/weights:0 : original weights assigned. [0]=[[[-0.08093996 -0.03
[2018-11-04 19:20:55,877] [networks] [INFO] conv1/biases:0 : original weights assigned. [0]=-1.5706328
[2018-11-04 19:20:55,922] [networks] [INFO] conv2/weights:0 : original weights assigned. [0]=[[[-0.00856966 -0.00
[2018-11-04 19:20:55,936] [networks] [INFO] conv2/biases:0 : original weights assigned. [0]=-0.06253582
[2018-11-04 19:20:56,016] [networks] [INFO] conv3/weights:0 : original weights assigned. [0]=[[[ 0.00339902 0.00
[2018-11-04 19:20:56,033] [networks] [INFO] conv3/biases:0 : original weights assigned. [0]=-0.071603894
[2018-11-04 19:20:56,236] [networks] [INFO] fc4/weights:0 : original weights assigned. [0]=[[[ 2.6850728e-03 2
[2018-11-04 19:20:56,254] [networks] [INFO] fc4/biases:0 : original weights assigned. [0]=0.10009623
[2018-11-04 19:20:56,286] [networks] [INFO] fc5/weights:0 : original weights assigned. [0]=[[[-0.01758034 0.01
[2018-11-04 19:20:56,310] [networks] [INFO] fc5/biases:0 : original weights assigned. [0]=0.13894661
[2018-11-04 19:20:56,332] [networks] [INFO] fc6_1/weights:0 : original weights assigned. [0]=[[[ 0.001359 0.01
[2018-11-04 19:20:56,358] [networks] [INFO] fc6_1/biases:0 : original weights assigned. [0]=0.01
[2018-11-04 19:20:56,379] [networks] [INFO] fc6_2/weights:0 : original weights assigned. [0]=[[[-1.23225665e-02
[2018-11-04 19:20:56,400] [networks] [INFO] fc6_2/biases:0 : original weights assigned. [0]=0.0
[]
[2018-11-04 19:20:56,482] [ADNetRunner] [INFO] ---- start dataset l=326
[2018-11-04 19:20:58,451] [ADNetRunner] [INFO] ADNetRunner.initial_finetune t=1541388056.484
[2018-11-04 19:20:59,194] [ADNetRunner] [DEBUG] redetection success=True
[2018-11-04 19:20:59,487] [ADNetRunner] [DEBUG] finetuned
[2018-11-04 19:21:01,750] [ADNetRunner] [DEBUG] finetuned
[2018-11-04 19:21:03,691] [ADNetRunner] [DEBUG] redetection success=False
[2018-11-04 19:21:03,975] [ADNetRunner] [DEBUG] finetuned
[2018-11-04 19:21:04,878] [ADNetRunner] [DEBUG] finetuned
[2018-11-04 19:21:07,524] [ADNetRunner] [DEBUG] finetuned
[2018-11-04 19:21:08,925] [ADNetRunner] [DEBUG] redetection success=True
[2018-11-04 19:21:09,249] [ADNetRunner] [DEBUG] finetuned
[2018-11-04 19:21:10,825] [ADNetRunner] [DEBUG] finetuned
[2018-11-04 19:21:11,768] [ADNetRunner] [DEBUG] redetection success=True
[2018-11-04 19:21:12,096] [ADNetRunner] [DEBUG] finetuned
[2018-11-04 19:21:14,021] [ADNetRunner] [DEBUG] finetuned
[2018-11-04 19:21:16,867] [ADNetRunner] [DEBUG] finetuned
[2018-11-04 19:21:19,710] [ADNetRunner] [DEBUG] finetuned
[2018-11-04 19:21:22,677] [ADNetRunner] [DEBUG] finetuned
[2018-11-04 19:21:25,711] [ADNetRunner] [DEBUG] finetuned
[2018-11-04 19:21:26,202] [ADNetRunner] [DEBUG] redetection success=True
[2018-11-04 19:21:26,567] [ADNetRunner] [DEBUG] finetuned
[2018-11-04 19:21:26,837] [ADNetRunner] [DEBUG] redetection success=True
[2018-11-04 19:21:27,152] [ADNetRunner] [DEBUG] finetuned
[2018-11-04 19:21:29,387] [ADNetRunner] [DEBUG] finetuned
[2018-11-04 19:21:31,371] [ADNetRunner] [INFO] ----
[2018-11-04 19:21:31,371] [ADNetRunner] [INFO] total: 34.8886
initial_finetune: 1.9675
tracking.do_action: 5.4896
tracking.save_samples.roi: 0.5116
tracking.save_samples.feat: 17.9738
tracking.online_finetune: 6.2583
[2018-11-04 19:21:31,371] [ADNetRunner] [INFO] 9.344 FPS

Face Tracking Based on Tensorflow

TensorFlow is always problematic, particularly for guys like me...

1. Configuration

Linux Kernel 4.17.3 + Ubuntu 18.04 + GCC 7.3.0 + Python

$ uname -r
4.17.3-041703-generic
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04 LTS
Release: 18.04
Codename: bionic
$ gcc --version
gcc (Ubuntu 7.3.0-16ubuntu3) 7.3.0
Copyright (C) 2017 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
$ python --version
Python 3.6.5

2. Install Bazel

From Bazel's official website Installing Bazel on Ubuntu:

$ echo "deb [arch=amd64] http://storage.googleapis.com/bazel-apt stable jdk1.8" | sudo tee /etc/apt/sources.list.d/bazel.list
$ curl https://bazel.build/bazel-release.pub.gpg | sudo apt-key add -
$ sudo apt update && sudo apt install bazel

The current Bazel release, 0.15.0, will be installed.

3. Let's Build Tensorflow from Source

3.1 Configuration

$ ./configure
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.google.protobuf.UnsafeUtil (file:~/.cache/bazel/_bazel_jiapei/install/ce085f519b017357185750fe457b4648/_embedded_binaries/A-server.jar) to field java.nio.Buffer.address
WARNING: Please consider reporting this to the maintainers of com.google.protobuf.UnsafeUtil
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
WARNING: --batch mode is deprecated. Please instead explicitly shut down your Bazel server using the command "bazel shutdown".
You have bazel 0.15.0 installed.
Please specify the location of python. [Default is /usr/bin/python]:


Found possible Python library paths:
/usr/local/lib/python3.6/dist-packages
/usr/lib/python3/dist-packages
Please input the desired Python library path to use. Default is [/usr/local/lib/python3.6/dist-packages]

Do you wish to build TensorFlow with jemalloc as malloc support? [Y/n]: Y
jemalloc as malloc support will be enabled for TensorFlow.

Do you wish to build TensorFlow with Google Cloud Platform support? [Y/n]: Y
Google Cloud Platform support will be enabled for TensorFlow.

Do you wish to build TensorFlow with Hadoop File System support? [Y/n]: Y
Hadoop File System support will be enabled for TensorFlow.

Do you wish to build TensorFlow with Amazon AWS Platform support? [Y/n]: Y
Amazon AWS Platform support will be enabled for TensorFlow.

Do you wish to build TensorFlow with Apache Kafka Platform support? [Y/n]: Y
Apache Kafka Platform support will be enabled for TensorFlow.

Do you wish to build TensorFlow with XLA JIT support? [y/N]: N
No XLA JIT support will be enabled for TensorFlow.

Do you wish to build TensorFlow with GDR support? [y/N]: N
No GDR support will be enabled for TensorFlow.

Do you wish to build TensorFlow with VERBS support? [y/N]: N
No VERBS support will be enabled for TensorFlow.

Do you wish to build TensorFlow with OpenCL SYCL support? [y/N]: N
No OpenCL SYCL support will be enabled for TensorFlow.

Do you wish to build TensorFlow with CUDA support? [y/N]: y
CUDA support will be enabled for TensorFlow.

Please specify the CUDA SDK version you want to use. [Leave empty to default to CUDA 9.0]: 9.2


Please specify the location where CUDA 9.2 toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:


Please specify the cuDNN version you want to use. [Leave empty to default to cuDNN 7.0]: 7.1.4


Please specify the location where cuDNN 7 library is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:


Do you wish to build TensorFlow with TensorRT support? [y/N]: N
No TensorRT support will be enabled for TensorFlow.

Please specify the NCCL version you want to use. [Leave empty to default to NCCL 1.3]:


Please specify a list of comma-separated Cuda compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.
Please note that each additional compute capability significantly increases your build time and binary size. [Default is: 5.2]


Do you want to use clang as CUDA compiler? [y/N]: N
nvcc will be used as CUDA compiler.

Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]:


Do you wish to build TensorFlow with MPI support? [y/N]: N
No MPI support will be enabled for TensorFlow.

Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native]:


Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]: N
Not configuring the WORKSPACE for Android builds.

Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See tools/bazel.rc for more details.
--config=mkl # Build with MKL support.
--config=monolithic # Config for mostly static monolithic build.
Configuration finished

3.2 FIRST Bazel Build

Follow Installing TensorFlow from Sources on Tensorflow's official website, have all required packages prepared, and run:

bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package

You will meet several ERROR messages, which require you to carry out the following modifications.

3.3 Modifications

  • File ~/.cache/bazel/_bazel_jiapei/051cd94cedc722db8c69e42ce51064b5/external/jpeg/BUILD:
config_setting(
name = "armeabi-v7a",
- values = {"android_cpu": "armeabi-v7a"},
+ values = {"cpu": "armeabi-v7a"},
)
  • Two symbolic links:
$ sudo ln -s /usr/local/cuda/include/crt/math_functions.hpp /usr/local/cuda/include/math_functions.hpp
$ sudo ln -s /usr/local/cuda/nvvm/libdevice/libdevice.10.bc /usr/local/cuda/nvvm/lib64/libdevice.10.bc

3.4 Bazel Build Again

bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package

It will take a while to have TensorFlow successfully built; on this machine, the build finished in roughly 82 minutes (an elapsed time of 4924 seconds, as shown below).

......
Target //tensorflow/tools/pip_package:build_pip_package up-to-date:
bazel-bin/tensorflow/tools/pip_package/build_pip_package
INFO: Elapsed time: 4924.451s, Critical Path: 197.91s
INFO: 8864 processes: 8864 local.
INFO: Build completed successfully, 11002 total actions

4. Tensorflow Installation

4.1 Build the pip File

$ bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
Wed Jul 11 00:09:09 PDT 2018 : === Preparing sources in dir: /tmp/tmp.5X0zsqfxYo
/media/jiapei/Data/Downloads/machinelearning/deeplearning/tensorflow/tensorflow /media/jiapei/Data/Downloads/machinelearning/deeplearning/tensorflow/tensorflow
/media/jiapei/Data/Downloads/machinelearning/deeplearning/tensorflow/tensorflow
Wed Jul 11 00:09:35 PDT 2018 : === Building wheel
warning: no files found matching '*.dll' under directory '*'
warning: no files found matching '*.lib' under directory '*'
warning: no files found matching '*.h' under directory 'tensorflow/include/tensorflow'
warning: no files found matching '*' under directory 'tensorflow/include/Eigen'
warning: no files found matching '*.h' under directory 'tensorflow/include/google'
warning: no files found matching '*' under directory 'tensorflow/include/third_party'
warning: no files found matching '*' under directory 'tensorflow/include/unsupported'
Wed Jul 11 00:10:58 PDT 2018 : === Output wheel file is in: /tmp/tensorflow_pkg

Let's have a look at what's been built:

$ ls /tmp/tensorflow_pkg/
tensorflow-1.9.0rc0-cp36-cp36m-linux_x86_64.whl

4.2 Pip Installation

And, let's have tensorflow-1.9.0rc0-cp36-cp36m-linux_x86_64.whl installed.

$ pip3 install /tmp/tensorflow_pkg/tensorflow-1.9.0rc0-cp36-cp36m-linux_x86_64.whl
Processing /tmp/tensorflow_pkg/tensorflow-1.9.0rc0-cp36-cp36m-linux_x86_64.whl
Requirement already satisfied: wheel>=0.26 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.9.0rc0) (0.31.1)
Requirement already satisfied: gast>=0.2.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.9.0rc0) (0.2.0)
Requirement already satisfied: numpy>=1.13.3 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.9.0rc0) (1.14.5)
Requirement already satisfied: astor>=0.6.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.9.0rc0) (0.7.1)
Requirement already satisfied: six>=1.10.0 in ./.local/lib/python3.6/site-packages (from tensorflow==1.9.0rc0) (1.11.0)
Requirement already satisfied: grpcio>=1.8.6 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.9.0rc0) (1.13.0)
Requirement already satisfied: absl-py>=0.1.6 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.9.0rc0) (0.2.2)
Requirement already satisfied: tensorboard<1.9.0,>=1.8.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.9.0rc0) (1.8.0)
Requirement already satisfied: termcolor>=1.1.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.9.0rc0) (1.1.0)
Requirement already satisfied: setuptools<=39.1.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.9.0rc0) (39.1.0)
Requirement already satisfied: protobuf>=3.6.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.9.0rc0) (3.6.0)
Requirement already satisfied: bleach==1.5.0 in /usr/local/lib/python3.6/dist-packages (from tensorboard<1.9.0,>=1.8.0->tensorflow==1.9.0rc0) (1.5.0)
Requirement already satisfied: html5lib==0.9999999 in /usr/local/lib/python3.6/dist-packages (from tensorboard<1.9.0,>=1.8.0->tensorflow==1.9.0rc0) (0.9999999)
Requirement already satisfied: werkzeug>=0.11.10 in ./.local/lib/python3.6/site-packages (from tensorboard<1.9.0,>=1.8.0->tensorflow==1.9.0rc0) (0.14.1)
Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.6/dist-packages (from tensorboard<1.9.0,>=1.8.0->tensorflow==1.9.0rc0) (2.6.11)
Successfully installed tensorflow-1.9.0rc0

Let's test if Tensorflow has been successfully installed.

4.3 Check Tensorflow

$ python
Python 3.6.5 (default, Apr 1 2018, 05:46:30)
[GCC 7.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> tf.__version__
'1.9.0-rc0'

5. Keras Installation

After successfully checking out Keras, we can easily have it installed with the command python setup.py install.

$ python setup.py install
running install
running bdist_egg
running egg_info
creating Keras.egg-info
writing Keras.egg-info/PKG-INFO
writing dependency_links to Keras.egg-info/dependency_links.txt
writing requirements to Keras.egg-info/requires.txt
writing top-level names to Keras.egg-info/top_level.txt
writing manifest file 'Keras.egg-info/SOURCES.txt'
reading manifest file 'Keras.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
writing manifest file 'Keras.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_py
creating build
creating build/lib
creating build/lib/docs
copying docs/autogen.py -> build/lib/docs
copying docs/__init__.py -> build/lib/docs
creating build/lib/keras
copying keras/activations.py -> build/lib/keras
copying keras/callbacks.py -> build/lib/keras
copying keras/constraints.py -> build/lib/keras
copying keras/initializers.py -> build/lib/keras
copying keras/losses.py -> build/lib/keras
copying keras/metrics.py -> build/lib/keras
copying keras/models.py -> build/lib/keras
copying keras/objectives.py -> build/lib/keras
copying keras/optimizers.py -> build/lib/keras
copying keras/regularizers.py -> build/lib/keras
copying keras/__init__.py -> build/lib/keras
creating build/lib/keras/applications
copying keras/applications/densenet.py -> build/lib/keras/applications
copying keras/applications/imagenet_utils.py -> build/lib/keras/applications
copying keras/applications/inception_resnet_v2.py -> build/lib/keras/applications
copying keras/applications/inception_v3.py -> build/lib/keras/applications
copying keras/applications/mobilenet.py -> build/lib/keras/applications
copying keras/applications/mobilenetv2.py -> build/lib/keras/applications
copying keras/applications/nasnet.py -> build/lib/keras/applications
copying keras/applications/resnet50.py -> build/lib/keras/applications
copying keras/applications/vgg16.py -> build/lib/keras/applications
copying keras/applications/vgg19.py -> build/lib/keras/applications
copying keras/applications/xception.py -> build/lib/keras/applications
copying keras/applications/__init__.py -> build/lib/keras/applications
creating build/lib/keras/backend
copying keras/backend/cntk_backend.py -> build/lib/keras/backend
copying keras/backend/common.py -> build/lib/keras/backend
copying keras/backend/tensorflow_backend.py -> build/lib/keras/backend
copying keras/backend/theano_backend.py -> build/lib/keras/backend
copying keras/backend/__init__.py -> build/lib/keras/backend
creating build/lib/keras/datasets
copying keras/datasets/boston_housing.py -> build/lib/keras/datasets
copying keras/datasets/cifar.py -> build/lib/keras/datasets
copying keras/datasets/cifar10.py -> build/lib/keras/datasets
copying keras/datasets/cifar100.py -> build/lib/keras/datasets
copying keras/datasets/fashion_mnist.py -> build/lib/keras/datasets
copying keras/datasets/imdb.py -> build/lib/keras/datasets
copying keras/datasets/mnist.py -> build/lib/keras/datasets
copying keras/datasets/reuters.py -> build/lib/keras/datasets
copying keras/datasets/__init__.py -> build/lib/keras/datasets
creating build/lib/keras/engine
copying keras/engine/base_layer.py -> build/lib/keras/engine
copying keras/engine/input_layer.py -> build/lib/keras/engine
copying keras/engine/network.py -> build/lib/keras/engine
copying keras/engine/saving.py -> build/lib/keras/engine
copying keras/engine/sequential.py -> build/lib/keras/engine
copying keras/engine/topology.py -> build/lib/keras/engine
copying keras/engine/training.py -> build/lib/keras/engine
copying keras/engine/training_arrays.py -> build/lib/keras/engine
copying keras/engine/training_generator.py -> build/lib/keras/engine
copying keras/engine/training_utils.py -> build/lib/keras/engine
copying keras/engine/__init__.py -> build/lib/keras/engine
creating build/lib/keras/layers
copying keras/layers/advanced_activations.py -> build/lib/keras/layers
copying keras/layers/convolutional.py -> build/lib/keras/layers
copying keras/layers/convolutional_recurrent.py -> build/lib/keras/layers
copying keras/layers/core.py -> build/lib/keras/layers
copying keras/layers/cudnn_recurrent.py -> build/lib/keras/layers
copying keras/layers/embeddings.py -> build/lib/keras/layers
copying keras/layers/local.py -> build/lib/keras/layers
copying keras/layers/merge.py -> build/lib/keras/layers
copying keras/layers/noise.py -> build/lib/keras/layers
copying keras/layers/normalization.py -> build/lib/keras/layers
copying keras/layers/pooling.py -> build/lib/keras/layers
copying keras/layers/recurrent.py -> build/lib/keras/layers
copying keras/layers/wrappers.py -> build/lib/keras/layers
copying keras/layers/__init__.py -> build/lib/keras/layers
creating build/lib/keras/legacy
copying keras/legacy/interfaces.py -> build/lib/keras/legacy
copying keras/legacy/layers.py -> build/lib/keras/legacy
copying keras/legacy/__init__.py -> build/lib/keras/legacy
creating build/lib/keras/preprocessing
copying keras/preprocessing/image.py -> build/lib/keras/preprocessing
copying keras/preprocessing/sequence.py -> build/lib/keras/preprocessing
copying keras/preprocessing/text.py -> build/lib/keras/preprocessing
copying keras/preprocessing/__init__.py -> build/lib/keras/preprocessing
creating build/lib/keras/utils
copying keras/utils/conv_utils.py -> build/lib/keras/utils
copying keras/utils/data_utils.py -> build/lib/keras/utils
copying keras/utils/generic_utils.py -> build/lib/keras/utils
copying keras/utils/io_utils.py -> build/lib/keras/utils
copying keras/utils/layer_utils.py -> build/lib/keras/utils
copying keras/utils/multi_gpu_utils.py -> build/lib/keras/utils
copying keras/utils/np_utils.py -> build/lib/keras/utils
copying keras/utils/test_utils.py -> build/lib/keras/utils
copying keras/utils/vis_utils.py -> build/lib/keras/utils
copying keras/utils/__init__.py -> build/lib/keras/utils
creating build/lib/keras/wrappers
copying keras/wrappers/scikit_learn.py -> build/lib/keras/wrappers
copying keras/wrappers/__init__.py -> build/lib/keras/wrappers
creating build/bdist.linux-x86_64
creating build/bdist.linux-x86_64/egg
creating build/bdist.linux-x86_64/egg/docs
copying build/lib/docs/autogen.py -> build/bdist.linux-x86_64/egg/docs
copying build/lib/docs/__init__.py -> build/bdist.linux-x86_64/egg/docs
creating build/bdist.linux-x86_64/egg/keras
copying build/lib/keras/activations.py -> build/bdist.linux-x86_64/egg/keras
creating build/bdist.linux-x86_64/egg/keras/applications
copying build/lib/keras/applications/densenet.py -> build/bdist.linux-x86_64/egg/keras/applications
copying build/lib/keras/applications/imagenet_utils.py -> build/bdist.linux-x86_64/egg/keras/applications
copying build/lib/keras/applications/inception_resnet_v2.py -> build/bdist.linux-x86_64/egg/keras/applications
copying build/lib/keras/applications/inception_v3.py -> build/bdist.linux-x86_64/egg/keras/applications
copying build/lib/keras/applications/mobilenet.py -> build/bdist.linux-x86_64/egg/keras/applications
copying build/lib/keras/applications/mobilenetv2.py -> build/bdist.linux-x86_64/egg/keras/applications
copying build/lib/keras/applications/nasnet.py -> build/bdist.linux-x86_64/egg/keras/applications
copying build/lib/keras/applications/resnet50.py -> build/bdist.linux-x86_64/egg/keras/applications
copying build/lib/keras/applications/vgg16.py -> build/bdist.linux-x86_64/egg/keras/applications
copying build/lib/keras/applications/vgg19.py -> build/bdist.linux-x86_64/egg/keras/applications
copying build/lib/keras/applications/xception.py -> build/bdist.linux-x86_64/egg/keras/applications
copying build/lib/keras/applications/__init__.py -> build/bdist.linux-x86_64/egg/keras/applications
creating build/bdist.linux-x86_64/egg/keras/backend
copying build/lib/keras/backend/cntk_backend.py -> build/bdist.linux-x86_64/egg/keras/backend
copying build/lib/keras/backend/common.py -> build/bdist.linux-x86_64/egg/keras/backend
copying build/lib/keras/backend/tensorflow_backend.py -> build/bdist.linux-x86_64/egg/keras/backend
copying build/lib/keras/backend/theano_backend.py -> build/bdist.linux-x86_64/egg/keras/backend
copying build/lib/keras/backend/__init__.py -> build/bdist.linux-x86_64/egg/keras/backend
copying build/lib/keras/callbacks.py -> build/bdist.linux-x86_64/egg/keras
copying build/lib/keras/constraints.py -> build/bdist.linux-x86_64/egg/keras
creating build/bdist.linux-x86_64/egg/keras/datasets
copying build/lib/keras/datasets/boston_housing.py -> build/bdist.linux-x86_64/egg/keras/datasets
copying build/lib/keras/datasets/cifar.py -> build/bdist.linux-x86_64/egg/keras/datasets
copying build/lib/keras/datasets/cifar10.py -> build/bdist.linux-x86_64/egg/keras/datasets
copying build/lib/keras/datasets/cifar100.py -> build/bdist.linux-x86_64/egg/keras/datasets
copying build/lib/keras/datasets/fashion_mnist.py -> build/bdist.linux-x86_64/egg/keras/datasets
copying build/lib/keras/datasets/imdb.py -> build/bdist.linux-x86_64/egg/keras/datasets
copying build/lib/keras/datasets/mnist.py -> build/bdist.linux-x86_64/egg/keras/datasets
copying build/lib/keras/datasets/reuters.py -> build/bdist.linux-x86_64/egg/keras/datasets
copying build/lib/keras/datasets/__init__.py -> build/bdist.linux-x86_64/egg/keras/datasets
creating build/bdist.linux-x86_64/egg/keras/engine
copying build/lib/keras/engine/base_layer.py -> build/bdist.linux-x86_64/egg/keras/engine
copying build/lib/keras/engine/input_layer.py -> build/bdist.linux-x86_64/egg/keras/engine
copying build/lib/keras/engine/network.py -> build/bdist.linux-x86_64/egg/keras/engine
copying build/lib/keras/engine/saving.py -> build/bdist.linux-x86_64/egg/keras/engine
copying build/lib/keras/engine/sequential.py -> build/bdist.linux-x86_64/egg/keras/engine
copying build/lib/keras/engine/topology.py -> build/bdist.linux-x86_64/egg/keras/engine
copying build/lib/keras/engine/training.py -> build/bdist.linux-x86_64/egg/keras/engine
copying build/lib/keras/engine/training_arrays.py -> build/bdist.linux-x86_64/egg/keras/engine
copying build/lib/keras/engine/training_generator.py -> build/bdist.linux-x86_64/egg/keras/engine
copying build/lib/keras/engine/training_utils.py -> build/bdist.linux-x86_64/egg/keras/engine
copying build/lib/keras/engine/__init__.py -> build/bdist.linux-x86_64/egg/keras/engine
copying build/lib/keras/initializers.py -> build/bdist.linux-x86_64/egg/keras
creating build/bdist.linux-x86_64/egg/keras/layers
copying build/lib/keras/layers/advanced_activations.py -> build/bdist.linux-x86_64/egg/keras/layers
copying build/lib/keras/layers/convolutional.py -> build/bdist.linux-x86_64/egg/keras/layers
copying build/lib/keras/layers/convolutional_recurrent.py -> build/bdist.linux-x86_64/egg/keras/layers
copying build/lib/keras/layers/core.py -> build/bdist.linux-x86_64/egg/keras/layers
copying build/lib/keras/layers/cudnn_recurrent.py -> build/bdist.linux-x86_64/egg/keras/layers
copying build/lib/keras/layers/embeddings.py -> build/bdist.linux-x86_64/egg/keras/layers
copying build/lib/keras/layers/local.py -> build/bdist.linux-x86_64/egg/keras/layers
copying build/lib/keras/layers/merge.py -> build/bdist.linux-x86_64/egg/keras/layers
copying build/lib/keras/layers/noise.py -> build/bdist.linux-x86_64/egg/keras/layers
copying build/lib/keras/layers/normalization.py -> build/bdist.linux-x86_64/egg/keras/layers
copying build/lib/keras/layers/pooling.py -> build/bdist.linux-x86_64/egg/keras/layers
copying build/lib/keras/layers/recurrent.py -> build/bdist.linux-x86_64/egg/keras/layers
copying build/lib/keras/layers/wrappers.py -> build/bdist.linux-x86_64/egg/keras/layers
copying build/lib/keras/layers/__init__.py -> build/bdist.linux-x86_64/egg/keras/layers
creating build/bdist.linux-x86_64/egg/keras/legacy
copying build/lib/keras/legacy/interfaces.py -> build/bdist.linux-x86_64/egg/keras/legacy
copying build/lib/keras/legacy/layers.py -> build/bdist.linux-x86_64/egg/keras/legacy
copying build/lib/keras/legacy/__init__.py -> build/bdist.linux-x86_64/egg/keras/legacy
copying build/lib/keras/losses.py -> build/bdist.linux-x86_64/egg/keras
copying build/lib/keras/metrics.py -> build/bdist.linux-x86_64/egg/keras
copying build/lib/keras/models.py -> build/bdist.linux-x86_64/egg/keras
copying build/lib/keras/objectives.py -> build/bdist.linux-x86_64/egg/keras
copying build/lib/keras/optimizers.py -> build/bdist.linux-x86_64/egg/keras
creating build/bdist.linux-x86_64/egg/keras/preprocessing
copying build/lib/keras/preprocessing/image.py -> build/bdist.linux-x86_64/egg/keras/preprocessing
copying build/lib/keras/preprocessing/sequence.py -> build/bdist.linux-x86_64/egg/keras/preprocessing
copying build/lib/keras/preprocessing/text.py -> build/bdist.linux-x86_64/egg/keras/preprocessing
copying build/lib/keras/preprocessing/__init__.py -> build/bdist.linux-x86_64/egg/keras/preprocessing
copying build/lib/keras/regularizers.py -> build/bdist.linux-x86_64/egg/keras
creating build/bdist.linux-x86_64/egg/keras/utils
copying build/lib/keras/utils/conv_utils.py -> build/bdist.linux-x86_64/egg/keras/utils
copying build/lib/keras/utils/data_utils.py -> build/bdist.linux-x86_64/egg/keras/utils
copying build/lib/keras/utils/generic_utils.py -> build/bdist.linux-x86_64/egg/keras/utils
copying build/lib/keras/utils/io_utils.py -> build/bdist.linux-x86_64/egg/keras/utils
copying build/lib/keras/utils/layer_utils.py -> build/bdist.linux-x86_64/egg/keras/utils
copying build/lib/keras/utils/multi_gpu_utils.py -> build/bdist.linux-x86_64/egg/keras/utils
copying build/lib/keras/utils/np_utils.py -> build/bdist.linux-x86_64/egg/keras/utils
copying build/lib/keras/utils/test_utils.py -> build/bdist.linux-x86_64/egg/keras/utils
copying build/lib/keras/utils/vis_utils.py -> build/bdist.linux-x86_64/egg/keras/utils
copying build/lib/keras/utils/__init__.py -> build/bdist.linux-x86_64/egg/keras/utils
creating build/bdist.linux-x86_64/egg/keras/wrappers
copying build/lib/keras/wrappers/scikit_learn.py -> build/bdist.linux-x86_64/egg/keras/wrappers
copying build/lib/keras/wrappers/__init__.py -> build/bdist.linux-x86_64/egg/keras/wrappers
copying build/lib/keras/__init__.py -> build/bdist.linux-x86_64/egg/keras
byte-compiling build/bdist.linux-x86_64/egg/docs/autogen.py to autogen.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/docs/__init__.py to __init__.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/activations.py to activations.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/applications/densenet.py to densenet.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/applications/imagenet_utils.py to imagenet_utils.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/applications/inception_resnet_v2.py to inception_resnet_v2.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/applications/inception_v3.py to inception_v3.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/applications/mobilenet.py to mobilenet.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/applications/mobilenetv2.py to mobilenetv2.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/applications/nasnet.py to nasnet.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/applications/resnet50.py to resnet50.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/applications/vgg16.py to vgg16.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/applications/vgg19.py to vgg19.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/applications/xception.py to xception.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/applications/__init__.py to __init__.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/backend/cntk_backend.py to cntk_backend.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/backend/common.py to common.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/backend/tensorflow_backend.py to tensorflow_backend.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/backend/theano_backend.py to theano_backend.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/backend/__init__.py to __init__.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/callbacks.py to callbacks.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/constraints.py to constraints.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/datasets/boston_housing.py to boston_housing.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/datasets/cifar.py to cifar.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/datasets/cifar10.py to cifar10.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/datasets/cifar100.py to cifar100.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/datasets/fashion_mnist.py to fashion_mnist.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/datasets/imdb.py to imdb.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/datasets/mnist.py to mnist.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/datasets/reuters.py to reuters.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/datasets/__init__.py to __init__.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/engine/base_layer.py to base_layer.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/engine/input_layer.py to input_layer.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/engine/network.py to network.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/engine/saving.py to saving.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/engine/sequential.py to sequential.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/engine/topology.py to topology.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/engine/training.py to training.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/engine/training_arrays.py to training_arrays.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/engine/training_generator.py to training_generator.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/engine/training_utils.py to training_utils.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/engine/__init__.py to __init__.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/initializers.py to initializers.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/layers/advanced_activations.py to advanced_activations.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/layers/convolutional.py to convolutional.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/layers/convolutional_recurrent.py to convolutional_recurrent.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/layers/core.py to core.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/layers/cudnn_recurrent.py to cudnn_recurrent.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/layers/embeddings.py to embeddings.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/layers/local.py to local.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/layers/merge.py to merge.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/layers/noise.py to noise.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/layers/normalization.py to normalization.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/layers/pooling.py to pooling.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/layers/recurrent.py to recurrent.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/layers/wrappers.py to wrappers.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/layers/__init__.py to __init__.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/legacy/interfaces.py to interfaces.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/legacy/layers.py to layers.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/legacy/__init__.py to __init__.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/losses.py to losses.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/metrics.py to metrics.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/models.py to models.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/objectives.py to objectives.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/optimizers.py to optimizers.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/preprocessing/image.py to image.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/preprocessing/sequence.py to sequence.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/preprocessing/text.py to text.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/preprocessing/__init__.py to __init__.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/regularizers.py to regularizers.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/utils/conv_utils.py to conv_utils.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/utils/data_utils.py to data_utils.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/utils/generic_utils.py to generic_utils.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/utils/io_utils.py to io_utils.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/utils/layer_utils.py to layer_utils.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/utils/multi_gpu_utils.py to multi_gpu_utils.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/utils/np_utils.py to np_utils.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/utils/test_utils.py to test_utils.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/utils/vis_utils.py to vis_utils.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/utils/__init__.py to __init__.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/wrappers/scikit_learn.py to scikit_learn.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/wrappers/__init__.py to __init__.cpython-36.pyc
byte-compiling build/bdist.linux-x86_64/egg/keras/__init__.py to __init__.cpython-36.pyc
creating build/bdist.linux-x86_64/egg/EGG-INFO
copying Keras.egg-info/PKG-INFO -> build/bdist.linux-x86_64/egg/EGG-INFO
copying Keras.egg-info/SOURCES.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying Keras.egg-info/dependency_links.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying Keras.egg-info/requires.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying Keras.egg-info/top_level.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
zip_safe flag not set; analyzing archive contents...
creating dist
creating 'dist/Keras-2.2.0-py3.6.egg' and adding 'build/bdist.linux-x86_64/egg' to it
removing 'build/bdist.linux-x86_64/egg' (and everything under it)
Processing Keras-2.2.0-py3.6.egg
Copying Keras-2.2.0-py3.6.egg to /usr/local/lib/python3.6/dist-packages
Adding Keras 2.2.0 to easy-install.pth file

Installed /usr/local/lib/python3.6/dist-packages/Keras-2.2.0-py3.6.egg
Processing dependencies for Keras==2.2.0
Searching for Keras-Preprocessing==1.0.1
Best match: Keras-Preprocessing 1.0.1
Adding Keras-Preprocessing 1.0.1 to easy-install.pth file

Using /usr/local/lib/python3.6/dist-packages
Searching for Keras-Applications==1.0.2
Best match: Keras-Applications 1.0.2
Adding Keras-Applications 1.0.2 to easy-install.pth file

Using /usr/local/lib/python3.6/dist-packages
Searching for h5py==2.8.0
Best match: h5py 2.8.0
Adding h5py 2.8.0 to easy-install.pth file

Using /usr/local/lib/python3.6/dist-packages
Searching for PyYAML==3.13
Best match: PyYAML 3.13
Adding PyYAML 3.13 to easy-install.pth file

Using /usr/local/lib/python3.6/dist-packages
Searching for six==1.11.0
Best match: six 1.11.0
Adding six 1.11.0 to easy-install.pth file

Using /usr/lib/python3/dist-packages
Searching for scipy==1.1.0
Best match: scipy 1.1.0
Adding scipy 1.1.0 to easy-install.pth file

Using /usr/local/lib/python3.6/dist-packages
Searching for numpy==1.14.5
Best match: numpy 1.14.5
Adding numpy 1.14.5 to easy-install.pth file

Using /usr/local/lib/python3.6/dist-packages
Finished processing dependencies for Keras==2.2.0

That's all for today. I think Python is seriously cool and handy indeed. I myself would still recommend PyTorch, but it seems TensorFlow and Keras are also very popular in North America.

I'm quite busy today. So, I'll just post some videos to show the performance of... I'm just not telling you what 😆 Click on the pictures to open an uploaded Facebook video.

1. Key Point Localization

1.1 Nobody - Yeah, it's ME, 10 YEARS ago. How time flies...

Nobody - Yeah, it's ME

1.2 FRANCK - What a Canonical Annotated Face Dataset

FRANCK

2. Orientation

01_al_pacino.mp4 02_alanis_morissette.mp4 03_anderson_cooper.mp4
04_angelina_jolie.mp4 05_bill_clinton.mp4 06_bill_gates.mp4
07_gloria_estefan.mp4 08_jet_li.mp4 09_julia_roberts.mp4
10_noam_chomsky.mp4 11_sylvester_stallone.mp4 12_tony_blair.mp4
13_victoria_beckham.mp4 14_vladimir_putin.mp4

Yesterday was Canada Day... ^_^ Happy Canada Day, everybody...

Happy Canada Day

I just noticed this news about ImageAI today, so I tested it for fun. I seriously don't want to talk about ImageAI too much; you can follow the author's GitHub, and it shouldn't be that hard to have everything done in minutes.

1. Preparation

1.1 Prerequisite Dependencies

As described on ImageAI's GitHub, multiple Python dependencies need to be installed:

  • Tensorflow
  • Numpy
  • SciPy
  • OpenCV
  • Pillow
  • Matplotlib
  • h5py
  • Keras

All packages can be easily installed with the command:

pip3 install PackageName

Afterwards, ImageAI can be installed by a single command:

pip3 install https://github.com/OlafenwaMoses/ImageAI/releases/download/2.0.1/imageai-2.0.1-py3-none-any.whl 

1.2 CNN Models

Two models are adopted, as used in the prediction and detection examples.

2. Examples

2.1 Prediction

Simple examples are given at https://github.com/OlafenwaMoses/ImageAI/tree/master/imageai/Prediction.

I modified FirstPrediction.py a bit as follows:

from imageai.Prediction import ImagePrediction
import os
import numpy as np

prediction = ImagePrediction()
prediction.setModelTypeAsResNet()
prediction.setModelPath("/media/jiapei/Data/Programs/Python/ImageAI/resnet50_weights_tf_dim_ordering_tf_kernels.h5")
prediction.loadModel()

predictions, probabilities = prediction.predictImage("./01.jpg", result_count=5)
for eachPrediction, eachProbability in zip(predictions, probabilities):
    print(eachPrediction + " : " + np.format_float_positional(eachProbability, precision=3))

For the 1st image:

ImageAI Test Image 1

From bash, you will get:

$ python FirstPrediction.py 
2018-07-02 18:12:09.275412: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
convertible : 52.45954394340515
sports_car : 37.61279881000519
pickup : 3.1751133501529694
car_wheel : 1.817503571510315
minivan : 1.748703233897686

2.2 Detection

Simple examples are given at https://github.com/OlafenwaMoses/ImageAI/tree/master/imageai/Detection.

A trivial modification is also made to FirstObjectDetection.py.

from imageai.Detection import ObjectDetection
import os
import numpy as np

detector = ObjectDetection()
detector.setModelTypeAsRetinaNet()
detector.setModelPath(os.path.join("/media/jiapei/Data/Programs/Python/ImageAI/resnet50_coco_best_v2.1.0.h5"))
detector.loadModel()

input_image = "./03.jpg"
output_image = "./imageai_output_03.jpg"
detections = detector.detectObjectsFromImage(input_image, output_image)

for eachObject in detections:
    print(eachObject["name"] + " : " + np.format_float_positional(eachObject["percentage_probability"], precision=3))

2.2.1 For the 2nd image:

ImageAI Test Image 2

From bash, you will get:

$ python FirstDetection.py 
Using TensorFlow backend.
2018-07-02 18:23:09.634037: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2018-07-02 18:23:11.744790: W tensorflow/core/framework/allocator.cc:101] Allocation of 68300800 exceeds 10% of system memory.
2018-07-02 18:23:11.958081: W tensorflow/core/framework/allocator.cc:101] Allocation of 68403200 exceeds 10% of system memory.
2018-07-02 18:23:12.174739: W tensorflow/core/framework/allocator.cc:101] Allocation of 68403200 exceeds 10% of system memory.
2018-07-02 18:23:12.433540: W tensorflow/core/framework/allocator.cc:101] Allocation of 68403200 exceeds 10% of system memory.
2018-07-02 18:23:12.694631: W tensorflow/core/framework/allocator.cc:101] Allocation of 68403200 exceeds 10% of system memory.
2018-07-02 18:23:16.267111: W tensorflow/core/framework/allocator.cc:101] Allocation of 64224000 exceeds 10% of system memory.
2018-07-02 18:23:16.370939: W tensorflow/core/framework/allocator.cc:101] Allocation of 64224000 exceeds 10% of system memory.
2018-07-02 18:23:16.403353: W tensorflow/core/framework/allocator.cc:101] Allocation of 67435200 exceeds 10% of system memory.
person : 55.596935749053955
person : 66.90954566001892
person : 67.96322464942932
person : 50.80411434173584
bicycle : 64.87574577331543
bicycle : 72.0929205417633
person : 80.02063035964966
person : 85.82872748374939
truck : 59.56767797470093
person : 66.69963002204895
person : 79.37889695167542
person : 64.81361389160156
bus : 65.35580158233643
bus : 97.16107249259949
bus : 68.20474863052368
truck : 67.65954494476318
truck : 77.73774266242981
bus : 69.96590495109558
truck : 69.54039335250854
car : 61.26518249511719
car : 59.965676069259644

And, under the program folder, you will get an output image:

ImageAI Output Image for Test Image 2

2.2.2 For the 3rd image:

ImageAI Test Image 3

From bash, you will get:

$ python FirstDetection.py 
Using TensorFlow backend.
2018-07-02 18:25:24.919351: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
person : 53.27633619308472
person : 52.71329879760742
person : 63.67729902267456
person : 55.00321388244629
person : 74.53054189682007
person : 51.54905915260315
motorcycle : 59.057921171188354
bus : 93.79504919052124
bus : 86.21828556060791
bus : 77.143394947052
person : 59.69809293746948
car : 71.79147601127625
car : 60.15858054161072
person : 62.758803367614746
person : 58.786213397979736
person : 76.49624943733215
car : 56.977421045303345
person : 67.86248683929443
person : 50.977784395217896
person : 52.3215651512146
motorcycle : 52.81376242637634
person : 76.79281234741211
motorcycle : 74.65972304344177
person : 55.96961975097656
person : 68.15704107284546
motorcycle : 56.21282458305359
bicycle : 71.78951501846313
motorcycle : 69.68616843223572
bicycle : 91.09067916870117
motorcycle : 83.16765427589417
motorcycle : 61.57424449920654

And, under the program folder, you will get an output image:

ImageAI Output Image for Test Image 3

It has been quite a while since my VOSM was last updated. My bad for sure. But today, I have it updated, and VOSM-0.3.5 is released. Just refer to the following 3 pages on GitHub:

We'll still explain a bit about how to use VOSM in the following:

1. Building

1.1 Building Commands

Currently, there are 7 types of models to be built. Namely, there are 7 choices for the parameter "-t":

  • SM
  • TM
  • AM
  • IA
  • FM
  • SMLTC
  • SMNDPROFILE
$ testsmbuilding -o "./output" -a "./annotations/training/" -i "./images/training/" -s "../VOSM/shapeinfo/IMM/ShapeInfo.txt" -d "IMM" -c 1 -t "SM" -l 4 -p 0.95
$ testsmbuilding -o "./output" -a "./annotations/training/" -i "./images/training/" -s "../VOSM/shapeinfo/IMM/ShapeInfo.txt" -d "IMM" -c 1 -t "TM" -l 4 -p 0.95
$ testsmbuilding -o "./output" -a "./annotations/training/" -i "./images/training/" -s "../VOSM/shapeinfo/IMM/ShapeInfo.txt" -d "IMM" -c 1 -t "AM" -l 4 -p 0.95
$ testsmbuilding -o "./output" -a "./annotations/training/" -i "./images/training/" -s "../VOSM/shapeinfo/IMM/ShapeInfo.txt" -d "IMM" -c 1 -t "IA" -l 4 -p 0.95
$ testsmbuilding -o "./output" -a "./annotations/training/" -i "./images/training/" -s "../VOSM/shapeinfo/IMM/ShapeInfo.txt" -d "IMM" -c 1 -t "FM" -l 4 -p 0.95
$ testsmbuilding -o "./output" -a "./annotations/training/" -i "./images/training/" -s "../VOSM/shapeinfo/IMM/ShapeInfo.txt" -d "IMM" -c 1 -t "SMLTC" -l 4 -p 0.95
$ testsmbuilding -o "./output" -a "./annotations/training/" -i "./images/training/" -s "../VOSM/shapeinfo/IMM/ShapeInfo.txt" -d "IMM" -c 1 -t "SMNDPROFILE" -l 4 -p 0.95

1.2 Output Folders

After these 7 commands, 9 folders will be generated:

  • Point2DDistributionModel
  • ShapeModel
  • TextureModel
  • AppearanceModel
  • AAMICIA
  • AFM
  • AXM
  • ASMLTCs
  • ASMNDProfiles

1.3 Output Images

Under folder TextureModel, 3 key images are generated:

Reference.jpg, edges.jpg, ellipses.jpg

Under folder AAMICIA, another 3 key images are generated:

m_IplImageTempFace.jpg (same as Reference.jpg), m_IplImageTempFaceX.jpg, m_IplImageTempFaceY.jpg

2. Fitting

2.1 Fitting Commands

Current VOSM supports 5 fitting methods.

  • ASM_PROFILEND
  • ASM_LTC
  • AAM_BASIC
  • AAM_CMUICIA
  • AAM_IAIA
$ testsmfitting -o "./output/" -t "ASM_PROFILEND" -i "./images/testing/" -a "./annotations/testing/" -d "IMM" -s true -r true
$ testsmfitting -o "./output/" -t "ASM_LTC" -i "./images/testing/" -a "./annotations/testing/" -d "IMM" -s true -r true
$ testsmfitting -o "./output/" -t "AAM_BASIC" -i "./images/testing/" -a "./annotations/testing/" -d "IMM" -s true -r true
$ testsmfitting -o "./output/" -t "AAM_CMUICIA" -i "./images/testing/" -a "./annotations/testing/" -d "IMM" -s true -r true
$ testsmfitting -o "./output/" -t "AAM_IAIA" -i "./images/testing/" -a "./annotations/testing/" -d "IMM" -s true -r true

2.2 Fitting Results

Let's just take ASM_PROFILEND as an example.

$ testsmfitting -o "./output/" -t "ASM_PROFILEND" -i "./images/testing/" -a "./annotations/testing/" -d "IMM" -s true -r true

All fitted images are generated under the current folder. Some are well fitted:

11-1m.jpg, 33-4m.jpg, 40-6m.jpg

Others are NOT well fitted:

12-3f.jpg, 20-6m.jpg, 23-4m.jpg

2.3 Process of Fitting

The fitting process can also be recorded for each image if the parameter "-r" is enabled by -r true. Let's take a look at what's in folder 40-6m.

00.jpg 01.jpg 02.jpg
03.jpg 04.jpg 05.jpg
06.jpg 07.jpg 08.jpg
09.jpg 10.jpg 11.jpg
12.jpg 13.jpg 14.jpg
15.jpg 16.jpg

Clearly, an image pyramid is adopted during the fitting process.

Downloading videos from YouTube is sometimes required. Some Chrome plugins can be used to download YouTube videos, such as Youtube Downloader. Other methods can also be found in various resources, such as WikiHow.

In this blog, I'm going to cite (copy and paste) from WikiHow how to download YouTube videos by using VLC.

STEP 1: Copy the YouTube URL

Find the YouTube video that you would like to download, and copy its URL.

STEP 2: Broadcast Youtube Video in VLC

Paste the URL under VLC Media Player->Media->Open Network Stream->Network Tab->Please enter a network URL:, and Play:

Video Broadcasting

STEP 3: Get the Real Location of Youtube Video

Then, copy the URL under Tools->Codec Information->Codec->Location:

Current Media Information

STEP 4: Save Video As

Finally, paste the URL in Chrome:

Save Video As

and Save video as....
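As a programmatic alternative to the VLC trick above (this is not part of the WikiHow steps), the third-party pytube package can also download a YouTube video. The snippet below is only a sketch, assuming a recent pytube release and a hypothetical URL:

from pytube import YouTube

# Hypothetical video URL; replace it with the one you copied in STEP 1.
url = "https://www.youtube.com/watch?v=VIDEO_ID"

yt = YouTube(url)
# Pick the highest-resolution progressive (video + audio) stream and save it locally.
stream = yt.streams.get_highest_resolution()
stream.download(output_path=".")
print("Saved:", stream.default_filename)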

1. IRC and Freenode

People have been using IRC for decades.

Freenode is an IRC network used to discuss peer-directed projects.

cited from Wikipedia.

It's recommended to have some background about IRC and Freenode on Wikipedia:

2. Popular IRC Clients

Lots of IRC clients can be found on Google. Here, we just summarize some from IRC Wikipedia:

3. 7 Best IRC Clients for Linux

Please just refer to https://www.tecmint.com/best-irc-clients-for-linux/.

4. WeeChat in Linux

4.1 Installation

We can of course directly download the package from WeeChat Download and have it installed from source. However, installing WeeChat from the repository is recommended.

Step 1: Add the WeeChat repository

$ sudo sh -c 'echo "deb https://weechat.org/ubuntu $(lsb_release -cs) main" >> /etc/apt/sources.list.d/WeeChat.list'

Step 2: Add WeeChat repository key

$ sudo apt-key adv --keyserver keys.gnupg.net --recv-keys 11E9DE8848F2B65222AA75B8D1820DB22A11534E

Step 3: WeeChat Installation

$ sudo apt update
$ sudo apt install weechat

4.2 How to use WeeChat

Refer to WeeChat Official Documentation.

Step 1: Run WeeChat

$ weechat

will display:

WeeChat

And we FIRST type in /help

WeeChat

Step 2: Connect Freenode

/connect freenode
WeeChat

Step 3: Join a Channel

Here, we select channel OpenCV as an example:

/join #opencv
WeeChat

Step 4: Close a Channel

/close

5. Pidgin (Optional)

We're NOT going to talk about it in this blog.

Building a DIY home security camera is comparatively simple using a low-cost embedded board with Linux installed. There are ONLY 6 steps in total.

STEP 1: Install Motion from the Repository

$ sudo apt install motion

STEP 2: start_motion_daemon=yes

$ sudo vim /etc/default/motion

Change start_motion_daemon from no to yes.

STEP 3: stream_localhost off

$ sudo vim /etc/motion/motion.conf

Change stream_localhost on to stream_localhost off.

STEP 4: Restart Motion Service

Run the following command:

$ sudo /etc/init.d/motion restart

STEP 5: Run Motion

$ sudo motion

STEP 6: Video Surveillance Remotely

Open up Chrome and type in the board's IP address followed by :8081 to show the captured video at 1 FPS. In my case, that is 192.168.0.86:8081, and the video effect is as follows:

Motion Video Surveillance
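If you prefer to consume the stream programmatically instead of in a browser, OpenCV can usually open Motion's MJPEG stream directly. The snippet below is a sketch assuming the same IP address and the default port 8081:

import cv2

# Motion serves an MJPEG stream on port 8081 by default; adjust the IP to your board.
cap = cv2.VideoCapture("http://192.168.0.86:8081")

while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("Motion stream", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):   # press 'q' to quit
        break

cap.release()
cv2.destroyAllWindows()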

Finally, I’ve got some time to write something about PyTorch, a popular deep learning tool. We suppose you have had fundamental understanding of Anaconda Python, created Anaconda virtual environment (in my case, it’s named condaenv), and had PyTorch installed successfully under this Anaconda virtual environment condaenv.

Since I’m using Visual Studio Code to test my Python code (of course, you can use whichever coding tool you like), I suppose you’ve already had your own coding tool configured. Now, you are ready to go!

In my case, I'm giving a tutorial, instead of coding by myself. Therefore, Jupyter Notebook is selected as my presentation tool. So, I'll demonstrate everything both in .py files and in .ipynb files. All code can be found at Longer Vision PyTorch_Examples. However, ONLY the Jupyter Notebook presentation is given in my blogs. Therefore, I suppose you've already successfully installed Jupyter Notebook, as well as any other required packages, under your Anaconda virtual environment condaenv.

Now, let's pop up the Jupyter Notebook server.

Anaconda Python Virtual Environment

Clearly, Anaconda comes with the NEWEST Python version. So far, it is Python 3.6.4.

PART A: Hello PyTorch

Our very FIRST test code is only 6 lines long, including an empty line:
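The notebook cell itself is not reproduced here; a minimal sketch consistent with the version output below (6 lines, including the empty one) would be:

import torch
import torchvision

print("Hello PyTorch!")
print(torch.__version__)        # 0.3.1 in this post
print(torchvision.__version__)  # 0.2.0 in this post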

After popping up the Jupyter Notebook server and clicking Run, you will see this view:

Jupyter Notebook Hello PyTorch

Clearly, the current torch version is 0.3.1, and the torchvision version is 0.2.0.

PART B: Convolutional Neural Network in PyTorch

1. General Concepts

1) Online Resource

We are NOT going to discuss the background details of Convolutional Neural Networks. A lot of online frameworks/courses are available for you to catch up:

Several fabulous blogs are strongly recommended, including:

2) Architecture

One picture for all (cited from Convolutional Neural Network )

Typical Convolutional Neural Network Architecture

3) Back Propagation (BP)

The ONLY CNN concept we want to emphasize here is Back Propagation, which has been widely used in traditional neural networks and applies in the same way to the final Fully Connected Layer of a CNN. You are welcome to get some more details from https://brilliant.org/wiki/backpropagation/.

Pre-defined Variables

Training Database
  • \(X=\{(\vec{x_1},\vec{y_1}),(\vec{x_2},\vec{y_2}),\dots,(\vec{x_N},\vec{y_N})\}\): the training dataset \(X\) is composed of \(N\) pairs of training samples, where \((\vec{x_i},\vec{y_i}),1 \le i \le N\)
  • \((\vec{x_i},\vec{y_i}),1 \le i \le N\): the \(i\)th training sample pair
  • \(\vec{x_i}\): the \(i\)th input vector (can be an original image, can also be a vector of extracted features, etc.)
  • \(\vec{y_i}\): the \(i\)th desired output vector (can be a one-hot vector, can also be a scalar, which is a 1-element vector)
  • \(\hat{\vec{y_i} }\): the \(i\)th output vector from the neural network by using the \(i\)th input vector \(\vec{x_i}\)
  • \(N\): size of dataset, namely, how many training samples
  • \(w_{ij}^k\): in the neural network’s architecture, at level \(k\), the weight of the node connecting the \(i\)th input and the \(j\)th output
  • \(\theta\): a generalized notation for any parameter inside the neural network, which can be looked on as any element from the set of \(w_{ij}^k\).
Evaluation Function
  • Mean squared error (MSE): \[E(X, \theta) = \frac{1}{2N} \sum_{i=1}^N \left(||\hat{\vec{y_i} } - \vec{y_i}||\right)^2\]
  • Cross entropy: \[E(X, prob) = - \frac{1}{N} \sum_{i=1}^N log_2\left({prob(\vec{y_i})}\right)\]

Particularly, for binary classification, logistic regression is often adopted. A logistic function is defined as:

\[f(x)=\frac{1}{(1+e^{-x})}\]

In such a case, the loss function can easily be derived as:

\[E(X,W) = - \frac{1}{N} \sum_{i=1}^N [y_i log({\hat{y_i} })+(1-y_i)log(1-\hat{y_i})]\]

where

\[y_i=0/1\]

\[\hat{y_i} \equiv g(\vec{w} \cdot \vec{x_i}) = \frac{1}{(1+e^{-\vec{w} \cdot \vec{x_i} })}\]

Some PyTorch explanation can be found at torch.nn.CrossEntropyLoss.
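As a tiny sketch of how torch.nn.CrossEntropyLoss is used (assuming PyTorch 0.3.x, hence the Variable wrapper; on newer versions plain tensors work as well), note that it consumes raw, unnormalized scores together with integer class labels:

import torch
import torch.nn as nn
from torch.autograd import Variable   # needed for the 0.3.x API used in this post

# 3 samples, 5 classes: raw (unnormalized) scores as they come out of a network
logits = Variable(torch.randn(3, 5))
# target class indices (not one-hot vectors)
targets = Variable(torch.LongTensor([1, 0, 4]))

criterion = nn.CrossEntropyLoss()     # internally applies LogSoftmax + NLLLoss
loss = criterion(logits, targets)
print(loss)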

BP Deduction Conclusions

Only MSE is considered here (Please refer to https://brilliant.org/wiki/backpropagation/):

\[ \frac {\partial {E(X,\theta)} } {\partial w_{ij}^k} = \frac {1}{N} \sum_{d=1}^N \frac {\partial}{\partial w_{ij}^k} \Big( \frac{1}{2} \left(||\hat{\vec{y_d} } - \vec{y_d}||\right)^2 \Big) = \frac{1}{N} \sum_{d=1}^N {\frac{\partial E_d}{\partial w_{ij}^k} } \]

The weight update is then determined as:

\[ \Delta w_{ij}^k = - \alpha \frac {\partial {E(X,\theta)} } {\partial w_{ij}^k} \]
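As a tiny numeric illustration of this update rule (a toy NumPy sketch, not part of the original derivation), consider fitting a single weight \(w\) in \(\hat{y} = w x\) by gradient descent on the MSE:

import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])         # ground truth follows y = 2x
w = 0.5                               # initial weight
alpha = 0.1                           # learning rate

for step in range(10):
    y_hat = w * x                     # forward pass
    grad = np.mean((y_hat - y) * x)   # dE/dw for E = 1/(2N) * sum (y_hat - y)^2
    w = w - alpha * grad              # delta_w = -alpha * dE/dw
    print(step, w)                    # w converges towards 2.0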

2. PyTorch Implementation of Canonical CNN

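The original implementation can be found in the Longer Vision PyTorch_Examples repository mentioned above and is not reproduced here. As a placeholder, below is a minimal LeNet-style CNN sketch in PyTorch; the module and layer names are my own and not necessarily those used in the original code, and on PyTorch 0.3.x the inputs would additionally need to be wrapped in torch.autograd.Variable.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleCNN(nn.Module):
    # A minimal LeNet-style CNN for 28x28 grayscale images (e.g. MNIST).
    def __init__(self, num_classes=10):
        super(SimpleCNN, self).__init__()
        self.conv1 = nn.Conv2d(1, 6, kernel_size=5, padding=2)   # -> 6 x 28 x 28
        self.conv2 = nn.Conv2d(6, 16, kernel_size=5)             # -> 16 x 10 x 10
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, num_classes)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)   # -> 6 x 14 x 14
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)   # -> 16 x 5 x 5
        x = x.view(x.size(0), -1)                    # flatten to 400 features
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        return self.fc3(x)                           # raw scores for CrossEntropyLoss

model = SimpleCNN()
dummy = torch.randn(4, 1, 28, 28)     # a batch of 4 fake images
print(model(dummy).size())            # torch.Size([4, 10])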