It seems TensorFlow evolves pretty fast. Today we are testing object tracking based on TensorFlow.
1. Environment
➜ ~ python
Python 3.6.6 (default, Sep 12 2018, 18:26:19)
[GCC 8.0.1 20180414 (experimental) [trunk revision 259383]] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> tf.__version__
'1.12.0-rc0'
>>> import cv2
>>> cv2.__version__
'3.4.3'
2. Object Tracking
2.1 Concepts
There are several fundamental concepts to be re-emphasized (here, we take a single object of interest as our example; there may be multiple such objects). A small detection-then-tracking sketch follows this list.
- detection: You don't know whether the object of interest is in the field of view at all; detection answers that question and, if the object is present, gives its location.
- tracking: You know where the object of interest was. Based on that prior knowledge, you determine where the object is next.
- location: Both detection and tracking can be viewed as locating the object of interest.
- recognition: Only after the object of interest has been located can more detailed information about it be recognized.
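To make the detection-vs-tracking distinction concrete, here is a minimal OpenCV sketch (not from the original post): a detector is assumed to have located the object once and returned an initial bounding box, and a KCF tracker then follows the object frame by frame. It assumes the opencv-contrib tracking module is available; the video file and box values are placeholders.

import cv2

# Assumption: some detector (e.g. a TensorFlow model) has already located
# the object once and returned an initial bounding box (x, y, w, h).
initial_bbox = (287, 23, 86, 320)     # placeholder values

cap = cv2.VideoCapture("video.mp4")   # placeholder video file
ok, frame = cap.read()

# Tracking: from now on we only update the known object's location.
tracker = cv2.TrackerKCF_create()     # requires the opencv-contrib tracking module
tracker.init(frame, initial_bbox)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, bbox = tracker.update(frame)
    if found:
        x, y, w, h = [int(v) for v in bbox]
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == 27:   # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()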
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 18.04 LTS
Release:        18.04
Codename:       bionic
$ gcc --version
gcc (Ubuntu 7.3.0-16ubuntu3) 7.3.0
Copyright (C) 2017 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
$ ./configure
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.google.protobuf.UnsafeUtil (file:~/.cache/bazel/_bazel_jiapei/install/ce085f519b017357185750fe457b4648/_embedded_binaries/A-server.jar) to field java.nio.Buffer.address
WARNING: Please consider reporting this to the maintainers of com.google.protobuf.UnsafeUtil
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
WARNING: --batch mode is deprecated. Please instead explicitly shut down your Bazel server using the command "bazel shutdown".
You have bazel 0.15.0 installed.
Please specify the location of python. [Default is /usr/bin/python]:
Found possible Python library paths:
  /usr/local/lib/python3.6/dist-packages
  /usr/lib/python3/dist-packages
Please input the desired Python library path to use. Default is [/usr/local/lib/python3.6/dist-packages]
Do you wish to build TensorFlow with jemalloc as malloc support? [Y/n]: Y
jemalloc as malloc support will be enabled for TensorFlow.

Do you wish to build TensorFlow with Google Cloud Platform support? [Y/n]: Y
Google Cloud Platform support will be enabled for TensorFlow.

Do you wish to build TensorFlow with Hadoop File System support? [Y/n]: Y
Hadoop File System support will be enabled for TensorFlow.

Do you wish to build TensorFlow with Amazon AWS Platform support? [Y/n]: Y
Amazon AWS Platform support will be enabled for TensorFlow.

Do you wish to build TensorFlow with Apache Kafka Platform support? [Y/n]: Y
Apache Kafka Platform support will be enabled for TensorFlow.

Do you wish to build TensorFlow with XLA JIT support? [y/N]: N
No XLA JIT support will be enabled for TensorFlow.

Do you wish to build TensorFlow with GDR support? [y/N]: N
No GDR support will be enabled for TensorFlow.

Do you wish to build TensorFlow with VERBS support? [y/N]: N
No VERBS support will be enabled for TensorFlow.

Do you wish to build TensorFlow with OpenCL SYCL support? [y/N]: N
No OpenCL SYCL support will be enabled for TensorFlow.

Do you wish to build TensorFlow with CUDA support? [y/N]: y
CUDA support will be enabled for TensorFlow.

Please specify the CUDA SDK version you want to use. [Leave empty to default to CUDA 9.0]: 9.2

Please specify the location where CUDA 9.2 toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:

Please specify the cuDNN version you want to use. [Leave empty to default to cuDNN 7.0]: 7.1.4

Please specify the location where cuDNN 7 library is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:

Do you wish to build TensorFlow with TensorRT support? [y/N]: N
No TensorRT support will be enabled for TensorFlow.

Please specify the NCCL version you want to use. [Leave empty to default to NCCL 1.3]:

Please specify a list of comma-separated Cuda compute capabilities you want to build with. You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.
Please note that each additional compute capability significantly increases your build time and binary size. [Default is: 5.2]

Do you want to use clang as CUDA compiler? [y/N]: N
nvcc will be used as CUDA compiler.

Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]:

Do you wish to build TensorFlow with MPI support? [y/N]: N
No MPI support will be enabled for TensorFlow.

Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native]:

Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]: N
Not configuring the WORKSPACE for Android builds.

Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See tools/bazel.rc for more details.
    --config=mkl            # Build with MKL support.
    --config=monolithic     # Config for mostly static monolithic build.
Configuration finished
$ bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
Wed Jul 11 00:09:09 PDT 2018 : === Preparing sources in dir: /tmp/tmp.5X0zsqfxYo
/media/jiapei/Data/Downloads/machinelearning/deeplearning/tensorflow/tensorflow
/media/jiapei/Data/Downloads/machinelearning/deeplearning/tensorflow/tensorflow
/media/jiapei/Data/Downloads/machinelearning/deeplearning/tensorflow/tensorflow
Wed Jul 11 00:09:35 PDT 2018 : === Building wheel
warning: no files found matching '*.dll' under directory '*'
warning: no files found matching '*.lib' under directory '*'
warning: no files found matching '*.h' under directory 'tensorflow/include/tensorflow'
warning: no files found matching '*' under directory 'tensorflow/include/Eigen'
warning: no files found matching '*.h' under directory 'tensorflow/include/google'
warning: no files found matching '*' under directory 'tensorflow/include/third_party'
warning: no files found matching '*' under directory 'tensorflow/include/unsupported'
Wed Jul 11 00:10:58 PDT 2018 : === Output wheel file is in: /tmp/tensorflow_pkg
Let's have a look at what's been built:
$ ls /tmp/tensorflow_pkg/
tensorflow-1.9.0rc0-cp36-cp36m-linux_x86_64.whl
4.2 Pip Installation
Now, let's install tensorflow-1.9.0rc0-cp36-cp36m-linux_x86_64.whl.
$ pip3 install /tmp/tensorflow_pkg/tensorflow-1.9.0rc0-cp36-cp36m-linux_x86_64.whl
Processing /tmp/tensorflow_pkg/tensorflow-1.9.0rc0-cp36-cp36m-linux_x86_64.whl
Requirement already satisfied: wheel>=0.26 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.9.0rc0) (0.31.1)
Requirement already satisfied: gast>=0.2.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.9.0rc0) (0.2.0)
Requirement already satisfied: numpy>=1.13.3 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.9.0rc0) (1.14.5)
Requirement already satisfied: astor>=0.6.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.9.0rc0) (0.7.1)
Requirement already satisfied: six>=1.10.0 in ./.local/lib/python3.6/site-packages (from tensorflow==1.9.0rc0) (1.11.0)
Requirement already satisfied: grpcio>=1.8.6 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.9.0rc0) (1.13.0)
Requirement already satisfied: absl-py>=0.1.6 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.9.0rc0) (0.2.2)
Requirement already satisfied: tensorboard<1.9.0,>=1.8.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.9.0rc0) (1.8.0)
Requirement already satisfied: termcolor>=1.1.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.9.0rc0) (1.1.0)
Requirement already satisfied: setuptools<=39.1.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.9.0rc0) (39.1.0)
Requirement already satisfied: protobuf>=3.6.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.9.0rc0) (3.6.0)
Requirement already satisfied: bleach==1.5.0 in /usr/local/lib/python3.6/dist-packages (from tensorboard<1.9.0,>=1.8.0->tensorflow==1.9.0rc0) (1.5.0)
Requirement already satisfied: html5lib==0.9999999 in /usr/local/lib/python3.6/dist-packages (from tensorboard<1.9.0,>=1.8.0->tensorflow==1.9.0rc0) (0.9999999)
Requirement already satisfied: werkzeug>=0.11.10 in ./.local/lib/python3.6/site-packages (from tensorboard<1.9.0,>=1.8.0->tensorflow==1.9.0rc0) (0.14.1)
Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.6/dist-packages (from tensorboard<1.9.0,>=1.8.0->tensorflow==1.9.0rc0) (2.6.11)
Successfully installed tensorflow-1.9.0rc0
Let's test whether TensorFlow has been successfully installed.
4.3 Check Tensorflow
$ python
Python 3.6.5 (default, Apr  1 2018, 05:46:30)
[GCC 7.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> tf.__version__
'1.9.0-rc0'
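Since this build was configured with CUDA support, it's also worth checking that the GPU is actually visible to TensorFlow. A minimal sketch, not part of the original check, using the TF 1.x session API:

import tensorflow as tf

# Report whether TensorFlow can see a CUDA-capable GPU.
print("Built with CUDA:", tf.test.is_built_with_cuda())
print("GPU available:", tf.test.is_gpu_available())

# Run a trivial op; log_device_placement shows which device executes it.
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    a = tf.constant([1.0, 2.0, 3.0])
    print(sess.run(tf.reduce_sum(a)))  # expected: 6.0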
5. Keras Installation
After successfully checking out Keras, we can easily install it with the command python setup.py install.
Installed /usr/local/lib/python3.6/dist-packages/Keras-2.2.0-py3.6.egg
Processing dependencies for Keras==2.2.0
Searching for Keras-Preprocessing==1.0.1
Best match: Keras-Preprocessing 1.0.1
Adding Keras-Preprocessing 1.0.1 to easy-install.pth file

Using /usr/local/lib/python3.6/dist-packages
Searching for Keras-Applications==1.0.2
Best match: Keras-Applications 1.0.2
Adding Keras-Applications 1.0.2 to easy-install.pth file

Using /usr/local/lib/python3.6/dist-packages
Searching for h5py==2.8.0
Best match: h5py 2.8.0
Adding h5py 2.8.0 to easy-install.pth file

Using /usr/local/lib/python3.6/dist-packages
Searching for PyYAML==3.13
Best match: PyYAML 3.13
Adding PyYAML 3.13 to easy-install.pth file

Using /usr/local/lib/python3.6/dist-packages
Searching for six==1.11.0
Best match: six 1.11.0
Adding six 1.11.0 to easy-install.pth file

Using /usr/lib/python3/dist-packages
Searching for scipy==1.1.0
Best match: scipy 1.1.0
Adding scipy 1.1.0 to easy-install.pth file

Using /usr/local/lib/python3.6/dist-packages
Searching for numpy==1.14.5
Best match: numpy 1.14.5
Adding numpy 1.14.5 to easy-install.pth file

Using /usr/local/lib/python3.6/dist-packages
Finished processing dependencies for Keras==2.2.0
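As with TensorFlow, a quick sanity check (not part of the original output) confirms that Keras imports and is using the TensorFlow backend:

import keras                      # should print "Using TensorFlow backend."
print(keras.__version__)          # expected: 2.2.0
print(keras.backend.backend())    # expected: 'tensorflow'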
That's all for today. I think Python is seriously cool and handy indeed. I myself would still recommend PyTorch, but TensorFlow and Keras also seem to be very popular in North America.
I'm quite busy today, so I'll just post some videos to show the performance of... well, I'm just not telling you 😆
Click on the pictures to open an uploaded Facebook video.
1. Key Point Localization
1.1 Nobody
- Yeah, it's ME, 10 YEARS ago. How time flies...
1.2 FRANCK - What a Canonical Annotated Face Dataset
I just noticed this news about ImageAI today, so I had it tested for fun. I don't want to talk about ImageAI too much; you can follow the author's GitHub, and it shouldn't be that hard to have everything done in minutes.
1. Preparation
1.1 Prerequisite Dependencies
As described on ImageAI's GitHub, multiple Python dependencies need to be installed:
Tensorflow
Numpy
SciPy
OpenCV
Pillow
Matplotlib
h5py
Keras
All packages can be easily installed with the command:
pip3 install PackageName
Afterwards, ImageAI itself can be installed with a single pip command; refer to the installation instructions on the author's GitHub for the exact package or wheel to use.
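For context, here is a rough sketch of what a FirstPrediction.py-style script looks like with ImageAI's prediction API; it is not copied from the original post, and the model weight file name and image path are assumptions, so follow the author's tutorial for the exact script:

from imageai.Prediction import ImagePrediction

predictor = ImagePrediction()
predictor.setModelTypeAsResNet()
# Assumption: the pretrained ResNet50 weights were downloaded beforehand.
predictor.setModelPath("resnet50_weights_tf_dim_ordering_tf_kernels.h5")
predictor.loadModel()

# Classify a local image and print the top-5 labels with probabilities.
predictions, probabilities = predictor.predictImage("car.jpg", result_count=5)
for label, prob in zip(predictions, probabilities):
    print(label, ":", prob)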
$ python FirstPrediction.py
2018-07-02 18:12:09.275412: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
convertible : 52.45954394340515
sports_car : 37.61279881000519
pickup : 3.1751133501529694
car_wheel : 1.817503571510315
minivan : 1.748703233897686
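Likewise, a rough FirstDetection.py-style sketch using ImageAI's detection API (again, the RetinaNet weight file and image names are assumptions, not the author's exact script):

from imageai.Detection import ObjectDetection

detector = ObjectDetection()
detector.setModelTypeAsRetinaNet()
# Assumption: the pretrained RetinaNet COCO weights were downloaded beforehand.
detector.setModelPath("resnet50_coco_best_v2.0.1.h5")
detector.loadModel()

# Detect objects in an image and write an annotated copy next to it.
detections = detector.detectObjectsFromImage(input_image="street.jpg",
                                             output_image_path="street_detected.jpg")
for det in detections:
    print(det["name"], ":", det["percentage_probability"])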
$ python FirstDetection.py
Using TensorFlow backend.
2018-07-02 18:23:09.634037: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2018-07-02 18:23:11.744790: W tensorflow/core/framework/allocator.cc:101] Allocation of 68300800 exceeds 10% of system memory.
2018-07-02 18:23:11.958081: W tensorflow/core/framework/allocator.cc:101] Allocation of 68403200 exceeds 10% of system memory.
2018-07-02 18:23:12.174739: W tensorflow/core/framework/allocator.cc:101] Allocation of 68403200 exceeds 10% of system memory.
2018-07-02 18:23:12.433540: W tensorflow/core/framework/allocator.cc:101] Allocation of 68403200 exceeds 10% of system memory.
2018-07-02 18:23:12.694631: W tensorflow/core/framework/allocator.cc:101] Allocation of 68403200 exceeds 10% of system memory.
2018-07-02 18:23:16.267111: W tensorflow/core/framework/allocator.cc:101] Allocation of 64224000 exceeds 10% of system memory.
2018-07-02 18:23:16.370939: W tensorflow/core/framework/allocator.cc:101] Allocation of 64224000 exceeds 10% of system memory.
2018-07-02 18:23:16.403353: W tensorflow/core/framework/allocator.cc:101] Allocation of 67435200 exceeds 10% of system memory.
person : 55.596935749053955
person : 66.90954566001892
person : 67.96322464942932
person : 50.80411434173584
bicycle : 64.87574577331543
bicycle : 72.0929205417633
person : 80.02063035964966
person : 85.82872748374939
truck : 59.56767797470093
person : 66.69963002204895
person : 79.37889695167542
person : 64.81361389160156
bus : 65.35580158233643
bus : 97.16107249259949
bus : 68.20474863052368
truck : 67.65954494476318
truck : 77.73774266242981
bus : 69.96590495109558
truck : 69.54039335250854
car : 61.26518249511719
car : 59.965676069259644
And, under the program folder, you will get an output image:
$ python FirstDetection.py
Using TensorFlow backend.
2018-07-02 18:25:24.919351: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
person : 53.27633619308472
person : 52.71329879760742
person : 63.67729902267456
person : 55.00321388244629
person : 74.53054189682007
person : 51.54905915260315
motorcycle : 59.057921171188354
bus : 93.79504919052124
bus : 86.21828556060791
bus : 77.143394947052
person : 59.69809293746948
car : 71.79147601127625
car : 60.15858054161072
person : 62.758803367614746
person : 58.786213397979736
person : 76.49624943733215
car : 56.977421045303345
person : 67.86248683929443
person : 50.977784395217896
person : 52.3215651512146
motorcycle : 52.81376242637634
person : 76.79281234741211
motorcycle : 74.65972304344177
person : 55.96961975097656
person : 68.15704107284546
motorcycle : 56.21282458305359
bicycle : 71.78951501846313
motorcycle : 69.68616843223572
bicycle : 91.09067916870117
motorcycle : 83.16765427589417
motorcycle : 61.57424449920654
And, under the program folder, you will get an output image:
It has been quite a while since my VOSM was last updated. My bad, for sure. But today I have it updated, and VOSM-0.3.5 is released. Just refer to the following 3 pages on GitHub:
Downloading videos from YouTube is sometimes required. Some Chrome plugins can be used to download YouTube videos, such as Youtube Downloader. Other methods can also be found in various resources, such as WikiHow.
In this blog, I'm going to cite (copy and paste) from WikiHow how to download YouTube videos by using VLC.
STEP 1: Copy the YouTube URL
Find the YouTube video that you would like to download, and copy its URL.
STEP 2: Broadcast the YouTube Video in VLC
Paste the URL under VLC Media Player->Media->Open Network Stream->Network Tab->Please enter a network URL:, and click Play:
Video Broadcasting
STEP 3: Get the Real Location of the YouTube Video
Then, copy the URL under Tools->Codec Information->Codec->Location:
We can of course directly download the package from WeeChat Download and install it from source. However, installing WeeChat from the repository is recommended.
Finally, I've got some time to write something about PyTorch, a popular deep learning tool. We suppose you have a fundamental understanding of Anaconda Python, have created an Anaconda virtual environment (in my case, it's named condaenv), and have installed PyTorch successfully under this Anaconda virtual environment condaenv.
Since I’m using Visual
Studio Code to test my Python code (of course, you can use whichever
coding tool you like), I suppose you’ve already had your own coding tool
configured. Now, you are ready to go!
In my case, I'm giving a tutorial instead of just coding by myself. Therefore, Jupyter Notebook is selected as my presentation tool. I'll demonstrate everything both in .py files and in .ipynb files. All code can be found at Longer Vision PyTorch_Examples. However, ONLY the Jupyter Notebook presentation is given in my blogs. Therefore, I suppose you've already successfully installed Jupyter Notebook, as well as any other required packages, under your Anaconda virtual environment condaenv.
The ONLY concept in CNNs we want to emphasize here is Back Propagation, which has been widely used in traditional neural networks and applies in essentially the same way to the final Fully Connected Layer of a CNN. You are welcome to get more details from https://brilliant.org/wiki/backpropagation/.
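Since this post is about PyTorch, here is a minimal, hypothetical autograd sketch (not from the original notebooks) showing back propagation in action: the forward pass builds a computation graph, and loss.backward() fills in exactly the gradients a hand-derived back-propagation formula would give.

import torch

# A tiny "network": one fully connected layer, like the final FC layer of a CNN.
x = torch.randn(4, 3)                  # 4 samples, 3 input features
y = torch.randn(4, 2)                  # desired outputs
fc = torch.nn.Linear(3, 2)             # weights w_ij and a bias, all requiring grad

y_hat = fc(x)                          # forward pass
loss = torch.nn.functional.mse_loss(y_hat, y)

loss.backward()                        # back propagation: d(loss)/d(parameters)
print(fc.weight.grad.shape)            # torch.Size([2, 3])
print(fc.bias.grad.shape)              # torch.Size([2])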
Pre-defined Variables
Training Database
- \(X=\{(\vec{x_1},\vec{y_1}),(\vec{x_2},\vec{y_2}),\dots,(\vec{x_N},\vec{y_N})\}\): the training dataset \(X\) is composed of \(N\) pairs of training samples \((\vec{x_i},\vec{y_i}), 1 \le i \le N\)
- \((\vec{x_i},\vec{y_i}), 1 \le i \le N\): the \(i\)th training sample pair
- \(\vec{x_i}\): the \(i\)th input vector (it can be an original image, or a vector of extracted features, etc.)
- \(\vec{y_i}\): the \(i\)th desired output vector (it can be a one-hot vector, or a scalar, which is a 1-element vector)
- \(\hat{\vec{y_i}}\): the \(i\)th output vector produced by the neural network from the \(i\)th input vector \(\vec{x_i}\)
- \(N\): the size of the dataset, namely, how many training samples there are
- \(w_{ij}^k\): in the neural network's architecture, the weight at layer \(k\) of the connection between the \(i\)th input node and the \(j\)th output node
- \(\theta\): a generalized notation for any parameter inside the neural network, which can be regarded as any element of the set of \(w_{ij}^k\)
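Using these symbols, and assuming a mean squared error loss for illustration (any differentiable loss works the same way), back propagation computes \(\partial E / \partial \theta\) for every parameter, and gradient descent then updates each parameter:

\[
E(\theta) = \frac{1}{2N}\sum_{i=1}^{N}\left\lVert \hat{\vec{y_i}} - \vec{y_i} \right\rVert^2,
\qquad
\theta \leftarrow \theta - \eta\,\frac{\partial E}{\partial \theta}
\]

where \(\eta\) is the learning rate, a symbol introduced here only for the update rule and not part of the list above.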