Today seems to be the FIRST big day of 2019! So many important packages (including operating systems) have released NEW updates. Alright, let's take a look at what I've done today.

Updates Today

Ubuntu 18.04.2

➜  ~ uname -r
4.18.0-15-generic
➜ ~ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.2 LTS
Release: 18.04
Codename: bionic

NVIDIA 418.43

➜  ~ nvidia-smi
Thu Feb 28 17:15:24 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.43 Driver Version: 418.43 CUDA Version: 10.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 980M Off | 00000000:01:00.0 On | N/A |
| N/A 45C P8 8W / N/A | 659MiB / 4035MiB | 1% Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 1087 G /usr/lib/xorg/Xorg 16MiB |
| 0 1146 G /usr/bin/gnome-shell 48MiB |
| 0 1442 G /usr/lib/xorg/Xorg 263MiB |
| 0 1574 G /usr/bin/gnome-shell 166MiB |
| 0 2254 G ...quest-channel-token=7723035104982464121 113MiB |
| 0 8140 G /proc/self/exe 43MiB |
+-----------------------------------------------------------------------------+

CUDA 10.1

➜  ~ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Fri_Feb__8_19:08:17_PST_2019
Cuda compilation tools, release 10.1, V10.1.105
➜ ~ deviceQuery
deviceQuery Starting...

CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "GeForce GTX 980M"
CUDA Driver Version / Runtime Version 10.1 / 10.1
CUDA Capability Major/Minor version number: 5.2
Total amount of global memory: 4035 MBytes (4231331840 bytes)
(12) Multiprocessors, (128) CUDA Cores/MP: 1536 CUDA Cores
GPU Max Clock rate: 1126 MHz (1.13 GHz)
Memory Clock rate: 2505 Mhz
Memory Bus Width: 256-bit
L2 Cache Size: 2097152 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
Maximum Layered 1D Texture Size, (num) layers 1D=(16384), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(16384, 16384), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 2048
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 2 copy engine(s)
Run time limit on kernels: Yes
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
Device supports Unified Addressing (UVA): Yes
Device supports Compute Preemption: No
Supports Cooperative Kernel Launch: No
Supports MultiDevice Co-op Kernel Launch: No
Device PCI Domain ID / Bus ID / location ID: 0 / 1 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 10.1, CUDA Runtime Version = 10.1, NumDevs = 1
Result = PASS

cuDNN 7.5

➜  ~ cat /usr/local/cuda/include/cudnn.h | grep CUDNN_MAJOR -A 2
#define CUDNN_MAJOR 7
#define CUDNN_MINOR 5
#define CUDNN_PATCHLEVEL 0
--
#define CUDNN_VERSION (CUDNN_MAJOR * 1000 + CUDNN_MINOR * 100 + CUDNN_PATCHLEVEL)

#include "driver_types.h"

Bugs Today

The Error

NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running.

Solution

The solution is inside the BIOS. Please refer to How do I disable UEFI Secure Boot?.

Yup... The solution is just to switch Secure Boot from Enabled to Disabled in the BIOS.
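
To confirm the change took effect without rebooting back into the BIOS, the Secure Boot state can be queried from userspace; a sketch, assuming the mokutil package is installed:

import subprocess

# Query the UEFI Secure Boot state (requires mokutil to be installed).
out = subprocess.run(["mokutil", "--sb-state"],
                     stdout=subprocess.PIPE,
                     universal_newlines=True).stdout
print(out.strip())  # expect 'SecureBoot disabled' after the BIOS change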

It seems Gazebo is NOW a part of Ignition Robotics??? Please take a look at my Ignition Robotics Bitbucket issue What's the relationship between ign-gazebo and gazebo? And today, let's build Ignition Robotics from source.

How to Build Ignition Robotics

Use Branch gz11

NOTE: In order to have the LATEST version built, please make sure you always give priority to the gz11 branch rather than the default branch.

For instance, ign-cmake:

git clone https://bitbucket.org/ignitionrobotics/ign-cmake/src/gz11/

Important Things

Two things to emphasize before building: 1. To have all the packages at Ignition Robotics Development Libraries built successfully, we need to build the libraries in a particular sequence. 2. Some required 3rd-party libraries are additionally installed while building the Ignition Robotics Development Libraries, including:

  • gdal
  • DART
  • OpenSceneGraph
  • Ogre2
  • vrpn
  • OSVR: Please make sure unifiedvideoinertialtracker is excluded from the build for now.
    #add_subdirectory(unifiedvideoinertialtracker)
  • folly: required by unifiedvideoinertialtracker, which requires boost built with C++14. For now, this is also excluded from the build.

Build Ignition Robotics

Now, let's start building the Ignition Robotics Development Libraries; a scripted sketch of the whole sequence follows the list below.

  1. ign-cmake: BUILD_TESTING OFF
  2. ign-math: ruby-dev needs to be installed FIRST
  3. ign-common: libgts needs to be installed FIRST
  4. ign-tools
  5. ign-msgs
  6. ign-transport: CXX_FLAGS=-I/usr/include/c++/7 and comment out line #35 of file test_config.h.
    //#include <filesystem> // line 35
  7. ign-rendering: in CMakeLists.txt line 62, change ogre to ogre2
  8. ign-plugin
  9. ign-gui
  10. ign-fuel-tools
  11. SDFormat8: a C++17-capable compiler is required for building
  12. ign-physics: Since I'm using DART 7.0, expect to fix some trivial DartLoader bugs in the file DARTDoublePendulum.cc.
  13. ign-sensors
  14. ign-gazebo
  15. gazebo: Going to be deprecated soon
    -- BUILD WARNINGS
    -- Oculus Rift support will be disabled.
    -- END BUILD WARNINGS
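
As promised above, a scripted sketch of this build sequence. The repository directory names and cmake flags here are illustrative assumptions, and the per-library tweaks from the list (ruby-dev, CXX_FLAGS, ogre2, ...) still have to be applied by hand:

import subprocess

# Build order from the list above. Assumes every library has already been
# cloned on its gz11 branch (see the note earlier) into the current directory.
REPOS = [
    "ign-cmake", "ign-math", "ign-common", "ign-tools", "ign-msgs",
    "ign-transport", "ign-rendering", "ign-plugin", "ign-gui",
    "ign-fuel-tools", "sdformat", "ign-physics", "ign-sensors", "ign-gazebo",
]

for repo in REPOS:
    # -DBUILD_TESTING=OFF mirrors the ign-cmake note above.
    subprocess.check_call(
        "mkdir -p build && cd build && cmake .. -DBUILD_TESTING=OFF"
        " && make -j4 && sudo make install",
        cwd=repo, shell=True)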

Some Bugs I Reported

Some Tests

ign-rendering via Ogre2 Test

21:31:15: Starting .../IgnitionRobotics/Ogre2Demo...
[Wrn] [ColladaLoader.cc:2070] Triangle input semantic: 'COLOR' is currently not supported
[Msg] Loading plugin [ignition-rendering1-ogre2]
===============================
TAB - Switch render engines
ESC - Exit
===============================
Selected visual at position: 309 294: pump
Selected visual at position: 458 307: sphere
Selected visual at position: 515 327: pump
Selected visual at position: 447 365: pump
Selected visual at position: 497 427: plane
Selected visual at position: 197 328: cylinder
Selected visual at position: 241 394: cylinder
Selected visual at position: 130 340: duck
Selected visual at position: 419 444: duck
No visual found at position: 503 296
Selected visual at position: 505 350: pump
Selected visual at position: 412 326: pump
Selected visual at position: 385 230: duck
Selected visual at position: 361 264: box
Selected visual at position: 396 270: sphere
Selected visual at position: 322 335: pump
Selected visual at position: 359 352: plane
Selected visual at position: 539 422: plane
Selected visual at position: 406 366: pump
Selected visual at position: 431 374: pump
Selected visual at position: 510 422: cylinder
Ogre2 Demo

ign-gazebo Performance Test

Matching Entity

ign-gazebo Integration Test

ZR3D South Survey Drone with 6 Wings Overview

In my last blog, I talked about TorchSeg, an open-source PyTorch project developed by my master's lab, namely the State Key Laboratory of Multispectral Signal Processing at Huazhong University of Science and Technology.

Today, I'm going to introduce professional drones developed by ZR3D, a start-up company spun out of my bachelor's department, namely the School of Remote Sensing and Information Engineering at Wuhan University. By the way, the State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing at Wuhan University specializes in tilt photogrammetry.

OK, now it's time to show some of ZR3D's products.

Outdoor Work

The Drone

Outdoor video capture can be done stably by ZR3D drones. Below, we show some pictures of an OEM drone manufactured/assembled by ZR3D.

ZR3D South Survey Longer Vision
In Box Open Box Layer 2 Front View
Overview Front View Open Box Layer 1
Camera Camera Pivot
Wing Wing Wing

Sample Images Captured of Some Scenery

Side View Half Side View Top View

Indoor Work

Indoor surveying & mapping is done on a cluster of servers, namely, on a small cloud. Currently, we are still dockerizing our own SDK. Three videos briefly explain the three MOST important steps of indoor surveying & mapping, as shown:

Point Cloud Meshed Textured

PX4 Autopilot Software

Popular open-source drone firmware projects and websites that I've been testing are briefly listed in the following:

I happily got the news that my master's supervisor's lab, namely the State Key Laboratory of Multispectral Signal Processing at Huazhong University of Science and Technology, released TorchSeg just yesterday. I can't help testing it out.

Preparation

Python Packages

According to the README.md of TorchSeg, several packages need to be prepared FIRST:

➜  ~ pip show torch
Name: torch
Version: 1.1.0a0+b6a8c45
Summary: Tensors and Dynamic neural networks in Python with strong GPU acceleration
Home-page: UNKNOWN
Author: UNKNOWN
Author-email: UNKNOWN
License: UNKNOWN
Location: /home/jiapei/.local/lib/python3.6/site-packages
Requires:
Required-by: torchvision, torchtext, torchgan, pytorch-pretrained-bert, pyro-ppl, flair, autokeras
➜ ~ pip show torchvision
Name: torchvision
Version: 0.2.1
Summary: image and video datasets and models for torch deep learning
Home-page: https://github.com/pytorch/vision
Author: PyTorch Core Team
Author-email: soumith@pytorch.org
License: BSD
Location: /home/jiapei/.local/lib/python3.6/site-packages
Requires: numpy, torch, pillow, six
Required-by: torchgan, torchfusion, autokeras
➜ ~ pip show easydict
Name: easydict
Version: 1.9
Summary: Access dict values as attributes (works recursively).
Home-page: https://github.com/makinacorpus/easydict
Author: Mathieu Leplatre
Author-email: mathieu.leplatre@makina-corpus.com
License: LPGL, see LICENSE file.
Location: /home/jiapei/.local/lib/python3.6/site-packages
Requires:
Required-by: luminoth
➜ ~ pip show apex
Name: apex
Version: 0.1
Summary: PyTorch Extensions written by NVIDIA
Home-page: UNKNOWN
Author: UNKNOWN
Author-email: UNKNOWN
License: UNKNOWN
Location: /home/jiapei/.local/lib/python3.6/site-packages/apex-0.1-py3.6.egg
Requires:
Required-by:
➜ ~ pip show tqdm
Name: tqdm
Version: 4.29.1
Summary: Fast, Extensible Progress Meter
Home-page: https://github.com/tqdm/tqdm
Author: Noam Yorav-Raphael
Author-email: noamraph@gmail.com
License: MPLv2.0, MIT Licences
Location: /home/jiapei/.local/lib/python3.6/site-packages
Requires:
Required-by: TPOT, torchtext, torchfusion, thinc, tensorpack, skorch, shap, pytorch-pretrained-bert, pyro-ppl, optimuspyspark, kaggle, flair, autokeras, tf-pose

PyTorch Models

Download all the PyTorch models referenced from within the .py files at PyTorch Vision Models. A minimal download sketch of my own (assuming the standard torchvision checkpoint URLs; the hashed file names match the ones used below):
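
import torch.utils.model_zoo as model_zoo

# Standard torchvision checkpoint URLs; the hashed file names are the
# ones referenced in the config.py snippets below.
urls = [
    'https://download.pytorch.org/models/resnet18-5c106cde.pth',
    'https://download.pytorch.org/models/resnet50-19c8e357.pth',
    'https://download.pytorch.org/models/resnet101-5d3b4d8f.pth',
]
for url in urls:
    # Download/cache into the current directory.
    model_zoo.load_url(url, model_dir='.')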

Test

TorchSeg config.py modification

After TorchSeg is checked out, we need to modify all the config.py files and ensure every C.pretrained_model variable points to the RIGHT location with the RIGHT file name. In my case, I downloaded all the PyTorch models into the same directory as TorchSeg; therefore, all C.pretrained_model variables are designated as:

C.pretrained_model = "./resnet18-5c106cde.pth"
C.pretrained_model = "./resnet50-19c8e357.pth"
C.pretrained_model = "./resnet101-5d3b4d8f.pth"

etc.

We also need to modify all C.dataset_path variables and make sure we are using the RIGHT dataset. In fact, ONLY two datasets are directly adopted in the originally checked-out TorchSeg code.
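
For instance (a sketch; the paths below are hypothetical placeholders for wherever the datasets actually live on your machine):

# Hypothetical example paths -- substitute your own dataset locations.
C.dataset_path = "/data/cityscapes"
# or
C.dataset_path = "/data/ade20k"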

Currently, it seems there are still some tricks to configuring these datasets; please refer to my GitHub issue.

Today, we are going to test out Facebook Prophet by following this DigitalOcean Tutorial.

Preparation

Required Python Packages

We FIRST make sure two Python packages - Prophet and PyStan - have been successfully installed.

➜  ~ pip show prophet
Name: prophet
Version: 0.1.1
Summary: Microframework for analyzing financial markets.
Home-page: http://prophet.michaelsu.io/
Author: Michael Su
Author-email: mdasu1@gmail.com
License: BSD
Location: /home/jiapei/.local/lib/python3.6/site-packages
Requires: six, pytz, pandas
Required-by:
➜ ~ pip show pystan
Name: pystan
Version: 2.18.1.0
Summary: Python interface to Stan, a package for Bayesian inference
Home-page: https://github.com/stan-dev/pystan
Author: None
Author-email: None
License: GPLv3
Location: /home/jiapei/.local/lib/python3.6/site-packages
Requires: Cython, numpy
Required-by: fbprophet

Download the Time Series Data

We just need to download the CSV file to some directory:

➜  facebookprophet curl -O https://assets.digitalocean.com/articles/eng_python/prophet/AirPassengers.csv
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 1748 100 1748 0 0 2281 0 --:--:-- --:--:-- --:--:-- 2279

Test

The Code

Trivial modifications have been made to the code from this DigitalOcean tutorial, as follows:

import pandas as pd
from fbprophet import Prophet

import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')

df = pd.read_csv('AirPassengers.csv')
df.head(5)
df.dtypes

df['Month'] = pd.DatetimeIndex(df['Month'])
df.dtypes

df = df.rename(columns={'Month': 'ds', 'AirPassengers': 'y'})
df.head(5)

ax = df.set_index('ds').plot(figsize=(12, 8))
ax.set_ylabel('Monthly Number of Airline Passengers')
ax.set_xlabel('Date')

plt.show()

# set the uncertainty interval to 95% (the Prophet default is 80%)
my_model = Prophet(interval_width=0.95)
my_model.fit(df)
future_dates = my_model.make_future_dataframe(periods=36, freq='MS')
future_dates.tail()
forecast = my_model.predict(future_dates)
forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail()
fig1 = my_model.plot(forecast, uncertainty=True)
fig1.show()

my_model.plot_components(forecast).savefig('prophet_forcast.png');
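
As a quick sanity check of the fit (my addition, not part of the tutorial), the forecast can be joined back onto the observed series to compute an in-sample mean absolute error:

# Join predictions onto the observed data; future rows have no 'y' and drop out.
merged = forecast.set_index('ds')[['yhat']].join(df.set_index('ds'))
mae = (merged['yhat'] - merged['y']).abs().dropna().mean()
print('In-sample MAE: %.3f' % mae)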

Outcome

Original Data
Forecast
Forecast Components

AttributeError: type object 'Path' has no attribute 'home'

This error message appeared while I was trying to import matplotlib.pyplot as plt today. I've got NO idea what happened to my Python. But it seems this issue has been met and solved in some public posts. For instance:

Solutions:

In my case, I needed to manually modify 3 files under the folder ~/.local/lib/python3.6/site-packages/matplotlib.

Path.home() -> os.path.expanduser('~')

  • In file ~/.local/lib/python3.6/site-packages/matplotlib/font_manager.py, around line 135, change
if not USE_FONTCONFIG and sys.platform != 'win32':
    OSXFontDirectories.append(str(Path.home() / "Library/Fonts"))
    X11FontDirectories.append(str(Path.home() / ".fonts"))

to

if not USE_FONTCONFIG and sys.platform != 'win32':
    OSXFontDirectories.append(str(os.path.expanduser('~')+'/'+"Library/Fonts"))
    X11FontDirectories.append(str(os.path.expanduser('~')+'/'+".fonts"))

Remove exist_ok=True in function mkdir()

  • In file ~/.local/lib/python3.6/site-packages/matplotlib/__init__.py, around line 615
  • In file ~/.local/lib/python3.6/site-packages/matplotlib/texmanager.py, around line 56 and 104

change all

mkdir(parents=True, exist_ok=True)

to

mkdir(parents=True)
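
An alternative I'd consider (a sketch, not a tested fix): shim the missing attribute once before importing matplotlib instead of editing site-packages. A common culprit for this error is an old third-party pathlib package from PyPI shadowing the standard library module, in which case pip uninstall pathlib is the cleanest fix:

import os
import pathlib

# Shim Path.home() if the pathlib in use predates it.
if not hasattr(pathlib.Path, "home"):
    pathlib.Path.home = classmethod(lambda cls: cls(os.path.expanduser("~")))

import matplotlib.pyplot as plt  # should now import cleanly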

I haven't successfully tested three packages (all related to PyTorch): PyTorch, FlowNet2-Pytorch, and vid2vid. Looking forward to assistance...

PyTorch

The Bug

After successfully installing the current PyTorch version 1.1, I still failed to import torch. Please refer to the following ERROR.

➜  ~ python
Python 3.6.7 (default, Oct 22 2018, 11:32:17)
[GCC 8.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "~/.local/lib/python3.6/site-packages/torch/__init__.py", line 84, in <module>
from torch._C import *
ImportError: ~/.local/lib/python3.6/site-packages/torch/lib/libcaffe2.so: undefined symbol: _ZTIN3c1010TensorImplE
>>> import caffe2
>>> caffe2.__version__
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: module 'caffe2' has no attribute '__version__'
>>> caffe2.__file__
'~/.local/lib/python3.6/site-packages/caffe2/__init__.py'

In order to have PyTorch successfully imported, I had to remove the manually installed PyTorch v1.1 and install it with pip instead:

pip3 install https://download.pytorch.org/whl/cu100/torch-1.0.0-cp36-cp36m-linux_x86_64.whl

This is PyTorch v1.0, which seems NOT to come with caffe2, and of course should NOT be compatible with the installed caffe2 built with PyTorch v1.1. Can anybody help solve this issue? Please also refer to the GitHub issue.

Solution

Remove anything/everything related to your previously installed PyTorch. In my case, the file /usr/local/lib/libc10.so had to be removed. In order to analyze which files are possibly related to the package concerned, we can use the command ldd.

➜  lib ldd libcaffe2.so
linux-vdso.so.1 (0x00007ffcf3dc9000)
libc10.so => /usr/local/lib/libc10.so (0x00007fca41b88000)
libmpi.so.12 => /opt/intel/mpi/intel64/lib/libmpi.so.12 (0x00007f8acdad9000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f8acd8d1000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f8acd6b2000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f8acd4ae000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f8acd296000)
libmkl_intel_lp64.so => /opt/intel/mkl/lib/intel64/libmkl_intel_lp64.so (0x00007f8acc765000)
libmkl_gnu_thread.so => /opt/intel/mkl/lib/intel64/libmkl_gnu_thread.so (0x00007f8acaf2c000)
libmkl_core.so => /opt/intel/mkl/lib/intel64/libmkl_core.so (0x00007f8ac6df3000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f8ac6a55000)
libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f8ac66cc000)
libgomp.so.1 => /usr/lib/x86_64-linux-gnu/libgomp.so.1 (0x00007f8ac649d000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f8ac60ac000)
/lib64/ld-linux-x86-64.so.2 (0x00007f8ad250d000)
libnuma.so.1 => /usr/lib/x86_64-linux-gnu/libnuma.so.1 (0x00007f8ac5ea1000)
libfabric.so.1 => /usr/lib/x86_64-linux-gnu/libfabric.so.1 (0x00007f8ac5bf4000)
librdmacm.so.1 => /usr/lib/x86_64-linux-gnu/librdmacm.so.1 (0x00007f8ac59de000)
libibverbs.so.1 => /usr/lib/x86_64-linux-gnu/libibverbs.so.1 (0x00007f8ac57c8000)
libpsm_infinipath.so.1 => /usr/lib/x86_64-linux-gnu/libpsm_infinipath.so.1 (0x00007f8ac556f000)
libnl-route-3.so.200 => /usr/lib/x86_64-linux-gnu/libnl-route-3.so.200 (0x00007f8ac52fa000)
libnl-3.so.200 => /lib/x86_64-linux-gnu/libnl-3.so.200 (0x00007f8ac50da000)
libinfinipath.so.4 => /usr/lib/x86_64-linux-gnu/libinfinipath.so.4 (0x00007f8ac4ecb000)
libuuid.so.1 => /lib/x86_64-linux-gnu/libuuid.so.1 (0x00007f8ac4cc4000)

FlowNet2-Pytorch

Installation

It's not hard to have FlowNet2-Pytorch installed with a single command:

➜  flownet2-pytorch git:(master) ✗ ./install.sh

After installation, there will be 3 packages installed under the folder ~/.local/lib/python3.6/site-packages:

  • correlation-cuda
  • resample2d-cuda
  • channelnorm-cuda
➜  site-packages ls -lsd correlation*
4 drwxrwxr-x 4 jiapei jiapei 4096 Jan 7 00:07 correlation_cuda-0.0.0-py3.6-linux-x86_64.egg
➜ site-packages ls -lsd channelnorm*
4 drwxrwxr-x 4 jiapei jiapei 4096 Jan 7 00:07 channelnorm_cuda-0.0.0-py3.6-linux-x86_64.egg
➜ site-packages ls -lsd resample2d*
4 drwxrwxr-x 4 jiapei jiapei 4096 Jan 7 00:07 resample2d_cuda-0.0.0-py3.6-linux-x86_64.egg
➜ site-packages ls -lsd flownet2*
zsh: no matches found: flownet2*
➜ site-packages pwd
~/.local/lib/python3.6/site-packages

That is to say, you should NEVER import flownet2, nor correlation, nor channelnorm, nor resample2d, but rather:

  • correlation_cuda
  • resample2d_cuda
  • channelnorm_cuda

Current Bug

Here comes the ERROR:

➜  ~ python
Python 3.6.7 (default, Oct 22 2018, 11:32:17)
[GCC 8.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import correlation_cuda
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: ~/.local/lib/python3.6/site-packages/correlation_cuda-0.0.0-py3.6-linux-x86_64.egg/correlation_cuda.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _ZN2at19UndefinedTensorImpl10_singletonE
>>> import channelnorm_cuda
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: ~/.local/lib/python3.6/site-packages/channelnorm_cuda-0.0.0-py3.6-linux-x86_64.egg/channelnorm_cuda.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _ZN2at19UndefinedTensorImpl10_singletonE
>>> import resample2d_cuda
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: ~/.local/lib/python3.6/site-packages/resample2d_cuda-0.0.0-py3.6-linux-x86_64.egg/resample2d_cuda.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _ZN2at19UndefinedTensorImpl10_singletonE
>>>

I've already posted an issue on GitHub. Has anybody solved this problem?

Solution

import torch FIRST, so that PyTorch's shared libraries (and the symbols these extension modules link against) are already loaded into the process:

➜  ~ python
Python 3.6.7 (default, Oct 22 2018, 11:32:17)
[GCC 8.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> import correlation_cuda
>>> import resample2d_cuda
>>> import channelnorm_cuda

Still Buggy

FlowNet2-Pytorch Inference

Where can we find and download checkpoints? Please refer to Inference on FlowNet2-Pytorch:

python main.py --inference --model FlowNet2 --save_flow --inference_dataset MpiSintelClean --inference_dataset_root /path/to/mpi-sintel/clean/dataset --resume /path/to/checkpoints

FlowNet2-Pytorch Training

Training on FlowNet2-Pytorch gives the following ERROR:

➜  flownet2-pytorch git:(master) ✗ python main.py --batch_size 8 --model FlowNet2 --loss=L1Loss --optimizer=Adam --optimizer_lr=1e-4 --training_dataset MpiSintelFinal --training_dataset_root /path/to/mpi-sintel/training/final/mountain_1  --validation_dataset MpiSintelClean --validation_dataset_root /path/tompi-sintel/training/clean/mountain_1
Parsing Arguments
[0.013s] batch_size: 8
[0.013s] crop_size: [256, 256]
[0.013s] fp16: False
[0.014s] fp16_scale: 1024.0
[0.014s] gradient_clip: None
[0.014s] inference: False
[0.014s] inference_batch_size: 1
[0.014s] inference_dataset: MpiSintelClean
[0.014s] inference_dataset_replicates: 1
[0.014s] inference_dataset_root: ./MPI-Sintel/flow/training
[0.014s] inference_n_batches: -1
[0.014s] inference_size: [-1, -1]
[0.014s] log_frequency: 1
[0.014s] loss: L1Loss
[0.014s] model: FlowNet2
[0.014s] model_batchNorm: False
[0.014s] model_div_flow: 20.0
[0.014s] name: run
[0.014s] no_cuda: False
[0.014s] number_gpus: 1
[0.014s] number_workers: 8
[0.014s] optimizer: Adam
[0.014s] optimizer_amsgrad: False
[0.014s] optimizer_betas: (0.9, 0.999)
[0.014s] optimizer_eps: 1e-08
[0.014s] optimizer_lr: 0.0001
[0.014s] optimizer_weight_decay: 0
[0.014s] render_validation: False
[0.014s] resume:
[0.014s] rgb_max: 255.0
[0.014s] save: ./work
[0.014s] save_flow: False
[0.014s] schedule_lr_fraction: 10
[0.014s] schedule_lr_frequency: 0
[0.014s] seed: 1
[0.014s] skip_training: False
[0.014s] skip_validation: False
[0.014s] start_epoch: 1
[0.014s] total_epochs: 10000
[0.014s] train_n_batches: -1
[0.014s] training_dataset: MpiSintelFinal
[0.014s] training_dataset_replicates: 1
[0.014s] training_dataset_root: ....../mpi-sintel/training/final/mountain_1
[0.014s] validation_dataset: MpiSintelClean
[0.014s] validation_dataset_replicates: 1
[0.014s] validation_dataset_root: ....../mpi-sintel/training/clean/mountain_1
[0.014s] validation_frequency: 5
[0.014s] validation_n_batches: -1
[0.016s] Operation finished

Source Code
Current Git Hash: b'ac1602a72f0454f65872126b70665a596fae8009'

Initializing Datasets
[0.003s] Operation failed

Traceback (most recent call last):
File "main.py", line 139, in <module>
train_dataset = args.training_dataset_class(args, True, **tools.kwargs_from_args(args, 'training_dataset'))
File "....../flownet2-pytorch/datasets.py", line 112, in __init__
super(MpiSintelFinal, self).__init__(args, is_cropped = is_cropped, root = root, dstype = 'final', replicates = replicates)
File "....../flownet2-pytorch/datasets.py", line 66, in __init__
self.frame_size = frame_utils.read_gen(self.image_list[0][0]).shape
IndexError: list index out of range

Therefore, I posted a further issue on Github.
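
Post-mortem note: the IndexError means the dataset class built an empty image list, i.e. it found no frames under the given root. As far as I can tell from datasets.py, MpiSintelFinal appends final/ (and flow/) to the root itself, so the root should probably be .../mpi-sintel/training rather than a single scene folder. A quick sketch to check what the loader would see (the root path below is a placeholder):

import glob
import os

# MpiSintelFinal appends 'final'/'flow' to the root itself (see datasets.py),
# so point root at .../mpi-sintel/training, not at a scene folder.
root = "/path/to/mpi-sintel/training"

images = glob.glob(os.path.join(root, "final", "*", "*.png"))
flows = glob.glob(os.path.join(root, "flow", "*", "*.flo"))
print(len(images), "frames,", len(flows), "flow files")
# Zero frames here reproduces the IndexError above.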

vid2vid

Preparation

There are 4 scripts under folder scripts to run before testing vid2vid.

➜  vid2vid git:(master) ✗ ll scripts/*.py
-rwxrwxrwx 1 jiapei jiapei 322 Jan 5 22:04 scripts/download_datasets.py
-rwxrwxrwx 1 jiapei jiapei 579 Jan 5 22:04 scripts/download_flownet2.py
-rwxrwxrwx 1 jiapei jiapei 1.3K Jan 5 22:04 scripts/download_gdrive.py
-rwxrwxrwx 1 jiapei jiapei 257 Jan 5 22:04 scripts/download_models_flownet2.py

However, ONLY 3 of the scripts run successfully; scripts/download_flownet2.py fails, as follows:

➜  vid2vid git:(master) python scripts/download_flownet2.py 
Compiling correlation kernels by nvcc...
rm: cannot remove '../_ext': No such file or directory
Traceback (most recent call last):
File "build.py", line 3, in <module>
import torch.utils.ffi
File "~/.local/lib/python3.6/site-packages/torch/utils/ffi/__init__.py", line 1, in <module>
raise ImportError("torch.utils.ffi is deprecated. Please use cpp extensions instead.")
ImportError: torch.utils.ffi is deprecated. Please use cpp extensions instead.
Compiling resample2d kernels by nvcc...
rm: cannot remove 'Resample2d_kernel.o': No such file or directory
rm: cannot remove '../_ext': No such file or directory
In file included from Resample2d_kernel.cu:1:0:
~/.local/lib/python3.6/site-packages/torch/lib/include/THC/THC.h:4:10: fatal error: THC/THCGeneral.h: No such file or directory
#include <THC/THCGeneral.h>
^~~~~~~~~~~~~~~~~~
compilation terminated.
Traceback (most recent call last):
File "build.py", line 3, in <module>
import torch.utils.ffi
File "~/.local/lib/python3.6/site-packages/torch/utils/ffi/__init__.py", line 1, in <module>
raise ImportError("torch.utils.ffi is deprecated. Please use cpp extensions instead.")
ImportError: torch.utils.ffi is deprecated. Please use cpp extensions instead.
Compiling channelnorm kernels by nvcc...
rm: cannot remove 'ChannelNorm_kernel.o': No such file or directory
rm: cannot remove '../_ext': No such file or directory
In file included from ChannelNorm_kernel.cu:1:0:
~/.local/lib/python3.6/site-packages/torch/lib/include/THC/THC.h:4:10: fatal error: THC/THCGeneral.h: No such file or directory
#include <THC/THCGeneral.h>
^~~~~~~~~~~~~~~~~~
compilation terminated.
Traceback (most recent call last):
File "build.py", line 3, in <module>
import torch.utils.ffi
File "~/.local/lib/python3.6/site-packages/torch/utils/ffi/__init__.py", line 1, in <module>
raise ImportError("torch.utils.ffi is deprecated. Please use cpp extensions instead.")
ImportError: torch.utils.ffi is deprecated. Please use cpp extensions instead.
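
The root cause is that these build.py scripts still use torch.utils.ffi, which has been removed. A sketch of the modern replacement using torch.utils.cpp_extension (the file names follow the flownet2-pytorch layout but should be treated as assumptions):

# setup.py sketch for one kernel, replacing the ffi-based build.py.
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CUDAExtension

setup(
    name='resample2d_cuda',
    ext_modules=[
        CUDAExtension('resample2d_cuda',
                      ['resample2d_cuda.cc', 'resample2d_kernel.cu']),
    ],
    cmdclass={'build_ext': BuildExtension},
)

Each such package would then be built with python setup.py install, which is essentially what the flownet2-pytorch install.sh above already does.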

Testing is ALSO Buggy

If we run the Testing example from the vid2vid homepage, it gives the following ERROR:

➜  vid2vid git:(master) python test.py --name label2city_2048 --label_nc 35 --loadSize 2048 --n_scales_spatial 3 --use_instance --fg --use_single_G
------------ Options -------------
add_face_disc: False
aspect_ratio: 1.0
basic_point_only: False
batchSize: 1
checkpoints_dir: ./checkpoints
dataroot: datasets/Cityscapes/
dataset_mode: temporal
debug: False
densepose_only: False
display_id: 0
display_winsize: 512
feat_num: 3
fg: True
fg_labels: [26]
fineSize: 512
gpu_ids: [0]
how_many: 300
input_nc: 3
isTrain: False
label_feat: False
label_nc: 35
loadSize: 2048
load_features: False
load_pretrain:
max_dataset_size: inf
model: vid2vid
nThreads: 2
n_blocks: 9
n_blocks_local: 3
n_downsample_E: 3
n_downsample_G: 3
n_frames_G: 3
n_gpus_gen: 1
n_local_enhancers: 1
n_scales_spatial: 3
name: label2city_2048
ndf: 64
nef: 32
netE: simple
netG: composite
ngf: 128
no_canny_edge: False
no_dist_map: False
no_first_img: False
no_flip: False
no_flow: False
norm: batch
ntest: inf
openpose_only: False
output_nc: 3
phase: test
random_drop_prob: 0.05
random_scale_points: False
remove_face_labels: False
resize_or_crop: scaleWidth
results_dir: ./results/
serial_batches: False
start_frame: 0
tf_log: False
use_instance: True
use_real_img: False
use_single_G: True
which_epoch: latest
-------------- End ----------------
CustomDatasetDataLoader
dataset [TestDataset] was created
vid2vid
Traceback (most recent call last):
File "test.py", line 25, in <module>
model = create_model(opt)
File "....../vid2vid/models/models.py", line 7, in create_model
from .vid2vid_model_G import Vid2VidModelG
File "....../vid2vid/models/vid2vid_model_G.py", line 13, in <module>
from . import networks
File "....../vid2vid/models/networks.py", line 12, in <module>
from .flownet2_pytorch.networks.resample2d_package.resample2d import Resample2d
File "....../vid2vid/models/flownet2_pytorch/networks/resample2d_package/resample2d.py", line 2, in <module>
from .functions.resample2d import Resample2dFunction
File "....../vid2vid/models/flownet2_pytorch/networks/resample2d_package/functions/resample2d.py", line 3, in <module>
from .._ext import resample2d
ModuleNotFoundError: No module named 'models.flownet2_pytorch.networks.resample2d_package._ext'

This is still annoying me. A detailed GitHub issue has been posted today. Please give me a hand. Thanks...

With the most inspiring speech by Arnold Schwarzenegger, 2019 arrived... I've got something on my mind as well...

Schwarzenegger's Speech

Everybody has a dream/goal. NEVER EVER blow it out... Here comes the preface of my PhD thesis... It's been 10 years already... This is NOT ONLY my attitude towards science, BUT ALSO towards those so-called professors (叫兽) and specialists (砖家).

Preface of My PhD Thesis

So, today, I'm NOT going to repeat any clichés, but am happily encouraged by Arnold Schwarzenegger's speech. We will definitely have a fruitful 2019. Let's have some pizza.

Pizza

Merry Christmas everybody. Let's take a look at the Christmas Lights around Lafarge Lake in Coquitlam FIRST.

Lafarge Lake Christmas Lights 01 Lafarge Lake Christmas Lights 02
Lafarge Lake Christmas Lights 03 Lafarge Lake Christmas Lights 04
Lafarge Lake Christmas Lights 05 Lafarge Lake Christmas Lights 06

Now it's time to test PyDicom.

Get Started

Copy and paste the piece of code from Medical Image Analysis with Deep Learning, with trivial modifications, as follows:

import cv2
import numpy as np
import matplotlib.pyplot as plt
import pydicom as pdicom
import os
import glob
import pandas as pd
import scipy.ndimage
from skimage import measure, morphology
from mpl_toolkits.mplot3d.art3d import Poly3DCollection


# Prepare dicom images
INPUT_FOLDER = './'
patients = os.listdir(INPUT_FOLDER)
patients.sort()


# Collect all dicom images
lstFilesDCM = [] # create an empty list
def load_scan2(path):
    for dirName, subdirList, fileList in os.walk(path):
        for filename in fileList:
            if ".dcm" in filename.lower():
                lstFilesDCM.append(os.path.join(dirName, filename))
    return lstFilesDCM

first_patient = load_scan2(INPUT_FOLDER)


# Get ref file
print (lstFilesDCM[0])
RefDs = pdicom.read_file(lstFilesDCM[0])

# Load dimensions based on the number of rows, columns and slices (along z axis)
ConstPixelDims = (int(RefDs.Rows), int(RefDs.Columns), len(lstFilesDCM))

# Load spacing values (in mm)
ConstPixelSpacing = (float(RefDs.PixelSpacing[0]), float(RefDs.PixelSpacing[1]), float(RefDs.SliceThickness))


x = np.arange(0.0, (ConstPixelDims[0]+1)*ConstPixelSpacing[0], ConstPixelSpacing[0])
y = np.arange(0.0, (ConstPixelDims[1]+1)*ConstPixelSpacing[1], ConstPixelSpacing[1])
z = np.arange(0.0, (ConstPixelDims[2]+1)*ConstPixelSpacing[2], ConstPixelSpacing[2])


# The array is sized based on 'ConstPixelDims'

ArrayDicom = np.zeros(ConstPixelDims, dtype=RefDs.pixel_array.dtype)

# loop through all the DICOM files
for filenameDCM in lstFilesDCM:
    # read the file
    ds = pdicom.read_file(filenameDCM)
    # store the raw image data
    ArrayDicom[:, :, lstFilesDCM.index(filenameDCM)] = ds.pixel_array


plt.figure(dpi=1600)
plt.axes().set_aspect('equal', 'datalim')
plt.set_cmap(plt.gray())
plt.pcolormesh(y, x, np.flipud(ArrayDicom[:, :, 80]))
plt.show()
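
One caveat with this snippet (my note, not from the tutorial): os.walk returns files in arbitrary order, so the slices along z may come out shuffled. A sketch that orders slices by their DICOM InstanceNumber tag before stacking:

# Order slices by the DICOM InstanceNumber tag before stacking,
# since os.walk gives no guaranteed file order.
slices = [pdicom.read_file(f) for f in lstFilesDCM]
slices.sort(key=lambda s: int(s.InstanceNumber))
for i, ds in enumerate(slices):
    ArrayDicom[:, :, i] = ds.pixel_array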

Display DICOM

I did an MRI scan in 2015, at SIAT, for research purposes. The scan resolution is NOT very high. Anyway, a bunch of DICOM images can be viewed as follows:

DICOM images
MRI - me

Merry Christmas buddy!

Christmas Lights in Surrey

It's been quite a while since I last wrote anything. Today, we are going to introduce PyFlux for time series analysis. Canonical models are adopted directly from the PyFlux documentation and tested in this blog.

Get Started

Copy and paste the piece of code from the PyFlux documentation with trivial modifications as follows:

import pandas as pd
import numpy as np
from pandas_datareader.data import DataReader
from datetime import datetime
import pyflux as pf
import matplotlib.pyplot as plt


a = DataReader('JPM', 'yahoo', datetime(2006,6,1), datetime(2016,6,1))
a_returns = pd.DataFrame(np.diff(a['Adj Close'].values))
a_returns.index = a.index.values[1:a.index.values.shape[0]]
a_returns.columns = ["JPM Returns"]

print(a_returns.head())

plt.figure(figsize=(15, 5))
plt.ylabel("Returns")
plt.plot(a_returns)
plt.show()

pf.acf_plot(a_returns.values.T[0])
pf.acf_plot(np.square(a_returns.values.T[0]))

After running the above piece of code, we'll get the following time series loaded:

            JPM Returns
2006-06-02 0.167995
2006-06-05 -0.613506
2006-06-06 -0.430914
2006-06-07 -0.094942
2006-06-08 0.073048

and the following resultant figures:

Time Series
Autocorrelation
Squared Autocorrelation

Canonical Models

ARIMA

We can load and test the ARIMA model using the following piece of code:

my_model = pf.ARIMA(data=a_returns, ar=4, ma=4, family=pf.Normal())
print(my_model.latent_variables)

result = my_model.fit("MLE")
result.summary()

my_model.plot_z(figsize=(15,5))
my_model.plot_fit(figsize=(15,10))
my_model.plot_predict_is(h=50, figsize=(15,5))
my_model.plot_predict(h=20,past_values=20,figsize=(15,5))

ARIMA's latent variables are printed as:

Index    Latent Variable           Prior           Prior Hyperparameters     V.I. Dist  Transform
======== ========================= =============== ========================= ========== ==========
0 Constant Normal mu0: 0, sigma0: 3 Normal None
1 AR(1) Normal mu0: 0, sigma0: 0.5 Normal None
2 AR(2) Normal mu0: 0, sigma0: 0.5 Normal None
3 AR(3) Normal mu0: 0, sigma0: 0.5 Normal None
4 AR(4) Normal mu0: 0, sigma0: 0.5 Normal None
5 MA(1) Normal mu0: 0, sigma0: 0.5 Normal None
6 MA(2) Normal mu0: 0, sigma0: 0.5 Normal None
7 MA(3) Normal mu0: 0, sigma0: 0.5 Normal None
8 MA(4) Normal mu0: 0, sigma0: 0.5 Normal None
9 Normal Scale Flat n/a (non-informative) Normal exp

And the fitting result is summarized as:

Normal ARIMA(4,0,4)
======================================================= ==================================================
Dependent Variable: JPM Returns Method: MLE
Start Date: 2006-06-08 00:00:00 Log Likelihood: -3106.9508
End Date: 2016-06-02 00:00:00 AIC: 6233.9016
Number of observations: 2514 BIC: 6292.1979
==========================================================================================================
Latent Variable Estimate Std Error z P>|z| 95% C.I.
======================================== ========== ========== ======== ======== =========================
Constant 0.0126 0.0136 0.9273 0.3538 (-0.0141 | 0.0394)
AR(1) 0.0596 0.198 0.3008 0.7635 (-0.3284 | 0.4475)
AR(2) -0.3591 0.1858 -1.9331 0.0532 (-0.7231 | 0.005)
AR(3) -0.2205 0.2687 -0.8208 0.4118 (-0.7471 | 0.3061)
AR(4) 0.5272 0.1373 3.8406 0.0001 (0.2581 | 0.7962)
MA(1) -0.1667 0.1995 -0.8354 0.4035 (-0.5578 | 0.2244)
MA(2) 0.3234 0.202 1.6008 0.1094 (-0.0726 | 0.7193)
MA(3) 0.1603 0.2396 0.6692 0.5033 (-0.3093 | 0.63)
MA(4) -0.5817 0.1618 -3.5942 0.0003 (-0.8989 | -0.2645)
Normal Scale 0.8331
==========================================================================================================

And 4 images are produced:

ARIMA Latent Variable Plot
ARIMA Model vs. Data
ARIMA Forecast vs. Data
ARIMA Forecast

GARCH

We can load and test the GARCH model using the following piece of code:

my_model = pf.GARCH(p=1, q=1, data=a_returns)
print(my_model.latent_variables)

my_model.adjust_prior(1, pf.TruncatedNormal(0.01, 0.5, lower=0.0, upper=1.0))
my_model.adjust_prior(2, pf.TruncatedNormal(0.97, 0.5, lower=0.0, upper=1.0))

result = my_model.fit('M-H', nsims=20000)
result.summary()

my_model.plot_z([1,2])
my_model.plot_fit(figsize=(15,5))
my_model.plot_sample(nsims=10, figsize=(15,7))

from scipy.stats import kurtosis
my_model.plot_ppc(T=kurtosis)
my_model.plot_predict(h=30, figsize=(15,5))

GARCH's latent variables are printed as:

Index    Latent Variable           Prior           Prior Hyperparameters     V.I. Dist  Transform
======== ========================= =============== ========================= ========== ==========
0 Vol Constant Normal mu0: 0, sigma0: 3 Normal exp
1 q(1) Normal mu0: 0, sigma0: 0.5 Normal logit
2 p(1) Normal mu0: 0, sigma0: 0.5 Normal logit
3 Returns Constant Normal mu0: 0, sigma0: 3 Normal None

And the fitting result is summarized as:

~/.local/lib/python3.6/site-packages/numdifftools/limits.py:126: UserWarning: All-NaN slice encountered
warnings.warn(str(msg))
Acceptance rate of Metropolis-Hastings is 0.000125
Acceptance rate of Metropolis-Hastings is 0.00075
Acceptance rate of Metropolis-Hastings is 0.105525
Acceptance rate of Metropolis-Hastings is 0.13335
Acceptance rate of Metropolis-Hastings is 0.1907
Acceptance rate of Metropolis-Hastings is 0.232
Acceptance rate of Metropolis-Hastings is 0.299

Tuning complete! Now sampling.
Acceptance rate of Metropolis-Hastings is 0.36655
GARCH(1,1)
======================================================= ==================================================
Dependent Variable: JPM Returns Method: Metropolis Hastings
Start Date: 2006-06-05 00:00:00 Unnormalized Log Posterior: -2671.5492
End Date: 2016-06-02 00:00:00 AIC: 5351.717896880396
Number of observations: 2517 BIC: 5375.041188860938
==========================================================================================================
Latent Variable Median Mean 95% Credibility Interval
======================================== ================== ================== =========================
Vol Constant 0.0059 0.0057 (0.004 | 0.0076)
q(1) 0.0721 0.0728 (0.0556 | 0.0921)
p(1) 0.9202 0.9199 (0.9013 | 0.9373)
Returns Constant 0.0311 0.0311 (0.0109 | 0.0511)
==========================================================================================================

And 5 images are produced:

GARCH Latent Variable Plot
GARCH Volatility
GARCH Volatility Sampling
GARCH Posterior Predictive
GARCH Forecast

GAS

VAR
