
ZR3D South Survey Drone with 6 Wings Overview

In my last blog, I talked about TorchSeg, an open-source PyTorch project developed by my master's lab, namely the State Key Laboratory of Multispectral Signal Processing at Huazhong University of Science and Technology.

Today, I'm going to introduce professional drones developed by ZR3D, a start-up spun out of my bachelor's department, namely the School of Remote Sensing and Information Engineering at Wuhan University. By the way, the State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing at Wuhan University specializes in oblique (tilt) photogrammetry.

OK, now it's time to show some of ZR3D's products.

Outdoor Work

The Drone

ZR3D drones can capture outdoor video stably. In the following, we show some pictures of an OEM drone manufactured/assembled by ZR3D.

[Photo gallery: ZR3D South Survey drone: in box, open box layers 1 and 2, overview, front views, camera and pivot, wings]

Sample Images Captured of Some Scenery

[Sample images: side view, half side view, top view]

Indoor Work

Indoor surveying & mapping is done on a cluster of servers, namely, a small cloud. Currently, we are still dockerizing our own SDK.
Three videos briefly illustrate the three MOST important steps of indoor surveying & mapping - point cloud generation, meshing, and texturing - as shown:

[Videos: point cloud, meshed, textured]
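As a rough illustration of the first two steps, here is a minimal sketch using the open-source Open3D library (my own stand-in, NOT the ZR3D SDK), going from a raw point cloud to a Poisson mesh; the texturing step is pipeline-specific and omitted:

# Minimal sketch with open-source Open3D (a stand-in, NOT the ZR3D SDK):
# step 1: load a point cloud; step 2: turn it into a mesh.
import open3d as o3d

pcd = o3d.io.read_point_cloud("scan.ply")    # hypothetical input file
pcd.estimate_normals()                       # Poisson meshing needs normals
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)                            # higher depth = finer mesh
o3d.io.write_triangle_mesh("scan_mesh.ply", mesh)
o3d.visualization.draw_geometries([mesh])    # quick visual check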

PX4 Autopilot Software

Popular open-source drone firmware projects and websites that I've been testing are briefly listed in the following:

I happily got the news that my master's supervisor's lab, the State Key Laboratory of Multispectral Signal Processing at Huazhong University of Science and Technology, released TorchSeg just yesterday. I can't help testing it out.

Preparation

Python Packages

According to the README.md of TorchSeg, several packages need to be prepared FIRST:

➜  ~ pip show torch
Name: torch
Version: 1.1.0a0+b6a8c45
Summary: Tensors and Dynamic neural networks in Python with strong GPU acceleration
Home-page: UNKNOWN
Author: UNKNOWN
Author-email: UNKNOWN
License: UNKNOWN
Location: /home/jiapei/.local/lib/python3.6/site-packages
Requires:
Required-by: torchvision, torchtext, torchgan, pytorch-pretrained-bert, pyro-ppl, flair, autokeras
➜ ~ pip show torchvision
Name: torchvision
Version: 0.2.1
Summary: image and video datasets and models for torch deep learning
Home-page: https://github.com/pytorch/vision
Author: PyTorch Core Team
Author-email: soumith@pytorch.org
License: BSD
Location: /home/jiapei/.local/lib/python3.6/site-packages
Requires: numpy, torch, pillow, six
Required-by: torchgan, torchfusion, autokeras
➜ ~ pip show easydict
Name: easydict
Version: 1.9
Summary: Access dict values as attributes (works recursively).
Home-page: https://github.com/makinacorpus/easydict
Author: Mathieu Leplatre
Author-email: mathieu.leplatre@makina-corpus.com
License: LPGL, see LICENSE file.
Location: /home/jiapei/.local/lib/python3.6/site-packages
Requires:
Required-by: luminoth
➜ ~ pip show apex
Name: apex
Version: 0.1
Summary: PyTorch Extensions written by NVIDIA
Home-page: UNKNOWN
Author: UNKNOWN
Author-email: UNKNOWN
License: UNKNOWN
Location: /home/jiapei/.local/lib/python3.6/site-packages/apex-0.1-py3.6.egg
Requires:
Required-by:
➜ ~ pip show tqdm
Name: tqdm
Version: 4.29.1
Summary: Fast, Extensible Progress Meter
Home-page: https://github.com/tqdm/tqdm
Author: Noam Yorav-Raphael
Author-email: noamraph@gmail.com
License: MPLv2.0, MIT Licences
Location: /home/jiapei/.local/lib/python3.6/site-packages
Requires:
Required-by: TPOT, torchtext, torchfusion, thinc, tensorpack, skorch, shap, pytorch-pretrained-bert, pyro-ppl, optimuspyspark, kaggle, flair, autokeras, tf-pose

PyTorch Models

Download all the PyTorch models referenced from within the .py files of PyTorch Vision Models; as the config snippets below show, TorchSeg mainly needs the ResNet backbones.
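To script the downloads instead of clicking through, here is a minimal sketch (assuming torchvision's standard model URLs, which may change over time):

# Minimal sketch: fetch the pretrained backbones TorchSeg expects.
# URLs follow torchvision's naming convention; verify them before use.
import urllib.request

model_urls = [
    "https://download.pytorch.org/models/resnet18-5c106cde.pth",
    "https://download.pytorch.org/models/resnet50-19c8e357.pth",
    "https://download.pytorch.org/models/resnet101-5d3b4d8f.pth",
]
for url in model_urls:
    filename = url.rsplit("/", 1)[-1]
    print("Downloading", filename)
    urllib.request.urlretrieve(url, filename)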

Test

TorchSeg config.py modification

After TorchSeg is checked out, we need to modify all the config.py files and ensure all variables C.pretrained_model are specified to the RIGHT location and with the RIGHT names. In my case, I just downloaded all PyTorch models under the same directory as TorchSeg, therefore, all C.pretrained_model are designated as:

C.pretrained_model = "./resnet18-5c106cde.pth"
C.pretrained_model = "./resnet50-19c8e357.pth"
C.pretrained_model = "./resnet101-5d3b4d8f.pth"

etc.

We also need to modify all C.dataset_path variables and make sure we are using the RIGHT dataset. In fact, ONLY two datasets are directly adopted in the originally checked-out TorchSeg code.
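For example (the path below is a hypothetical placeholder, not a real location):

# Hypothetical example: point each experiment's config.py at your local data.
C.dataset_path = "/path/to/cityscapes"   # placeholder path; adjust per experiment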

Currently, it seems there are still some tricks to configuring these datasets; please refer to my Github issue.

Today, we are going to test out Facebook Prophet by following this DigitalOcean Tutorial.

Preparation

Required Python Packages

We FIRST make sure the 2 required Python packages - Prophet and PyStan - have been successfully installed. (Note that the Prophet library actually installs as fbprophet; the prophet package shown below is an unrelated microframework, while fbprophet shows up in PyStan's Required-by field.)

➜  ~ pip show prophet
Name: prophet
Version: 0.1.1
Summary: Microframework for analyzing financial markets.
Home-page: http://prophet.michaelsu.io/
Author: Michael Su
Author-email: mdasu1@gmail.com
License: BSD
Location: /home/jiapei/.local/lib/python3.6/site-packages
Requires: six, pytz, pandas
Required-by:
➜ ~ pip show pystan
Name: pystan
Version: 2.18.1.0
Summary: Python interface to Stan, a package for Bayesian inference
Home-page: https://github.com/stan-dev/pystan
Author: None
Author-email: None
License: GPLv3
Location: /home/jiapei/.local/lib/python3.6/site-packages
Requires: Cython, numpy
Required-by: fbprophet

Download the Time Series Data

We just need to download the CSV file to some directory:

➜  facebookprophet curl -O https://assets.digitalocean.com/articles/eng_python/prophet/AirPassengers.csv
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 1748 100 1748 0 0 2281 0 --:--:-- --:--:-- --:--:-- 2279

Test

The Code

Trivial modifications have been made to the code from this DigitalOcean tutorial, as follows:

import pandas as pd
from fbprophet import Prophet

import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')

df = pd.read_csv('AirPassengers.csv')
df.head(5)
df.dtypes

df['Month'] = pd.DatetimeIndex(df['Month'])
df.dtypes

df = df.rename(columns={'Month': 'ds', 'AirPassengers': 'y'})
df.head(5)

ax = df.set_index('ds').plot(figsize=(12, 8))
ax.set_ylabel('Monthly Number of Airline Passengers')
ax.set_xlabel('Date')

plt.show()

# set the uncertainty interval to 95% (the Prophet default is 80%)
my_model = Prophet(interval_width=0.95)
my_model.fit(df)
future_dates = my_model.make_future_dataframe(periods=36, freq='MS')
future_dates.tail()
forecast = my_model.predict(future_dates)
forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail()
fig1 = my_model.plot(forecast, uncertainty=True)
fig1.show()

my_model.plot_components(forecast).savefig('prophet_forcast.png');

Outcome

Original Data

Forecast

Forecast Component

AttributeError: type object ‘Path’ has no attribute ‘home’

This error message appeared while I was trying to import matplotlib.pyplot as plt today. I've got NO idea what happened to my Python. But it seems this issue has been met and solved in some public posts. For instance:

Solutions:

In my case, I needed to manually modify 3 files under the folder ~/.local/lib/python3.6/site-packages/matplotlib.

Path.home() -> os.path.expanduser(‘~’)

  • In file ~/.local/lib/python3.6/site-packages/matplotlib/font_manager.py, around line 135, change
if not USE_FONTCONFIG and sys.platform != 'win32':
OSXFontDirectories.append(str(Path.home() / "Library/Fonts"))
X11FontDirectories.append(str(Path.home() / ".fonts"))

to

if not USE_FONTCONFIG and sys.platform != 'win32':
OSXFontDirectories.append(str(os.path.expanduser('~')+'/'+"Library/Fonts"))
X11FontDirectories.append(str(os.path.expanduser('~')+'/'+".fonts"))

Remove exist_ok=True in function mkdir()

  • In file ~/.local/lib/python3.6/site-packages/matplotlib/__init__.py, around line 615
  • In file ~/.local/lib/python3.6/site-packages/matplotlib/texmanager.py, around line 56 and 104

change all

mkdir(parents=True, exist_ok=True)

to

mkdir(parents=True)
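The two expressions are equivalent on a healthy install; here is a quick sanity check of my own, as a minimal sketch:

# Sanity check: on a working Python 3.6, these two should print the same path.
import os
from pathlib import Path

print(os.path.expanduser('~'))  # the portable fallback used in the patches above
print(Path.home())              # raises AttributeError on the broken install
# Caveat: dropping exist_ok=True means mkdir() will raise FileExistsError
# if the cache directory already exists; remove that directory first if so.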

I haven't successfully tested three packages (all related to PyTorch): PyTorch itself, FlowNet2-Pytorch, and vid2vid. Looking forward to assistance…

PyTorch

The Bug

After having successfully installed the current PyTorch version 1.1, I still failed to import torch. Please refer to the following ERROR.

➜  ~ python
Python 3.6.7 (default, Oct 22 2018, 11:32:17)
[GCC 8.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "~/.local/lib/python3.6/site-packages/torch/__init__.py", line 84, in <module>
from torch._C import *
ImportError: ~/.local/lib/python3.6/site-packages/torch/lib/libcaffe2.so: undefined symbol: _ZTIN3c1010TensorImplE
>>> import caffe2
>>> caffe2.__version__
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: module 'caffe2' has no attribute '__version__'
>>> caffe2.__file__
'~/.local/lib/python3.6/site-packages/caffe2/__init__.py'

In order to have PyTorch successfully imported, I had to remove the manually installed PyTorch v1.1 and install it via pip instead:

pip3 install https://download.pytorch.org/whl/cu100/torch-1.0.0-cp36-cp36m-linux_x86_64.whl

This is PyTorch v1.0, which does NOT seem to come with caffe2, and of course should NOT be compatible with the installed caffe2 that was built against PyTorch v1.1. Can anybody help solve this issue? Please also refer to my Github issue.
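To double-check which build Python actually picks up, a quick diagnostic sketch:

# Quick diagnostic: confirm which torch build is actually being imported.
import torch

print(torch.__version__)    # e.g. 1.0.0
print(torch.__file__)       # which site-packages copy got picked up
print(torch.version.cuda)   # CUDA toolkit the wheel was built against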

Solution

Remove anything/everything related to your previously installed PyTorch. In my case, the file /usr/local/lib/libc10.so had to be removed. To analyze which files are possibly related to the package concerned, we can use the command ldd.

➜  lib ldd libcaffe2.so
linux-vdso.so.1 (0x00007ffcf3dc9000)
libc10.so => /usr/local/lib/libc10.so (0x00007fca41b88000)
libmpi.so.12 => /opt/intel/mpi/intel64/lib/libmpi.so.12 (0x00007f8acdad9000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f8acd8d1000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f8acd6b2000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f8acd4ae000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f8acd296000)
libmkl_intel_lp64.so => /opt/intel/mkl/lib/intel64/libmkl_intel_lp64.so (0x00007f8acc765000)
libmkl_gnu_thread.so => /opt/intel/mkl/lib/intel64/libmkl_gnu_thread.so (0x00007f8acaf2c000)
libmkl_core.so => /opt/intel/mkl/lib/intel64/libmkl_core.so (0x00007f8ac6df3000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f8ac6a55000)
libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f8ac66cc000)
libgomp.so.1 => /usr/lib/x86_64-linux-gnu/libgomp.so.1 (0x00007f8ac649d000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f8ac60ac000)
/lib64/ld-linux-x86-64.so.2 (0x00007f8ad250d000)
libnuma.so.1 => /usr/lib/x86_64-linux-gnu/libnuma.so.1 (0x00007f8ac5ea1000)
libfabric.so.1 => /usr/lib/x86_64-linux-gnu/libfabric.so.1 (0x00007f8ac5bf4000)
librdmacm.so.1 => /usr/lib/x86_64-linux-gnu/librdmacm.so.1 (0x00007f8ac59de000)
libibverbs.so.1 => /usr/lib/x86_64-linux-gnu/libibverbs.so.1 (0x00007f8ac57c8000)
libpsm_infinipath.so.1 => /usr/lib/x86_64-linux-gnu/libpsm_infinipath.so.1 (0x00007f8ac556f000)
libnl-route-3.so.200 => /usr/lib/x86_64-linux-gnu/libnl-route-3.so.200 (0x00007f8ac52fa000)
libnl-3.so.200 => /lib/x86_64-linux-gnu/libnl-3.so.200 (0x00007f8ac50da000)
libinfinipath.so.4 => /usr/lib/x86_64-linux-gnu/libinfinipath.so.4 (0x00007f8ac4ecb000)
libuuid.so.1 => /lib/x86_64-linux-gnu/libuuid.so.1 (0x00007f8ac4cc4000)

FlowNet2-Pytorch

Installation

It's not hard to install FlowNet2-Pytorch with a single command:

➜  flownet2-pytorch git:(master) ✗ ./install.sh

After installation, there will be 3 packages installed under folder ~/.local/lib/python3.6/site-packages:

  • correlation-cuda
  • resample2d-cuda
  • channelnorm-cuda
➜  site-packages ls -lsd correlation*
4 drwxrwxr-x 4 jiapei jiapei 4096 Jan 7 00:07 correlation_cuda-0.0.0-py3.6-linux-x86_64.egg
➜ site-packages ls -lsd channelnorm*
4 drwxrwxr-x 4 jiapei jiapei 4096 Jan 7 00:07 channelnorm_cuda-0.0.0-py3.6-linux-x86_64.egg
➜ site-packages ls -lsd resample2d*
4 drwxrwxr-x 4 jiapei jiapei 4096 Jan 7 00:07 resample2d_cuda-0.0.0-py3.6-linux-x86_64.egg
➜ site-packages ls -lsd flownet2*
zsh: no matches found: flownet2*
➜ site-packages pwd
~/.local/lib/python3.6/site-packages

That is to say, you should NEVER import flownet2, nor correlation, nor channelnorm, nor resample2d, but:

  • correlation_cuda
  • resample2d_cuda
  • channelnorm_cuda

Current Bug

Here comes the ERROR:

➜  ~ python
Python 3.6.7 (default, Oct 22 2018, 11:32:17)
[GCC 8.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import correlation_cuda
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: ~/.local/lib/python3.6/site-packages/correlation_cuda-0.0.0-py3.6-linux-x86_64.egg/correlation_cuda.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _ZN2at19UndefinedTensorImpl10_singletonE
>>> import channelnorm_cuda
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: ~/.local/lib/python3.6/site-packages/channelnorm_cuda-0.0.0-py3.6-linux-x86_64.egg/channelnorm_cuda.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _ZN2at19UndefinedTensorImpl10_singletonE
>>> import resample2d_cuda
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: ~/.local/lib/python3.6/site-packages/resample2d_cuda-0.0.0-py3.6-linux-x86_64.egg/resample2d_cuda.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _ZN2at19UndefinedTensorImpl10_singletonE
>>>

I've already posted an issue on Github. Has anybody solved this problem?

Solution

import torch FIRST.

➜  ~ python
Python 3.6.7 (default, Oct 22 2018, 11:32:17)
[GCC 8.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> import correlation_cuda
>>> import resample2d_cuda
>>> import channelnorm_cuda

Still Buggy

FlowNet2-Pytorch Inference

Where can we find and download checkpoints? Please refer to Inference on FlowNet2-Pytorch:

python main.py --inference --model FlowNet2 --save_flow --inference_dataset MpiSintelClean --inference_dataset_root /path/to/mpi-sintel/clean/dataset --resume /path/to/checkpoints

FlowNet2-Pytorch Training

Training on FlowNet2-Pytorch gives the following ERROR:

➜  flownet2-pytorch git:(master) ✗ python main.py --batch_size 8 --model FlowNet2 --loss=L1Loss --optimizer=Adam --optimizer_lr=1e-4 --training_dataset MpiSintelFinal --training_dataset_root /path/to/mpi-sintel/training/final/mountain_1  --validation_dataset MpiSintelClean --validation_dataset_root /path/tompi-sintel/training/clean/mountain_1
Parsing Arguments
[0.013s] batch_size: 8
[0.013s] crop_size: [256, 256]
[0.013s] fp16: False
[0.014s] fp16_scale: 1024.0
[0.014s] gradient_clip: None
[0.014s] inference: False
[0.014s] inference_batch_size: 1
[0.014s] inference_dataset: MpiSintelClean
[0.014s] inference_dataset_replicates: 1
[0.014s] inference_dataset_root: ./MPI-Sintel/flow/training
[0.014s] inference_n_batches: -1
[0.014s] inference_size: [-1, -1]
[0.014s] log_frequency: 1
[0.014s] loss: L1Loss
[0.014s] model: FlowNet2
[0.014s] model_batchNorm: False
[0.014s] model_div_flow: 20.0
[0.014s] name: run
[0.014s] no_cuda: False
[0.014s] number_gpus: 1
[0.014s] number_workers: 8
[0.014s] optimizer: Adam
[0.014s] optimizer_amsgrad: False
[0.014s] optimizer_betas: (0.9, 0.999)
[0.014s] optimizer_eps: 1e-08
[0.014s] optimizer_lr: 0.0001
[0.014s] optimizer_weight_decay: 0
[0.014s] render_validation: False
[0.014s] resume:
[0.014s] rgb_max: 255.0
[0.014s] save: ./work
[0.014s] save_flow: False
[0.014s] schedule_lr_fraction: 10
[0.014s] schedule_lr_frequency: 0
[0.014s] seed: 1
[0.014s] skip_training: False
[0.014s] skip_validation: False
[0.014s] start_epoch: 1
[0.014s] total_epochs: 10000
[0.014s] train_n_batches: -1
[0.014s] training_dataset: MpiSintelFinal
[0.014s] training_dataset_replicates: 1
[0.014s] training_dataset_root: ....../mpi-sintel/training/final/mountain_1
[0.014s] validation_dataset: MpiSintelClean
[0.014s] validation_dataset_replicates: 1
[0.014s] validation_dataset_root: ....../mpi-sintel/training/clean/mountain_1
[0.014s] validation_frequency: 5
[0.014s] validation_n_batches: -1
[0.016s] Operation finished

Source Code
Current Git Hash: b'ac1602a72f0454f65872126b70665a596fae8009'

Initializing Datasets
[0.003s] Operation failed

Traceback (most recent call last):
File "main.py", line 139, in <module>
train_dataset = args.training_dataset_class(args, True, **tools.kwargs_from_args(args, 'training_dataset'))
File "....../flownet2-pytorch/datasets.py", line 112, in __init__
super(MpiSintelFinal, self).__init__(args, is_cropped = is_cropped, root = root, dstype = 'final', replicates = replicates)
File "....../flownet2-pytorch/datasets.py", line 66, in __init__
self.frame_size = frame_utils.read_gen(self.image_list[0][0]).shape
IndexError: list index out of range

The IndexError suggests the dataset loader found no image frames under the given root, leaving image_list empty. Therefore, I posted a further issue on Github.

vid2vid

Preparation

There are 4 scripts under folder scripts to run before testing vid2vid.

➜  vid2vid git:(master) ✗ ll scripts/*.py
-rwxrwxrwx 1 jiapei jiapei 322 Jan 5 22:04 scripts/download_datasets.py
-rwxrwxrwx 1 jiapei jiapei 579 Jan 5 22:04 scripts/download_flownet2.py
-rwxrwxrwx 1 jiapei jiapei 1.3K Jan 5 22:04 scripts/download_gdrive.py
-rwxrwxrwx 1 jiapei jiapei 257 Jan 5 22:04 scripts/download_models_flownet2.py

However, ONLY 3 of the scripts run successfully; scripts/download_flownet2.py fails, as follows:

➜  vid2vid git:(master) python scripts/download_flownet2.py 
Compiling correlation kernels by nvcc...
rm: cannot remove '../_ext': No such file or directory
Traceback (most recent call last):
File "build.py", line 3, in <module>
import torch.utils.ffi
File "~/.local/lib/python3.6/site-packages/torch/utils/ffi/__init__.py", line 1, in <module>
raise ImportError("torch.utils.ffi is deprecated. Please use cpp extensions instead.")
ImportError: torch.utils.ffi is deprecated. Please use cpp extensions instead.
Compiling resample2d kernels by nvcc...
rm: cannot remove 'Resample2d_kernel.o': No such file or directory
rm: cannot remove '../_ext': No such file or directory
In file included from Resample2d_kernel.cu:1:0:
~/.local/lib/python3.6/site-packages/torch/lib/include/THC/THC.h:4:10: fatal error: THC/THCGeneral.h: No such file or directory
#include <THC/THCGeneral.h>
^~~~~~~~~~~~~~~~~~
compilation terminated.
Traceback (most recent call last):
File "build.py", line 3, in <module>
import torch.utils.ffi
File "~/.local/lib/python3.6/site-packages/torch/utils/ffi/__init__.py", line 1, in <module>
raise ImportError("torch.utils.ffi is deprecated. Please use cpp extensions instead.")
ImportError: torch.utils.ffi is deprecated. Please use cpp extensions instead.
Compiling channelnorm kernels by nvcc...
rm: cannot remove 'ChannelNorm_kernel.o': No such file or directory
rm: cannot remove '../_ext': No such file or directory
In file included from ChannelNorm_kernel.cu:1:0:
~/.local/lib/python3.6/site-packages/torch/lib/include/THC/THC.h:4:10: fatal error: THC/THCGeneral.h: No such file or directory
#include <THC/THCGeneral.h>
^~~~~~~~~~~~~~~~~~
compilation terminated.
Traceback (most recent call last):
File "build.py", line 3, in <module>
import torch.utils.ffi
File "~/.local/lib/python3.6/site-packages/torch/utils/ffi/__init__.py", line 1, in <module>
raise ImportError("torch.utils.ffi is deprecated. Please use cpp extensions instead.")
ImportError: torch.utils.ffi is deprecated. Please use cpp extensions instead.

Testing is ALSO Buggy

If we run the Testing step from the vid2vid homepage, it gives the following ERROR:

➜  vid2vid git:(master) python test.py --name label2city_2048 --label_nc 35 --loadSize 2048 --n_scales_spatial 3 --use_instance --fg --use_single_G
------------ Options -------------
add_face_disc: False
aspect_ratio: 1.0
basic_point_only: False
batchSize: 1
checkpoints_dir: ./checkpoints
dataroot: datasets/Cityscapes/
dataset_mode: temporal
debug: False
densepose_only: False
display_id: 0
display_winsize: 512
feat_num: 3
fg: True
fg_labels: [26]
fineSize: 512
gpu_ids: [0]
how_many: 300
input_nc: 3
isTrain: False
label_feat: False
label_nc: 35
loadSize: 2048
load_features: False
load_pretrain:
max_dataset_size: inf
model: vid2vid
nThreads: 2
n_blocks: 9
n_blocks_local: 3
n_downsample_E: 3
n_downsample_G: 3
n_frames_G: 3
n_gpus_gen: 1
n_local_enhancers: 1
n_scales_spatial: 3
name: label2city_2048
ndf: 64
nef: 32
netE: simple
netG: composite
ngf: 128
no_canny_edge: False
no_dist_map: False
no_first_img: False
no_flip: False
no_flow: False
norm: batch
ntest: inf
openpose_only: False
output_nc: 3
phase: test
random_drop_prob: 0.05
random_scale_points: False
remove_face_labels: False
resize_or_crop: scaleWidth
results_dir: ./results/
serial_batches: False
start_frame: 0
tf_log: False
use_instance: True
use_real_img: False
use_single_G: True
which_epoch: latest
-------------- End ----------------
CustomDatasetDataLoader
dataset [TestDataset] was created
vid2vid
Traceback (most recent call last):
File "test.py", line 25, in <module>
model = create_model(opt)
File "....../vid2vid/models/models.py", line 7, in create_model
from .vid2vid_model_G import Vid2VidModelG
File "....../vid2vid/models/vid2vid_model_G.py", line 13, in <module>
from . import networks
File "....../vid2vid/models/networks.py", line 12, in <module>
from .flownet2_pytorch.networks.resample2d_package.resample2d import Resample2d
File "....../vid2vid/models/flownet2_pytorch/networks/resample2d_package/resample2d.py", line 2, in <module>
from .functions.resample2d import Resample2dFunction
File "....../vid2vid/models/flownet2_pytorch/networks/resample2d_package/functions/resample2d.py", line 3, in <module>
from .._ext import resample2d
ModuleNotFoundError: No module named 'models.flownet2_pytorch.networks.resample2d_package._ext'

This is still annoying me. A detailed github issue has been posted today. Please give me a hand. Thanks…

With the most inspiring speech by Arnold Schwarzenegger, 2019 arrived… I’ve got something in my mind as well…

Schwarzenegger's Speech

Everybody has a dream/goal. NEVER EVER blow it out… Here comes the preface of my PhD thesis… It's been 10 years already… This is NOT ONLY my attitude towards science, BUT ALSO towards those so-called professors (叫兽) and specialists (砖家).

Preface of My PhD Thesis

So, today, I'm NOT going to repeat any clichés, BUT am happily encouraged by Arnold Schwarzenegger's speech. We will definitely have a fruitful 2019. Let's have some pizza.

Pizza

Merry Christmas everybody. Let’s take a look at the Christmas Lights around Lafarge Lake in Coquitlam FIRST.

[Photo gallery: Lafarge Lake Christmas Lights 01 to 06]

Now, it's time to test PyDicom.

Get Started

Copy and paste the code from Medical Image Analysis with Deep Learning, with trivial modifications, as follows:

import cv2
import numpy as np
import matplotlib.pyplot as plt
import pydicom as pdicom
import os
import glob
import pandas as pd
import scipy.ndimage
from skimage import measure, morphology
from mpl_toolkits.mplot3d.art3d import Poly3DCollection


# Prepare dicom images
INPUT_FOLDER = './'
patients = os.listdir(INPUT_FOLDER)
patients.sort()


# Collect all dicom images
lstFilesDCM = [] # create an empty list
def load_scan2(path):
    for dirName, subdirList, fileList in os.walk(path):
        for filename in fileList:
            if ".dcm" in filename.lower():
                lstFilesDCM.append(os.path.join(dirName, filename))
    return lstFilesDCM

first_patient = load_scan2(INPUT_FOLDER)


# Get ref file
print (lstFilesDCM[0])
RefDs = pdicom.read_file(lstFilesDCM[0])

# Load dimensions based on the number of rows, columns and slices (along z axis)
ConstPixelDims = (int(RefDs.Rows), int(RefDs.Columns), len(lstFilesDCM))

# Load spacing values (in mm)
ConstPixelSpacing = (float(RefDs.PixelSpacing[0]), float(RefDs.PixelSpacing[1]), float(RefDs.SliceThickness))


x = np.arange(0.0, (ConstPixelDims[0]+1)*ConstPixelSpacing[0], ConstPixelSpacing[0])
y = np.arange(0.0, (ConstPixelDims[1]+1)*ConstPixelSpacing[1], ConstPixelSpacing[1])
z = np.arange(0.0, (ConstPixelDims[2]+1)*ConstPixelSpacing[2], ConstPixelSpacing[2])


# The array is sized based on 'ConstPixelDims'

ArrayDicom = np.zeros(ConstPixelDims, dtype=RefDs.pixel_array.dtype)

# loop through all the DICOM files
for filenameDCM in lstFilesDCM:
    # read the file
    ds = pdicom.read_file(filenameDCM)
    # store the raw image data
    ArrayDicom[:, :, lstFilesDCM.index(filenameDCM)] = ds.pixel_array


plt.figure(dpi=1600)
plt.axes().set_aspect('equal', 'datalim')
plt.set_cmap(plt.gray())
plt.pcolormesh(y, x, np.flipud(ArrayDicom[:, :, 80]))
plt.show()
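Note that measure, morphology, and Poly3DCollection are imported above but never used. A minimal sketch of the 3D isosurface rendering they hint at (my own guess; on older scikit-image versions the function is measure.marching_cubes_lewiner):

# Hypothetical continuation: render an isosurface of the DICOM volume,
# using the measure / Poly3DCollection imports that are unused above.
from mpl_toolkits.mplot3d import Axes3D  # noqa: enables the 3d projection

verts, faces, normals, values = measure.marching_cubes(
    ArrayDicom, level=ArrayDicom.mean())
fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(111, projection='3d')
ax.add_collection3d(Poly3DCollection(verts[faces], alpha=0.7))
ax.set_xlim(0, ArrayDicom.shape[0])
ax.set_ylim(0, ArrayDicom.shape[1])
ax.set_zlim(0, ArrayDicom.shape[2])
plt.show()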

Display DICOM

I had an MRI scan in 2015, at SIAT, for research purposes. The scan resolution is NOT very high. Anyway, a bunch of DICOM images can be viewed as follows:

DICOM images

MRI - me

Merry Christmas buddy!

Christmas Lights in Surrey

It's been quite a while since I wrote anything. Today, we are going to introduce PyFlux for time series analysis. Canonical models are directly adopted from the PyFlux documentation and tested in this blog.

Get Started

Copy and paste that piece of code from PyFlux Documents with trivial modifications as follows:

import pandas as pd
import numpy as np
from pandas_datareader.data import DataReader
from datetime import datetime
import pyflux as pf
import matplotlib.pyplot as plt


a = DataReader('JPM', 'yahoo', datetime(2006,6,1), datetime(2016,6,1))
a_returns = pd.DataFrame(np.diff(a['Adj Close'].values))
a_returns.index = a.index.values[1:a.index.values.shape[0]]
a_returns.columns = ["JPM Returns"]

print(a_returns.head())

plt.figure(figsize=(15, 5))
plt.ylabel("Returns")
plt.plot(a_returns)
plt.show()

pf.acf_plot(a_returns.values.T[0])
pf.acf_plot(np.square(a_returns.values.T[0]))

After running the above piece of code, we’ll get the following time series loaded:

            JPM Returns
2006-06-02 0.167995
2006-06-05 -0.613506
2006-06-06 -0.430914
2006-06-07 -0.094942
2006-06-08 0.073048

and the following resultant figures:

Time Series

Autocorrelation

Squared Autocorrelation

Canonical Models

ARIMA

We can load and test the ARIMA model using the following piece of code:

my_model = pf.ARIMA(data=a_returns, ar=4, ma=4, family=pf.Normal())
print(my_model.latent_variables)

result = my_model.fit("MLE")
result.summary()

my_model.plot_z(figsize=(15,5))
my_model.plot_fit(figsize=(15,10))
my_model.plot_predict_is(h=50, figsize=(15,5))
my_model.plot_predict(h=20,past_values=20,figsize=(15,5))

ARIMA’s latent variables are printed as:

Index    Latent Variable           Prior           Prior Hyperparameters     V.I. Dist  Transform
======== ========================= =============== ========================= ========== ==========
0        Constant                  Normal          mu0: 0, sigma0: 3         Normal     None
1        AR(1)                     Normal          mu0: 0, sigma0: 0.5       Normal     None
2        AR(2)                     Normal          mu0: 0, sigma0: 0.5       Normal     None
3        AR(3)                     Normal          mu0: 0, sigma0: 0.5       Normal     None
4        AR(4)                     Normal          mu0: 0, sigma0: 0.5       Normal     None
5        MA(1)                     Normal          mu0: 0, sigma0: 0.5       Normal     None
6        MA(2)                     Normal          mu0: 0, sigma0: 0.5       Normal     None
7        MA(3)                     Normal          mu0: 0, sigma0: 0.5       Normal     None
8        MA(4)                     Normal          mu0: 0, sigma0: 0.5       Normal     None
9        Normal Scale              Flat            n/a (non-informative)     Normal     exp

And the fitting result is summarized as:

Normal ARIMA(4,0,4)
======================================================= ==================================================
Dependent Variable: JPM Returns                         Method: MLE
Start Date: 2006-06-08 00:00:00                         Log Likelihood: -3106.9508
End Date: 2016-06-02 00:00:00                           AIC: 6233.9016
Number of observations: 2514                            BIC: 6292.1979
==========================================================================================================
Latent Variable                          Estimate   Std Error  z        P>|z|    95% C.I.
======================================== ========== ========== ======== ======== =========================
Constant                                 0.0126     0.0136     0.9273   0.3538   (-0.0141 | 0.0394)
AR(1)                                    0.0596     0.198      0.3008   0.7635   (-0.3284 | 0.4475)
AR(2)                                    -0.3591    0.1858     -1.9331  0.0532   (-0.7231 | 0.005)
AR(3)                                    -0.2205    0.2687     -0.8208  0.4118   (-0.7471 | 0.3061)
AR(4)                                    0.5272     0.1373     3.8406   0.0001   (0.2581 | 0.7962)
MA(1)                                    -0.1667    0.1995     -0.8354  0.4035   (-0.5578 | 0.2244)
MA(2)                                    0.3234     0.202      1.6008   0.1094   (-0.0726 | 0.7193)
MA(3)                                    0.1603     0.2396     0.6692   0.5033   (-0.3093 | 0.63)
MA(4)                                    -0.5817    0.1618     -3.5942  0.0003   (-0.8989 | -0.2645)
Normal Scale                             0.8331
==========================================================================================================

And 4 images are produced, as follows:

ARIMA Latent Variable Plot

ARIMA Model vs. Data

ARIMA Forecast vs. Data

ARIMA Forecast

GARCH

We can load and test the GARCH model using the following piece of code:

my_model = pf.GARCH(p=1, q=1, data=a_returns)
print(my_model.latent_variables)

my_model.adjust_prior(1, pf.TruncatedNormal(0.01, 0.5, lower=0.0, upper=1.0))
my_model.adjust_prior(2, pf.TruncatedNormal(0.97, 0.5, lower=0.0, upper=1.0))

result = my_model.fit('M-H', nsims=20000)
result.summary()

my_model.plot_z([1,2])
my_model.plot_fit(figsize=(15,5))
my_model.plot_sample(nsims=10, figsize=(15,7))

from scipy.stats import kurtosis
my_model.plot_ppc(T=kurtosis)
my_model.plot_predict(h=30, figsize=(15,5))

GARCH’s latent variables are printed as:

Index    Latent Variable           Prior           Prior Hyperparameters     V.I. Dist  Transform
======== ========================= =============== ========================= ========== ==========
0        Vol Constant              Normal          mu0: 0, sigma0: 3         Normal     exp
1        q(1)                      Normal          mu0: 0, sigma0: 0.5       Normal     logit
2        p(1)                      Normal          mu0: 0, sigma0: 0.5       Normal     logit
3        Returns Constant          Normal          mu0: 0, sigma0: 3         Normal     None

And the fitting result is summarized as:

~/.local/lib/python3.6/site-packages/numdifftools/limits.py:126: UserWarning: All-NaN slice encountered
warnings.warn(str(msg))
Acceptance rate of Metropolis-Hastings is 0.000125
Acceptance rate of Metropolis-Hastings is 0.00075
Acceptance rate of Metropolis-Hastings is 0.105525
Acceptance rate of Metropolis-Hastings is 0.13335
Acceptance rate of Metropolis-Hastings is 0.1907
Acceptance rate of Metropolis-Hastings is 0.232
Acceptance rate of Metropolis-Hastings is 0.299

Tuning complete! Now sampling.
Acceptance rate of Metropolis-Hastings is 0.36655
GARCH(1,1)
======================================================= ==================================================
Dependent Variable: JPM Returns                         Method: Metropolis Hastings
Start Date: 2006-06-05 00:00:00                         Unnormalized Log Posterior: -2671.5492
End Date: 2016-06-02 00:00:00                           AIC: 5351.717896880396
Number of observations: 2517                            BIC: 5375.041188860938
==========================================================================================================
Latent Variable                          Median             Mean               95% Credibility Interval
======================================== ================== ================== =========================
Vol Constant                             0.0059             0.0057             (0.004 | 0.0076)
q(1)                                     0.0721             0.0728             (0.0556 | 0.0921)
p(1)                                     0.9202             0.9199             (0.9013 | 0.9373)
Returns Constant                         0.0311             0.0311             (0.0109 | 0.0511)
==========================================================================================================

And 5 images are produced, as follows:

GARCH Latent Variable Plot

GARCH Volatility

GARCH Volatility Sampling

GARCH Posterior Predictive

GARCH Forecast

GAS

VAR
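As placeholders for these two models, here is a minimal, untested sketch following the PyFlux documentation (the pf.GAS and pf.VAR constructors are assumptions on my side; note VAR is meant for several series, so on a single column it degenerates to an AR model):

# Untested sketch following the PyFlux docs: GAS and VAR on the same returns.
gas_model = pf.GAS(ar=1, sc=1, data=a_returns, family=pf.Normal())
gas_result = gas_model.fit()
gas_result.summary()
gas_model.plot_fit(figsize=(15, 5))

var_model = pf.VAR(data=a_returns, lags=4)  # degenerate with one series
var_result = var_model.fit()
var_result.summary()
var_model.plot_predict(h=20, figsize=(15, 5))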

This blog just solves some bugs encountered while using MySQL as a server, assuming you've already installed mysql-server, mysql-server-5.7, and mysql-server-core-5.7.

1. Errors

1.1 Failed to start mysql.service: Unit mysql.service not found.

➜  ~ sudo service mysqld start
[sudo] password for jiapei:
Failed to start mysqld.service: Unit mysqld.service not found.
➜ ~ /etc/init.d/mysql stop
[ ok ] Stopping mysql (via systemctl): mysql.service.
➜ ~ /etc/init.d/mysql start
[ ok ] Starting mysql (via systemctl): mysql.service.

Conclusion: use /etc/init.d/mysql start instead of service mysqld start.

1.2 ERROR 1045 (28000): Access denied for user ‘root‘@’localhost’ (using password: YES)

➜  ~ mysql -u root -p
Enter password:
ERROR 1698 (28000): Access denied for user 'root'@'localhost'

We FIRST list all existing MySQL processes and kill them all.

➜  ~ ps aux | grep mysql
...
➜ ~ sudo kill PID

By now, there should be NO MySQL process running.

1.3 ERROR 2002 (HY000): Can’t connect to local MySQL server through socket ‘/var/run/mysqld/mysqld.sock’

Since there is NO MySQL process running, of course we cannot connect to MySQL server.

➜  ~ mysql -u root -p
Enter password:
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
➜ ~ /etc/init.d/mysql start
[....] Starting mysql (via systemctl): mysql.serviceJob for mysql.service failed because the control process exited with error code.
See "systemctl status mysql.service" and "journalctl -xe" for details.
failed!

Then, we have to reinstall these 3 packages: mysql-server, mysql-server-5.7, and mysql-server-core-5.7.

➜  ~ sudo apt remove mysql-server mysql-server-5.7 mysql-server-core-5.7
...
➜ ~ sudo apt install mysql-server mysql-server-5.7 mysql-server-core-5.7
...

/var/run/mysqld/mysqld.sock is NOW back, and MySQL seems to run automatically.

➜  ~ ls -ls /var/run/mysqld/mysqld.sock
0 srwxrwxrwx 1 mysql mysql 0 Nov 30 03:08 /var/run/mysqld/mysqld.sock
➜ ~ ps aux | grep mysql
mysql 13785 0.1 0.3 1323300 172680 ? Sl 03:08 0:00 /usr/sbin/mysqld --daemonize --pid-file=/run/mysqld/mysqld.pid
jiapei 14453 0.0 0.0 21556 2560 pts/0 R+ 03:16 0:00 grep --color=auto --exclude-dir=.bzr --exclude-dir=CVS --exclude-dir=.git --exclude-dir=.hg --exclude-dir=.svn mysql

Then, use chown and chmod to set up the owners and permissions suitably as follows.

➜  run pwd
/var/run
➜ run sudo chown mysql:mysql -R mysql
➜ run sudo chmod 755 -R mysql
➜ run ls -lsd mysql
0 drwxr-xr-x 2 mysql mysql 100 Nov 30 02:38 mysqld
➜ run cd ../lib
➜ lib pwd
/var/lib
➜ lib sudo chown mysql:mysql -R mysql
➜ lib sudo chmod 755 -R mysql
➜ lib ls -lsd mysql
4 drwxr-xr-x 6 mysql mysql 4096 Nov 30 02:38 mysql

1.4 ERROR 1698 (28000): Access denied for user ‘root‘@’localhost’

➜  ~ mysql -u root -p
Enter password:
ERROR 1698 (28000): Access denied for user 'root'@'localhost'
➜ ~ sudo mysql -u root
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.7.24-0ubuntu0.18.04.1 (Ubuntu)

Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> USE lvrsql;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> CREATE USER 'YOUR_SYSTEM_USER'@'localhost' IDENTIFIED BY 'YOUR_PASSWORD';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'YOUR_SYSTEM_USER'@'localhost';
mysql> FLUSH PRIVILEGES;
mysql> exit;

➜ ~ service mysql restart
➜ ~

1.5 [ERROR] InnoDB: Unable to lock ./ibdata1 (Additional)

If you meet the above ERROR message, you can simply restart MySQL:

➜  ~ /etc/init.d/mysql restart
[ ok ] Restarting mysql (via systemctl): mysql.service.
➜ ~

2. Fundamental Commands in MySQL
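A minimal sketch of the usual basics, driven from Python via the mysql-connector-python package (an assumption on my side; the same SQL statements work verbatim inside the mysql client, and the credentials are the placeholders from section 1.4):

# Minimal sketch using mysql-connector-python (pip install mysql-connector-python).
import mysql.connector

cnx = mysql.connector.connect(user='YOUR_SYSTEM_USER',
                              password='YOUR_PASSWORD',
                              host='localhost')
cur = cnx.cursor()
cur.execute("SHOW DATABASES")
for (name,) in cur:                 # list every database we can see
    print(name)
cur.execute("CREATE DATABASE IF NOT EXISTS demo")
cur.execute("USE demo")
cur.execute("CREATE TABLE IF NOT EXISTS t (id INT PRIMARY KEY, note VARCHAR(64))")
cur.execute("INSERT INTO t VALUES (%s, %s)", (1, "hello"))  # parameterized insert
cnx.commit()
cnx.close()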

Finally, I'm talking about Geographic Information Systems, namely, GIS. Remote Sensing and Information Engineering was my bachelor's major at Wuhan Technical University of Surveying and Mapping, which is NOW part of Wuhan University. When I was in university, I NEVER thought I was in such an amazing department, studying such an amazing major.

Alright… Let's start today's blog. We're going to play with OpenStreetMap for some fun.

1. ArcGIS with OpenStreetMap

We FIRST test out how ArcGIS displays OpenStreetMap.

1.1 ArcGIS 3.26

A snippet of sample code is provided for ArcGIS Version 3.26, cited as follows:

<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<meta name="viewport" content="initial-scale=1, maximum-scale=1,user-scalable=no" />
<title>OpenStreetMap</title>
<link rel="stylesheet" href="https://js.arcgis.com/3.26/esri/css/esri.css">
<style>
html, body, #esri-map-container {
padding: 0px;
margin: 0px;
height: 100%;
}
</style>
<script src="https://js.arcgis.com/3.26/"></script>
<script>
require([
"esri/map",
"esri/layers/OpenStreetMapLayer",
"dojo/domReady!"
], function (Map, OpenStreetMapLayer){

var map, openStreetMapLayer;

map = new Map("esri-map-container", {
center: [-89.924, 30.036],
zoom: 12
});
openStreetMapLayer = new OpenStreetMapLayer();
map.addLayer(openStreetMapLayer);
});
</script>
</head>

<body>
<div id="esri-map-container"></div>
</body>
</html>

Please click on the following image and have a try.

ArcGIS 3.26

1.2 ArcGIS LATEST

A snippet of sample code is provided for ArcGIS LATEST Version 4.9, cited as follows:

<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<meta name="viewport" content="initial-scale=1,maximum-scale=1,user-scalable=no">
<title>OpenStreetMapLayer - 4.9</title>

<style>
html,
body,
#viewDiv {
padding: 0;
margin: 0;
height: 100%;
width: 100%;
}
</style>

<link rel="stylesheet" href="https://js.arcgis.com/4.9/esri/css/main.css">
<script src="https://js.arcgis.com/4.9/"></script>

<script>
require([
"esri/layers/OpenStreetMapLayer",
"esri/Map",
"esri/views/SceneView"
],
function(
OpenStreetMapLayer,
Map,
SceneView
) {

var map = new Map({
ground: "world-elevation"
});

var view = new SceneView({
map: map,
container: "viewDiv"
});

var osmLayer = new OpenStreetMapLayer();
map.add(osmLayer);
});
</script>
</head>

<body>
<div id="viewDiv"></div>
</body>

</html>

Please click on the following image and have a try.

ArcGIS 4.9

ArcGIS is like the CROWN of GIS. Its SDK/API seems NOT to be fully open source but commercial. Therefore, I'd love to briefly test out some other open-source options.

2. WebGLEarth

Several WebGLEarth examples are given at http://examples.webglearth.com/, and we just try out the very FIRST example, Satellite. The code is cited as:

<!DOCTYPE HTML>
<html>
<head>
<script src="http://www.webglearth.com/v2/api.js"></script>
<script>
function initialize() {
var options = {atmosphere: true, center: [0, 0], zoom: 0};
var earth = new WE.map('earth_div', options);
WE.tileLayer('http://tileserver.maptiler.com/nasa/{z}/{x}/{y}.jpg', {
minZoom: 0,
maxZoom: 5,
attribution: 'NASA'
}).addTo(earth);
}
</script>
<style>
html, body{padding: 0; margin: 0;}
#earth_div{top: 0; right: 0; bottom: 0; left: 0;
background-color: #000; position: absolute !important;}
</style>
<title>WebGL Earth API: Satellite</title>
</head>
<body onload="initialize()">
<div id="earth_div"></div>
</body>
</html>

Please click on the following image and have a try.

WebGLEarth

3. OpenGlobus Based On OpenStreetMap

“OpenGlobus is a javascript library designed to display interactive 3d maps and planets with map tiles, imagery and vector data, markers and 3d objects. It uses the WebGL technology, open source and completely free.” (Cited from the OpenGlobus Github.) Let's just take the very FIRST example, Og - examples: Attribution, as our example. The code is cited as follows:

<html>

<head>
<title>Attribution Example</title>
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<link rel="stylesheet" href="http://www.openglobus.org/og.css" type="text/css" />
<script src="http://www.openglobus.org/og.js"></script>
</head>

<body>
<div id="globus" style="width:100%;height:100%"></div>
<button id="btnOSM" style="position: absolute; left:0; margin:10px;">Set attribution</button>
<script>

document.getElementById("btnOSM").onclick = function () {
states.setAttribution("Hello, WMS default USA population states example!");
};

let osm = new og.layer.XYZ("OpenStreetMap", {
specular: [0.0003, 0.00012, 0.00001],
shininess: 20,
diffuse: [0.89, 0.9, 0.83],
isBaseLayer: true,
url: "http://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png",
visibility: true,
attribution: 'Data @ OpenStreetMap contributors, ODbL'
});

let states = new og.layer.WMS("USA Population", {
extent: [[-128, 24], [-66, 49]],
visibility: true,
isBaseLayer: false,
url: "http://openglobus.org/geoserver/",
layers: "topp:states",
opacity: 1.0,
attribution: 'Hi!',
transparentColor: [1.0, 1.0, 1.0]
});

let globus = new og.Globe({
"target": "globus",
"name": "Earth",
"terrain": new og.terrain.GlobusTerrain(),
"layers": [osm, states]
});

globus.planet.flyExtent(states.getExtent());
</script>
</body>

</html>

Please click on the following image and have a try.

OpenGlobus

4. LeafletJS Based On OpenStreetMap

Several LeafletJS examples are given at https://leafletjs.com/examples.html. Here, we took Leaflet on Mobile as an example (code copied).

<!DOCTYPE html>
<html>
<head>

<title>Mobile tutorial - Leaflet</title>

<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0">

<link rel="shortcut icon" type="image/x-icon" href="docs/images/favicon.ico" />

<link rel="stylesheet" href="https://unpkg.com/leaflet@1.3.4/dist/leaflet.css" integrity="sha512-puBpdR0798OZvTTbP4A8Ix/l+A4dHDD0DGqYW6RQ+9jxkRFclaxxQb/SJAWZfWAkuyeQUytO7+7N4QKrDh+drA==" crossorigin=""/>
<script src="https://unpkg.com/leaflet@1.3.4/dist/leaflet.js" integrity="sha512-nMMmRyTVoLYqjP9hrbed9S+FzjZHW5gY1TWCHA5ckwXZBadntCNs8kEqAWdrb9O7rxbCaA4lKTIWjDXZxflOcA==" crossorigin=""></script>


<style>
html, body {
height: 100%;
margin: 0;
}
#map {
width: 600px;
height: 400px;
}
</style>

<style>body { padding: 0; margin: 0; } #map { height: 100%; width: 100vw; }</style>
</head>
<body>

<div id='map'></div>

<script>
var map = L.map('map').fitWorld();

L.tileLayer('https://api.tiles.mapbox.com/v4/{id}/{z}/{x}/{y}.png?access_token=pk.eyJ1IjoibWFwYm94IiwiYSI6ImNpejY4NXVycTA2emYycXBndHRqcmZ3N3gifQ.rJcFIG214AriISLbB6B5aw', {
maxZoom: 18,
attribution: 'Map data &copy; <a href="https://www.openstreetmap.org/">OpenStreetMap</a> contributors, ' +
'<a href="https://creativecommons.org/licenses/by-sa/2.0/">CC-BY-SA</a>, ' +
'Imagery © <a href="https://www.mapbox.com/">Mapbox</a>',
id: 'mapbox.streets'
}).addTo(map);

function onLocationFound(e) {
var radius = e.accuracy / 2;

L.marker(e.latlng).addTo(map)
.bindPopup("You are within " + radius + " meters from this point").openPopup();

L.circle(e.latlng, radius).addTo(map);
}

function onLocationError(e) {
alert(e.message);
}

map.on('locationfound', onLocationFound);
map.on('locationerror', onLocationError);

map.locate({setView: true, maxZoom: 16});
</script>

</body>
</html>

Please click on the following image and have a try.

LeafletJS

5. Mapbox GL JS Based on OpenStreetMap

The above LeafletJS example clearly shows that this Leaflet map is backed by Mapbox tiles, which are in turn based on OpenStreetMap. “Mapbox is a Live Location Platform.” (Cited from https://www.mapbox.com/.)

<!DOCTYPE html>
<html>
<head>
<meta charset='utf-8' />
<title>Display a map</title>
<meta name='viewport' content='initial-scale=1,maximum-scale=1,user-scalable=no' />
<script src='https://api.tiles.mapbox.com/mapbox-gl-js/v0.51.0/mapbox-gl.js'></script>
<link href='https://api.tiles.mapbox.com/mapbox-gl-js/v0.51.0/mapbox-gl.css' rel='stylesheet' />
<style>
body { margin:0; padding:0; }
#map { position:absolute; top:0; bottom:0; width:100%; }
</style>
</head>
<body>

<div id='map'></div>
<script>
mapboxgl.accessToken = 'pk.eyJ1IjoibG9uZ2VydmlzaW9uIiwiYSI6ImNqb2ZpN2Z5ZjA0bnMzd3FoY29oeTZwNW0ifQ.th9s23R3Njgxewe9rDdXzA';
var map = new mapboxgl.Map({
container: 'map', // container id
style: 'mapbox://styles/mapbox/streets-v9', // stylesheet location
center: [-74.50, 40], // starting position [lng, lat]
zoom: 9 // starting zoom
});
</script>

</body>
</html>

Please click on the following image and have a try.

Mapbox GL

6. OSMBuildings Based On OpenStreetMap

OSMBuildings is “a 3D building geometry viewer based on OpenStreetMap data.” (Cited from https://github.com/OSMBuildings/OSMBuildings). Unfortunately, from OSMBuildings Registration, it is NOT free. However, its demonstrations are amazing - please refer to https://osmbuildings.org/developer/.

OSMBuildings Examples

7. Tiles Upon OpenStreetMap

OpenStreetMap may be covered by different tile sets in order to show different maps, for instance, Google Maps vs. Google satellite imagery. Take a look at the home page of WebGLEarth and you'll notice MapTiler at the bottom-left corner. Generally, there are the following types of common tiles (a tile-addressing sketch follows this list):

  • OpenStreetMap vector tiles
  • Contour lines vector tiles
  • Hillshading raster tiles
  • Landcover vector tiles
  • Satellite raster tiles
  • Terrain - quantized mesh tiles
  • etc.
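Whichever tile set is used, addressing works the same way: the standard XYZ (“slippy map”) scheme maps longitude/latitude to a tile column and row at a given zoom level. A minimal sketch of that math:

# Standard XYZ ("slippy map") tile math: lon/lat -> tile column/row at a zoom.
import math

def deg2tile(lon_deg, lat_deg, zoom):
    lat_rad = math.radians(lat_deg)
    n = 2 ** zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

# Example: which OpenStreetMap tile covers Vancouver at zoom 12?
x, y = deg2tile(-123.1207, 49.2827, 12)
print("https://tile.openstreetmap.org/12/{}/{}.png".format(x, y))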

8. OpenLayers

Besides tiles, there may be several layers (including a tile layer) on top of OpenStreetMap; here comes OpenLayers: “OpenLayers makes it easy to put a dynamic map in any web page. It can display map tiles, vector data and markers loaded from any source.” (cited from https://openlayers.org/)

A simple example is in the OpenLayers QuickStart. The code is cited as:

<!doctype html>
<html lang="en">
<head>
<link rel="stylesheet" href="https://cdn.rawgit.com/openlayers/openlayers.github.io/master/en/v5.3.0/css/ol.css" type="text/css">
<style>
.map {
height: 400px;
width: 100%;
}
</style>
<script src="https://cdn.rawgit.com/openlayers/openlayers.github.io/master/en/v5.3.0/build/ol.js"></script>
<title>OpenLayers example</title>
</head>
<body>
<h2>My Map</h2>
<div id="map" class="map"></div>
<script type="text/javascript">
var map = new ol.Map({
target: 'map',
layers: [
new ol.layer.Tile({
source: new ol.source.OSM()
})
],
view: new ol.View({
center: ol.proj.fromLonLat([37.41, 8.82]),
zoom: 4
})
});
</script>
</body>
</html>

Please click on the following image and have a try.

OpenLayers Quick Start

9. CesiumJS

There is a complete example from Cesium Tutorial, which is cited directly as:

<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<script src="https://cesium.com/downloads/cesiumjs/releases/1.68/Build/Cesium/Cesium.js"></script>
<link href="https://cesium.com/downloads/cesiumjs/releases/1.68/Build/Cesium/Widgets/widgets.css" rel="stylesheet">
</head>
<body>
<div id="cesiumContainer" style="width: 700px; height:400px"></div>
<script>
Cesium.Ion.defaultAccessToken = 'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJqdGkiOiI1ZTcwZTRmMS1jNmZkLTQ1NzctYTU3ZC03Mzc1Y2U2YWU5OTMiLCJpZCI6MjY1MzUsInNjb3BlcyI6WyJhc3IiLCJnYyJdLCJpYXQiOjE1ODc5MDM0MDB9.SAx2MQ2F3mUmH2InfoHWJvUoQBhq1CvGwo_2Z8oNC7Y';
var viewer = new Cesium.Viewer('cesiumContainer', {
terrainProvider: Cesium.createWorldTerrain()
});

var tileset = viewer.scene.primitives.add(
new Cesium.Cesium3DTileset({
url: Cesium.IonResource.fromAssetId(your_asset_id)
})
);
viewer.zoomTo(tileset);
</script>
</body>
</html>

You are welcome to click on the following image and have a try.

A Complete Example of CesiumJS

10. Esri

As the LAST demo of this blog, let’s try something interesting: A Live Demo - Wind-JS.

You are of course welcome to click on the following image and have a try:

Esri Wind of the World

11. MapQuest

MapQuest provides Open Search (Nominatim) API, which relies solely on OpenStreetMap data.

Geocoding based on OpenStreetMap is exactly what I need.
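As a quick illustration, a minimal sketch querying the public Nominatim endpoint directly (assuming the requests package is installed; Nominatim's usage policy requires a meaningful User-Agent and light traffic):

# Minimal sketch: free-text geocoding against the public Nominatim endpoint.
import requests

resp = requests.get(
    "https://nominatim.openstreetmap.org/search",
    params={"q": "Wuhan University", "format": "json", "limit": 1},
    headers={"User-Agent": "longervision-blog-demo"},  # required by usage policy
)
hit = resp.json()[0]   # assumes at least one match came back
print(hit["display_name"], hit["lat"], hit["lon"])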

😍

Several geocoding services are provided as well; please take a look at the bottom of my other website https://longervision.cc/.