Today seems to be the FIRST big day of 2019. So many important packages (including operating systems) have released their NEW updates. Alright, let's take a look at what I've done today.
Updates Today
Ubuntu 18.04.2
➜ ~ uname -r
4.18.0-15-generic
➜ ~ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 18.04.2 LTS
Release:        18.04
Codename:       bionic
➜ ~ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Fri_Feb__8_19:08:17_PST_2019
Cuda compilation tools, release 10.1, V10.1.105
➜ ~ deviceQuery
deviceQuery Starting...
CUDA Device Query (Runtime API) version (CUDART static linking)
Detected 1 CUDA Capable device(s)
Device 0: "GeForce GTX 980M"
  CUDA Driver Version / Runtime Version          10.1 / 10.1
  CUDA Capability Major/Minor version number:    5.2
  Total amount of global memory:                 4035 MBytes (4231331840 bytes)
  (12) Multiprocessors, (128) CUDA Cores/MP:     1536 CUDA Cores
  GPU Max Clock rate:                            1126 MHz (1.13 GHz)
  Memory Clock rate:                             2505 Mhz
  Memory Bus Width:                              256-bit
  L2 Cache Size:                                 2097152 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
  Maximum Layered 1D Texture Size, (num) layers  1D=(16384), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(16384, 16384), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z):  (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z):  (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 2 copy engine(s)
  Run time limit on kernels:                     Yes
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device supports Compute Preemption:            No
  Supports Cooperative Kernel Launch:            No
  Supports MultiDevice Co-op Kernel Launch:      No
  Device PCI Domain ID / Bus ID / location ID:   0 / 1 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 10.1, CUDA Runtime Version = 10.1, NumDevs = 1 Result = PASS
Two things to emphasize before building:
1. To have all the packages of the Ignition Robotics Development Libraries successfully built, we need to build the libraries in a particular sequence.
2. Some required 3rd-party libraries are additionally installed while building the Ignition Robotics Development Libraries, including:
Indoor surveying & mapping is done by a cluster of servers, namely, a small cloud. Currently, we are still dockerizing our own SDK. Three videos are used to briefly explain the three MOST important steps of indoor surveying & mapping, as shown:
After TorchSeg is checked out, we need to modify all the config.py files and ensure every C.pretrained_model variable points to the RIGHT location with the RIGHT file name. In my case, I downloaded all the PyTorch models into the same directory as TorchSeg, so every C.pretrained_model points into that directory; an example is sketched below.
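A minimal sketch of the edit (the path and checkpoint file name below are placeholders, not necessarily the exact names TorchSeg expects; point them at whichever backbone checkpoint you actually downloaded):

# in each model's config.py -- placeholder path and file name
C.pretrained_model = "/path/to/TorchSeg/resnet101_v1c.pth"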
We also need to modify every C.dataset_path variable and make sure we are using the RIGHT dataset. In fact, ONLY two datasets are directly adopted in the originally checked-out TorchSeg code; a hypothetical example of the setting is sketched below.
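Again a placeholder only (Cityscapes is one of the datasets TorchSeg ships configs for; the directory is whatever you use locally):

# in each model's config.py -- placeholder dataset location
C.dataset_path = "/path/to/datasets/cityscapes"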
➜ ~ pip show prophet
Name: prophet
Version: 0.1.1
Summary: Microframework for analyzing financial markets.
Home-page: http://prophet.michaelsu.io/
Author: Michael Su
Author-email: mdasu1@gmail.com
License: BSD
Location: /home/jiapei/.local/lib/python3.6/site-packages
Requires: six, pytz, pandas
Required-by:
➜ ~ pip show pystan
Name: pystan
Version: 2.18.1.0
Summary: Python interface to Stan, a package for Bayesian inference
Home-page: https://github.com/stan-dev/pystan
Author: None
Author-email: None
License: GPLv3
Location: /home/jiapei/.local/lib/python3.6/site-packages
Requires: Cython, numpy
Required-by: fbprophet
Download the Time Series Data
We just need to download the CSV file to some directory:
➜ facebookprophet curl -O https://assets.digitalocean.com/articles/eng_python/prophet/AirPassengers.csv
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1748  100  1748    0     0   2281      0 --:--:-- --:--:-- --:--:--  2279
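For reference, here is a minimal sketch of feeding that CSV into fbprophet (the Facebook package pystan reports above, not the unrelated prophet package). The column names Month and AirPassengers are an assumption based on the DigitalOcean tutorial the file comes from; rename accordingly if your copy differs.

import pandas as pd
from fbprophet import Prophet

# column names assumed: Month, AirPassengers (as in the DigitalOcean tutorial)
df = pd.read_csv('AirPassengers.csv')
df['Month'] = pd.to_datetime(df['Month'])
# Prophet expects exactly two columns named 'ds' and 'y'
df = df.rename(columns={'Month': 'ds', 'AirPassengers': 'y'})

m = Prophet()
m.fit(df)

# forecast 24 months past the end of the observed data
future = m.make_future_dataframe(periods=24, freq='MS')
forecast = m.predict(future)
print(forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail())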
AttributeError: type object 'Path' has no attribute 'home'
This error message showed up while I was trying to import matplotlib.pyplot as plt today. I've got NO idea what has happened to my Python, but it seems this issue has already been met and solved in some public posts.
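One common cause, and only a guess in my case, is an old third-party pathlib backport in site-packages shadowing the standard-library module, whose Path class has had home() since Python 3.5. A quick diagnostic sketch:

# check which pathlib is actually being imported; the assumption is that
# a stale backport in site-packages is masking the standard library
import pathlib
print(pathlib.__file__)               # should live under .../lib/python3.6/, not site-packages
print(hasattr(pathlib.Path, 'home'))  # False would explain the AttributeError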
After having successfully installed the current PyTorch version, 1.1, I still failed to import torch. Please refer to the following ERROR.
➜ ~ python
Python 3.6.7 (default, Oct 22 2018, 11:32:17)
[GCC 8.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "~/.local/lib/python3.6/site-packages/torch/__init__.py", line 84, in <module>
    from torch._C import *
ImportError: ~/.local/lib/python3.6/site-packages/torch/lib/libcaffe2.so: undefined symbol: _ZTIN3c1010TensorImplE
>>> import caffe2
>>> caffe2.__version__
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: module 'caffe2' has no attribute '__version__'
>>> caffe2.__file__
'~/.local/lib/python3.6/site-packages/caffe2/__init__.py'
In order to have PyTorch successfully imported, I've got to remove the manually installed PyTorch v1.1 and have it installed by pip instead. That gives PyTorch v1.0, which seems NOT to come with caffe2, and of course should NOT be compatible with the installed caffe2 that was built along with PyTorch v1.1. Can anybody help to solve this issue? Please also refer to the GitHub issue.
Solution
Remove anything/everything related to your previously installed PyTorch. In my case, the file /usr/local/lib/libc10.so had to be removed.
To analyze which files are possibly related to the package in question, we can use the command ldd to list the shared libraries a binary links against.
It's not hard to have FlowNet2-Pytorch installed with a single command:
➜ flownet2-pytorch git:(master) ✗ ./install.sh
After installation, there will be 3 packages installed under folder
~/.local/lib/python3.6/site-packages:
correlation-cuda
resample2d-cuda
channelnorm-cuda
➜ site-packages ls -lsd correlation*
4 drwxrwxr-x 4 jiapei jiapei 4096 Jan 7 00:07 correlation_cuda-0.0.0-py3.6-linux-x86_64.egg
➜ site-packages ls -lsd channelnorm*
4 drwxrwxr-x 4 jiapei jiapei 4096 Jan 7 00:07 channelnorm_cuda-0.0.0-py3.6-linux-x86_64.egg
➜ site-packages ls -lsd resample2d*
4 drwxrwxr-x 4 jiapei jiapei 4096 Jan 7 00:07 resample2d_cuda-0.0.0-py3.6-linux-x86_64.egg
➜ site-packages ls -lsd flownet2*
zsh: no matches found: flownet2*
➜ site-packages pwd
~/.local/lib/python3.6/site-packages
That is to say: you should NEVER import flownet2, nor correlation, nor channelnorm, nor resample2d, but rather
correlation_cuda
resample2d_cuda
channelnorm_cuda
Current Bug
Here comes the ERROR:
➜ ~ python
Python 3.6.7 (default, Oct 22 2018, 11:32:17)
[GCC 8.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import correlation_cuda
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: ~/.local/lib/python3.6/site-packages/correlation_cuda-0.0.0-py3.6-linux-x86_64.egg/correlation_cuda.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _ZN2at19UndefinedTensorImpl10_singletonE
>>> import channelnorm_cuda
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: ~/.local/lib/python3.6/site-packages/channelnorm_cuda-0.0.0-py3.6-linux-x86_64.egg/channelnorm_cuda.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _ZN2at19UndefinedTensorImpl10_singletonE
>>> import resample2d_cuda
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: ~/.local/lib/python3.6/site-packages/resample2d_cuda-0.0.0-py3.6-linux-x86_64.egg/resample2d_cuda.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _ZN2at19UndefinedTensorImpl10_singletonE
>>>
I've already posted an issue on GitHub. Has anybody solved this problem?
Solution
import torch FIRST.
➜ ~ python
Python 3.6.7 (default, Oct 22 2018, 11:32:17)
[GCC 8.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> import correlation_cuda
>>> import resample2d_cuda
>>> import channelnorm_cuda
However, running flownet2-pytorch's main.py still fails while initializing the datasets:

Source Code
Current Git Hash: b'ac1602a72f0454f65872126b70665a596fae8009'

Initializing Datasets
[0.003s] Operation failed

Traceback (most recent call last):
  File "main.py", line 139, in <module>
    train_dataset = args.training_dataset_class(args, True, **tools.kwargs_from_args(args, 'training_dataset'))
  File "....../flownet2-pytorch/datasets.py", line 112, in __init__
    super(MpiSintelFinal, self).__init__(args, is_cropped = is_cropped, root = root, dstype = 'final', replicates = replicates)
  File "....../flownet2-pytorch/datasets.py", line 66, in __init__
    self.frame_size = frame_utils.read_gen(self.image_list[0][0]).shape
IndexError: list index out of range
Running vid2vid's scripts/download_flownet2.py fails as well, because torch.utils.ffi has been deprecated:

➜ vid2vid git:(master) python scripts/download_flownet2.py
Compiling correlation kernels by nvcc...
rm: cannot remove '../_ext': No such file or directory
Traceback (most recent call last):
  File "build.py", line 3, in <module>
    import torch.utils.ffi
  File "~/.local/lib/python3.6/site-packages/torch/utils/ffi/__init__.py", line 1, in <module>
    raise ImportError("torch.utils.ffi is deprecated. Please use cpp extensions instead.")
ImportError: torch.utils.ffi is deprecated. Please use cpp extensions instead.
Compiling resample2d kernels by nvcc...
rm: cannot remove 'Resample2d_kernel.o': No such file or directory
rm: cannot remove '../_ext': No such file or directory
In file included from Resample2d_kernel.cu:1:0:
~/.local/lib/python3.6/site-packages/torch/lib/include/THC/THC.h:4:10: fatal error: THC/THCGeneral.h: No such file or directory
 #include <THC/THCGeneral.h>
          ^~~~~~~~~~~~~~~~~~
compilation terminated.
Traceback (most recent call last):
  File "build.py", line 3, in <module>
    import torch.utils.ffi
  File "~/.local/lib/python3.6/site-packages/torch/utils/ffi/__init__.py", line 1, in <module>
    raise ImportError("torch.utils.ffi is deprecated. Please use cpp extensions instead.")
ImportError: torch.utils.ffi is deprecated. Please use cpp extensions instead.
Compiling channelnorm kernels by nvcc...
rm: cannot remove 'ChannelNorm_kernel.o': No such file or directory
rm: cannot remove '../_ext': No such file or directory
In file included from ChannelNorm_kernel.cu:1:0:
~/.local/lib/python3.6/site-packages/torch/lib/include/THC/THC.h:4:10: fatal error: THC/THCGeneral.h: No such file or directory
 #include <THC/THCGeneral.h>
          ^~~~~~~~~~~~~~~~~~
compilation terminated.
Traceback (most recent call last):
  File "build.py", line 3, in <module>
    import torch.utils.ffi
  File "~/.local/lib/python3.6/site-packages/torch/utils/ffi/__init__.py", line 1, in <module>
    raise ImportError("torch.utils.ffi is deprecated. Please use cpp extensions instead.")
ImportError: torch.utils.ffi is deprecated. Please use cpp extensions instead.
With the most inspiring speech by Arnold Schwarzenegger, 2019 has arrived... I've got something on my mind as well... Everybody has a dream/goal. NEVER EVER blow it out... Here comes the preface of my PhD thesis... It's been 10 years already... This is NOT ONLY my attitude towards science, BUT ALSO towards those so-called professors (叫兽) and specialists (砖家).
So, today, I'm NOT going to repeat any clichés, BUT am happily encouraged by Arnold Schwarzenegger's speech. We will definitely have a fruitful 2019. Let's have some pizza.
import cv2
import numpy as np
import matplotlib.pyplot as plt
import pydicom as pdicom
import os
import glob
import pandas as pd
import scipy.ndimage
from skimage import measure, morphology
from mpl_toolkits.mplot3d.art3d import Poly3DCollection
# Collect all DICOM images
lstFilesDCM = []  # create an empty list

def load_scan2(path):
    for dirName, subdirList, fileList in os.walk(path):
        for filename in fileList:
            if ".dcm" in filename.lower():
                lstFilesDCM.append(os.path.join(dirName, filename))
    return lstFilesDCM
first_patient = load_scan2(INPUT_FOLDER)
# Get ref file
print(lstFilesDCM[0])
RefDs = pdicom.read_file(lstFilesDCM[0])
# Load dimensions based on the number of rows, columns and slices (along z axis)
ConstPixelDims = (int(RefDs.Rows), int(RefDs.Columns), len(lstFilesDCM))

# Load the spacing values in mm (needed by the np.arange calls below);
# PixelSpacing and SliceThickness are standard DICOM attributes
ConstPixelSpacing = (float(RefDs.PixelSpacing[0]), float(RefDs.PixelSpacing[1]), float(RefDs.SliceThickness))
x = np.arange(0.0, (ConstPixelDims[0]+1)*ConstPixelSpacing[0], ConstPixelSpacing[0])
y = np.arange(0.0, (ConstPixelDims[1]+1)*ConstPixelSpacing[1], ConstPixelSpacing[1])
z = np.arange(0.0, (ConstPixelDims[2]+1)*ConstPixelSpacing[2], ConstPixelSpacing[2])
# The array is sized like the reference image, with one slice per DICOM file
ArrayDicom = np.zeros(ConstPixelDims, dtype=RefDs.pixel_array.dtype)

# loop through all the DICOM files
for filenameDCM in lstFilesDCM:
    # read the file
    ds = pdicom.read_file(filenameDCM)
    # store the raw image data
    ArrayDicom[:, :, lstFilesDCM.index(filenameDCM)] = ds.pixel_array
I did an MRI scan in 2015, at SIAT, for research purposes. The scan resolution is NOT very high. Anyway, a bunch of DICOM images can be viewed as follows:
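A minimal sketch of displaying one slice with matplotlib, assuming ArrayDicom and ConstPixelDims have been filled by the code above (this is my own quick viewer, not necessarily how the images here were rendered):

# show the middle slice of the loaded volume in grayscale
mid_slice = ConstPixelDims[2] // 2
plt.figure(figsize=(6, 6))
plt.imshow(np.rot90(ArrayDicom[:, :, mid_slice]), cmap='gray')
plt.axis('off')
plt.show()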
It's been quite a while since I wrote anything. Today, we are going to introduce PyFlux for time series analysis. Canonical models are directly adopted from the PyFlux documentation and tested in this blog.
Get Started
Copy and paste the code from the PyFlux documentation with trivial modifications, as follows:
import pandas as pd
import numpy as np
from pandas_datareader.data import DataReader
from datetime import datetime
import pyflux as pf
import matplotlib.pyplot as plt
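The latent-variable table below is what PyFlux prints for an ARIMA(4,4) model. A sketch of the kind of code that produces it, adapted from the ARIMA example in the PyFlux documentation (the sunspot dataset and its URL come from that example and are an assumption here):

# imports repeated so the snippet runs on its own
import pandas as pd
import pyflux as pf

# yearly sunspot data, as used in the PyFlux ARIMA example
data = pd.read_csv('https://vincentarelbundock.github.io/Rdatasets/csv/datasets/sunspot.year.csv')
data.index = data['time'].values

# ARIMA(4,4) with a Normal family; printing the latent variables
# yields a table like the one below
model = pf.ARIMA(data=data, ar=4, ma=4, integ=0, target='sunspot.year', family=pf.Normal())
print(model.latent_variables)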
Index    Latent Variable           Prior           Prior Hyperparameters     V.I. Dist  Transform
======== ========================= =============== ========================= ========== ==========
0        Constant                  Normal          mu0: 0, sigma0: 3         Normal     None
1        AR(1)                     Normal          mu0: 0, sigma0: 0.5       Normal     None
2        AR(2)                     Normal          mu0: 0, sigma0: 0.5       Normal     None
3        AR(3)                     Normal          mu0: 0, sigma0: 0.5       Normal     None
4        AR(4)                     Normal          mu0: 0, sigma0: 0.5       Normal     None
5        MA(1)                     Normal          mu0: 0, sigma0: 0.5       Normal     None
6        MA(2)                     Normal          mu0: 0, sigma0: 0.5       Normal     None
7        MA(3)                     Normal          mu0: 0, sigma0: 0.5       Normal     None
8        MA(4)                     Normal          mu0: 0, sigma0: 0.5       Normal     None
9        Normal Scale              Flat            n/a (non-informative)     Normal     exp
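The GARCH(1,1) output that follows matches the GARCH example in the PyFlux documentation: daily JPM log returns fitted with Metropolis-Hastings. A sketch under that assumption (note the Yahoo Finance endpoint used by pandas-datareader may no longer work):

# imports repeated so the snippet runs on its own
import numpy as np
import pandas as pd
from pandas_datareader.data import DataReader
from datetime import datetime
import pyflux as pf

# daily JPM prices -> log returns, as in the PyFlux GARCH example
jpm = DataReader('JPM', 'yahoo', datetime(2006, 6, 1), datetime(2016, 6, 1))
returns = pd.DataFrame(np.diff(np.log(jpm['Adj Close'].values)))
returns.index = jpm.index.values[1:]
returns.columns = ['JPM Returns']

# GARCH(1,1) fitted with Metropolis-Hastings ('M-H'), producing the
# acceptance-rate messages and the summary table shown below
garch = pf.GARCH(returns, p=1, q=1)
result = garch.fit('M-H')
result.summary()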
~/.local/lib/python3.6/site-packages/numdifftools/limits.py:126: UserWarning: All-NaN slice encountered
  warnings.warn(str(msg))
Acceptance rate of Metropolis-Hastings is 0.000125
Acceptance rate of Metropolis-Hastings is 0.00075
Acceptance rate of Metropolis-Hastings is 0.105525
Acceptance rate of Metropolis-Hastings is 0.13335
Acceptance rate of Metropolis-Hastings is 0.1907
Acceptance rate of Metropolis-Hastings is 0.232
Acceptance rate of Metropolis-Hastings is 0.299
Tuning complete! Now sampling.
Acceptance rate of Metropolis-Hastings is 0.36655
GARCH(1,1)
======================================================= ==================================================
Dependent Variable: JPM Returns                         Method: Metropolis Hastings
Start Date: 2006-06-05 00:00:00                         Unnormalized Log Posterior: -2671.5492
End Date: 2016-06-02 00:00:00                           AIC: 5351.717896880396
Number of observations: 2517                            BIC: 5375.041188860938
==========================================================================================================
Latent Variable                          Median             Mean               95% Credibility Interval
======================================== ================== ================== =========================
Vol Constant                             0.0059             0.0057             (0.004 | 0.0076)
q(1)                                     0.0721             0.0728             (0.0556 | 0.0921)
p(1)                                     0.9202             0.9199             (0.9013 | 0.9373)
Returns Constant                         0.0311             0.0311             (0.0109 | 0.0511)
==========================================================================================================