Kaggle Competition - PetFinder.my Adoption Prediction

Happy New Year, everybody. 2019 has arrived, so let's have some Kaggle fun. Today we're going to try PetFinder.my Adoption Prediction.

System Preparation

Assuming you have already set up your environment correctly:

Python

➜  ~ python --version
Python 3.6.7
➜  ~ which python
/usr/bin/python

Kaggle

Please follow the instructions on the Kaggle API GitHub page to install the Kaggle Python package. After installation, you'll get:

➜  ~ pip show kaggle
Name: kaggle
Version: 1.5.1.1
Summary: Kaggle API
Home-page: https://github.com/Kaggle/kaggle-api
Author: Kaggle
Author-email: support@kaggle.com
License: Apache 2.0
Location: ~/.local/lib/python3.6/site-packages
Requires: requests, certifi, tqdm, six, python-dateutil, python-slugify, urllib3
Required-by:
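Before the CLI can download anything, the Kaggle API also needs your account token in `~/.kaggle/kaggle.json` (downloadable from your Kaggle account page). A minimal sketch for checking that the token is in place — the helper name `check_kaggle_credentials` is mine, not part of the Kaggle package:

```python
import json
import os


def check_kaggle_credentials(config_dir=None):
    """Verify that the Kaggle API token (kaggle.json) is in place.

    The Kaggle API looks for ~/.kaggle/kaggle.json containing the
    'username' and 'key' fields from your account's API token.
    """
    config_dir = config_dir or os.path.join(os.path.expanduser("~"), ".kaggle")
    path = os.path.join(config_dir, "kaggle.json")
    if not os.path.isfile(path):
        return False, "missing " + path
    with open(path) as f:
        creds = json.load(f)
    if not {"username", "key"} <= creds.keys():
        return False, "kaggle.json must contain 'username' and 'key'"
    # The CLI also warns about world-readable tokens; chmod 600 is recommended.
    return True, "ok"
```

Run it once after installing the package; if it returns `False`, the `kaggle` CLI commands below will fail with an authentication error.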

CUDA

➜  ~ nvidia-smi
Wed Jan  2 15:52:06 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.78       Driver Version: 410.78       CUDA Version: 10.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 980M    Off  | 00000000:01:00.0  On |                  N/A |
| N/A   32C    P8     8W /  N/A |    779MiB /  4035MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      1842      G   /usr/lib/xorg/Xorg                            15MiB |
|    0      1952      G   /usr/bin/gnome-shell                          48MiB |
|    0      2923      G   /usr/lib/xorg/Xorg                           188MiB |
|    0      3039      G   /usr/bin/gnome-shell                         163MiB |
|    0      3801      G   ...uest-channel-token=14688497433927674620   220MiB |
|    0      7109      G   ...-token=FECD30C79B74E3AD9CF815BF4CB9EB09    70MiB |
|    0     28276      G   ...-token=8CC4669488D477BE118BBC69F71B724E    64MiB |
+-----------------------------------------------------------------------------+
➜  ~ /usr/local/cuda/samples/bin/x86_64/linux/release/deviceQuery
/usr/local/cuda/samples/bin/x86_64/linux/release/deviceQuery Starting...

CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "GeForce GTX 980M"
CUDA Driver Version / Runtime Version 10.0 / 10.0
CUDA Capability Major/Minor version number: 5.2
Total amount of global memory: 4035 MBytes (4231331840 bytes)
(12) Multiprocessors, (128) CUDA Cores/MP: 1536 CUDA Cores
GPU Max Clock rate: 1126 MHz (1.13 GHz)
Memory Clock rate: 2505 Mhz
Memory Bus Width: 256-bit
L2 Cache Size: 2097152 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
Maximum Layered 1D Texture Size, (num) layers 1D=(16384), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(16384, 16384), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 2048
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 2 copy engine(s)
Run time limit on kernels: Yes
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
Device supports Unified Addressing (UVA): Yes
Device supports Compute Preemption: No
Supports Cooperative Kernel Launch: No
Supports MultiDevice Co-op Kernel Launch: No
Device PCI Domain ID / Bus ID / location ID: 0 / 1 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 10.0, CUDA Runtime Version = 10.0, NumDevs = 1
Result = PASS
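Rather than eyeballing the table, the same device information can be pulled programmatically via nvidia-smi's query mode (`--query-gpu` and `--format=csv` are standard flags of the nvidia-smi CLI). A small sketch — the helper `gpu_inventory` and its fallback `raw` parameter are my own convenience, not an NVIDIA API:

```python
import csv
import io
import subprocess


def gpu_inventory(raw=None):
    """Return a list of (name, total_memory) tuples for each visible GPU.

    If `raw` is given, parse that string instead of invoking nvidia-smi
    (handy for testing on a machine without a GPU).
    """
    if raw is None:
        raw = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=name,memory.total",
             "--format=csv,noheader"],
            text=True)
    reader = csv.reader(io.StringIO(raw))
    return [(name.strip(), mem.strip()) for name, mem in reader]
```

On the machine above this should report a single `GeForce GTX 980M` with 4035 MiB of memory.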

Data Preparation

Data Download

Please refer to the Kaggle petfinder-adoption-prediction Data page and download the data:

➜  ~ kaggle competitions download -c petfinder-adoption-prediction
➜ cd petfinder-adoption-prediction
➜ petfinder-adoption-prediction ls
breed_labels.csv color_labels.csv state_labels.csv test_images.zip test_metadata.zip test_sentiment.zip test.zip train_images.zip train_metadata.zip train_sentiment.zip train.zip
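The download leaves a pile of `.zip` archives. One way to unpack each archive into a folder named after it — a sketch assuming you want the `train.zip -> train/` layout used in the exploration below:

```python
import pathlib
import zipfile


def extract_all(data_dir="."):
    """Unzip every competition archive into a directory named after it.

    e.g. train_images.zip -> train_images/, test.zip -> test/
    """
    for archive in pathlib.Path(data_dir).glob("*.zip"):
        target = archive.with_suffix("")  # strip the .zip extension
        target.mkdir(exist_ok=True)
        with zipfile.ZipFile(archive) as zf:
            zf.extractall(target)
```

Keeping each archive's contents in its own folder avoids the image archives (tens of thousands of files) spilling into the working directory.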

Data Exploration

We can then use Andrew Lukyanenko's existing kernel code to explore what the data looks like. It is not hard to reproduce the following plots:

Number of Samples (both cats and dogs) in Each Class -- Classified by Adoption Speed

Number of Cats & Dogs in Train and Test Datasets

Number of Samples (cats and dogs separately) in Each Class -- Classified by Adoption Speed

Word Count for Cats & Dogs
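The counts behind the first two plots take only a few lines of pandas. A sketch, assuming `train.zip` was extracted to `train/` (in this competition's metadata, `AdoptionSpeed` runs from 0, adopted the same day, to 4, no adoption after 100 days, and `Type` is 1 for dogs, 2 for cats):

```python
import pandas as pd


def class_counts(train: pd.DataFrame) -> pd.Series:
    """Samples per AdoptionSpeed class (0 = same day ... 4 = none after 100 days)."""
    return train["AdoptionSpeed"].value_counts().sort_index()


def species_counts(train: pd.DataFrame) -> pd.Series:
    """Cats vs. dogs; Type is 1 for dogs and 2 for cats in this competition."""
    return train["Type"].map({1: "Dog", 2: "Cat"}).value_counts()


# Usage, once the data is extracted:
# train = pd.read_csv("train/train.csv")
# print(class_counts(train))
# print(species_counts(train))
```

These are the raw numbers; the kernel then plots them as the bar charts shown above.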

The Algorithm