Today is Canada's Thanksgiving. Let me do something extra on Jetson Orin Nano.

Jetson Orin Nano

0. My Working Environment

➜  ~ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 24.04.1 LTS
Release: 24.04
Codename: noble

1. Flash From Scratch

1.1 Entering Recovery Mode

To enter Recovery Mode, connect the Jetson Orin Nano's Pin 9 and Pin 10 with a jumper, power it on, and then connect the Jetson Orin Nano to your desktop (in my case, a machine running Ubuntu 24.04.1 LTS).

1.2 lsusb

➜  Resource git:(master) lsusb | grep NVIDIA
Bus 009 Device 016: ID 0955:7523 NVIDIA Corp. APX

1.3 Install Jetson Software with SDK Manager

Step 01 | Step 02

Got stuck at:

Step 03

1.4 Jetson Orin Nano Tutorial: SSD Install, Boot, and JetPack Setup

What can I say?

😞😢😭

My comment: so complicated. I'm too lazy to follow.

1.5 dd Is My Favorite Tool

For some reason, my Jetson Orin Nano has NO TF card slot, but a 128G SSD instead.

😳😳😳

Therefore, in my case, my BEST solution is to flash the SSD directly.

Simply download and unzip JetPack 6.1 Orin Nano SD Card Image first. Then, do dd:

➜  jetson sudo dd bs=1M if=sd-blob.img of=/dev/sdb conv=fsync 

[sudo] password for lvision:
22802+0 records in
22802+0 records out
23909629952 bytes (24 GB, 22 GiB) copied, 60.1446 s, 398 MB/s
➜ jetson
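
Before running dd like this, it is worth double-checking the target device; a minimal sanity-check sketch, assuming the SSD enumerates as /dev/sdb as above:

lsblk -o NAME,SIZE,MODEL /dev/sdb   # confirm this really is the 128G SSD, NOT your system disk
sync                                # after dd finishes, flush remaining buffers before detaching the drive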

2. Demonstration

2.1 Initial Login Without ssh

For the initial login, you need a DisplayPort (DP) connection to a monitor.

2.2 ssh Into Jetson Orin Nano After ssh Is Enabled

2.3 YOLOv11 On Jetson Orin Nano

My previous blog post YOLOv11.md didn't talk about which model to use, so today, let's try them out.

YOLOv11 has only been out for a couple of days, and people keep talking about its terrific performance. Today, let me have some fun.

1. YOLOv11 Server App

A Flask implementation is obtained from ChatGPT:

LVT YOLOv11 Server App

2. Demonstration From Client Requests

2.1 Commands

Corresponding to the 5 tasks summarized in YOLOv11, the client requests and their results are demonstrated as follows:

LVT YOLOv11 5 EndPoints Requests
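
For reference, the requests look roughly like the following; the port and endpoint names are hypothetical placeholders and must match whatever routes the Flask app actually defines:

# one request per task; "jetson" stands for the Orin Nano's hostname or IP
curl -F "image=@test.jpg" http://jetson:5000/detect
curl -F "image=@test.jpg" http://jetson:5000/segment
curl -F "image=@test.jpg" http://jetson:5000/pose
curl -F "image=@test.jpg" http://jetson:5000/obb
curl -F "image=@test.jpg" http://jetson:5000/classify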

2.2 Resultant Images On Server

Original Image | YOLO Detection (Detection Result)
YOLO Segmentation (Segmentation Boundaries) | YOLO Pose (Pose Keypoints)
YOLO Oriented Detection | YOLO Classification (Classification JSON)

Let me try to provide such a service publicly ASAP.

Peace. Jesus. I'd recommend The Bible Studio to you.

Today, I'd love to have some fun with TorchServe.

1. TorchServe Getting Started

Please just follow TorchServe Getting Started, with trivial modifications. In my demonstration, I stick to working under directory /opt/servers/torchserve.
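
A minimal setup sketch for that working directory (assuming you want it owned by your own user):

sudo mkdir -p /opt/servers/torchserve
sudo chown $USER:$USER /opt/servers/torchserve
cd /opt/servers/torchserve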

1.1 Clone TorchServe

➜  torchserve git clone https://github.com/pytorch/serve.git        
Cloning into 'serve'...
remote: Enumerating objects: 60508, done.
remote: Counting objects: 100% (25/25), done.
remote: Compressing objects: 100% (25/25), done.
remote: Total 60508 (delta 6), reused 3 (delta 0), pack-reused 60483 (from 1)
Receiving objects: 100% (60508/60508), 99.06 MiB | 23.01 MiB/s, done.
Resolving deltas: 100% (37656/37656), done.

1.2 Store a Model

➜  torchserve mkdir model_store
➜ torchserve wget https://download.pytorch.org/models/densenet161-8d451a50.pth
--2024-10-07 00:35:54-- https://download.pytorch.org/models/densenet161-8d451a50.pth
Resolving download.pytorch.org (download.pytorch.org)... 2600:9000:26ce:7a00:d:607e:4540:93a1, 2600:9000:26ce:6e00:d:607e:4540:93a1, 2600:9000:26ce:3400:d:607e:4540:93a1, ...
Connecting to download.pytorch.org (download.pytorch.org)|2600:9000:26ce:7a00:d:607e:4540:93a1|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 115730790 (110M) [application/x-www-form-urlencoded]
Saving to: ‘densenet161-8d451a50.pth’

densenet161-8d451a50.pth 100%[==================================================================================================================================================>] 110.37M 109MB/s in 1.0s

2024-10-07 00:35:55 (109 MB/s) - ‘densenet161-8d451a50.pth’ saved [115730790/115730790]
➜ torchserve torch-model-archiver --model-name densenet161 --version 1.0 --model-file ./serve/examples/image_classifier/densenet_161/model.py --serialized-file densenet161-8d451a50.pth --export-path model_store --extra-files ./serve/examples/image_classifier/index_to_name.json --handler image_classifier
➜ torchserve ll model_store
total 106M
4.0K drwxrwxr-x 2 lvision lvision 4.0K Oct 7 00:36 ./
4.0K drwxrwxr-x 5 lvision lvision 4.0K Oct 7 00:35 ../
106M -rw-rw-r-- 1 lvision lvision 106M Oct 7 00:36 densenet161.mar

1.3 Start TorchServe

1.3.1 config.properties

➜  torchserve cat config.properties 
inference_address=http://127.0.0.1:8080
management_address=http://127.0.0.1:8081
metrics_address=http://127.0.0.1:8082
enable_token_auth=true
ts.key.file=/opt/servers/torchserve/key_file.json
log_location=/opt/servers/torchserve/logs
metrics_location=/opt/servers/torchserve/logs
access_log_location=/opt/servers/torchserve/logs
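
One optional hardening step, not applied in this walkthrough: as the startup log below will warn, this configuration leaves model loading wide open, and TorchServe's allowed_urls property can restrict where models may be loaded from. An example only; adjust the regex to your own environment:

allowed_urls=file:///opt/servers/torchserve/model_store/.*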

1.3.2 Start TorchServe to serve the model

➜  torchserve torchserve --start --ncs --model-store model_store --models densenet161.mar
➜ torchserve WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will impact performance.
2024-10-07T00:38:23,323 [DEBUG] main org.pytorch.serve.util.ConfigManager - xpu-smi not available or failed: Cannot run program "xpu-smi": error=2, No such file or directory
2024-10-07T00:38:23,326 [WARN ] main org.pytorch.serve.util.ConfigManager - Your torchserve instance can access any URL to load models. When deploying to production, make sure to limit the set of allowed_urls in config.properties
2024-10-07T00:38:23,361 [INFO ] main org.pytorch.serve.util.TokenAuthorization -
######
TorchServe now enforces token authorization by default.
This requires the correct token to be provided when calling an API.
Key file located at /opt/servers/torchserve/key_file.json
Check token authorization documenation for information: https://github.com/pytorch/serve/blob/master/docs/token_authorization_api.md
######

2024-10-07T00:38:23,361 [INFO ] main org.pytorch.serve.servingsdk.impl.PluginsManager - Initializing plugins manager...
2024-10-07T00:38:23,399 [INFO ] main org.pytorch.serve.metrics.configuration.MetricConfiguration - Successfully loaded metrics configuration from /home/lvision/.local/lib/python3.12/site-packages/ts/configs/metrics.yaml
2024-10-07T00:38:23,484 [INFO ] main org.pytorch.serve.ModelServer -
Torchserve version: 0.12.0
TS Home: /home/lvision/.local/lib/python3.12/site-packages
Current directory: /opt/servers/torchserve
Temp directory: /tmp
Metrics config path: /home/lvision/.local/lib/python3.12/site-packages/ts/configs/metrics.yaml
Number of GPUs: 1
Number of CPUs: 48
Max heap size: 30208 M
Python executable: /usr/bin/python3
Config file: config.properties
Inference address: http://127.0.0.1:8080
Management address: http://127.0.0.1:8081
Metrics address: http://127.0.0.1:8082
Model Store: /opt/servers/torchserve/model_store
Initial Models: densenet161.mar
Log dir: /opt/servers/torchserve/logs
Metrics dir: /opt/servers/torchserve/logs
Netty threads: 0
Netty client threads: 0
Default workers per model: 1
Blacklist Regex: N/A
Maximum Response Size: 6553500
Maximum Request Size: 6553500
Limit Maximum Image Pixels: true
Prefer direct buffer: false
Allowed Urls: [file://.*|http(s)?://.*]
Custom python dependency for model allowed: false
Enable metrics API: true
Metrics mode: LOG
Disable system metrics: false
Workflow Store: /opt/servers/torchserve/model_store
CPP log config: N/A
Model config: N/A
System metrics command: default
Model API enabled: false
2024-10-07T00:38:23,491 [INFO ] main org.pytorch.serve.servingsdk.impl.PluginsManager - Loading snapshot serializer plugin...
2024-10-07T00:38:23,492 [INFO ] main org.pytorch.serve.ModelServer - Loading initial models: densenet161.mar
2024-10-07T00:38:24,521 [DEBUG] main org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 1.0 for model densenet161
2024-10-07T00:38:24,521 [DEBUG] main org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 1.0 for model densenet161
2024-10-07T00:38:24,521 [INFO ] main org.pytorch.serve.wlm.ModelManager - Model densenet161 loaded.
2024-10-07T00:38:24,521 [DEBUG] main org.pytorch.serve.wlm.ModelManager - updateModel: densenet161, count: 1
2024-10-07T00:38:24,526 [INFO ] main org.pytorch.serve.ModelServer - Initialize Inference server with: EpollServerSocketChannel.
2024-10-07T00:38:24,527 [DEBUG] W-9000-densenet161_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/usr/bin/python3, /home/lvision/.local/lib/python3.12/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9000, --metrics-config, /home/lvision/.local/lib/python3.12/site-packages/ts/configs/metrics.yaml]
2024-10-07T00:38:24,568 [INFO ] main org.pytorch.serve.ModelServer - Inference API bind to: http://127.0.0.1:8080
2024-10-07T00:38:24,568 [INFO ] main org.pytorch.serve.ModelServer - Initialize Management server with: EpollServerSocketChannel.
2024-10-07T00:38:24,569 [INFO ] main org.pytorch.serve.ModelServer - Management API bind to: http://127.0.0.1:8081
2024-10-07T00:38:24,570 [INFO ] main org.pytorch.serve.ModelServer - Initialize Metrics server with: EpollServerSocketChannel.
2024-10-07T00:38:24,570 [INFO ] main org.pytorch.serve.ModelServer - Metrics API bind to: http://127.0.0.1:8082
Model server started.
2024-10-07T00:38:24,720 [WARN ] pool-3-thread-1 org.pytorch.serve.metrics.MetricCollector - worker pid is not available yet.
2024-10-07T00:38:25,152 [INFO ] pool-3-thread-1 TS_METRICS - CPUUtilization.Percent:20.0|#Level:Host|#hostname:lvision-MS-7C60,timestamp:1728286705
2024-10-07T00:38:25,153 [INFO ] pool-3-thread-1 TS_METRICS - DiskAvailable.Gigabytes:1412.4598426818848|#Level:Host|#hostname:lvision-MS-7C60,timestamp:1728286705
2024-10-07T00:38:25,153 [INFO ] pool-3-thread-1 TS_METRICS - DiskUsage.Gigabytes:215.69704055786133|#Level:Host|#hostname:lvision-MS-7C60,timestamp:1728286705
2024-10-07T00:38:25,153 [INFO ] pool-3-thread-1 TS_METRICS - DiskUtilization.Percent:13.2|#Level:Host|#hostname:lvision-MS-7C60,timestamp:1728286705
2024-10-07T00:38:25,153 [INFO ] pool-3-thread-1 TS_METRICS - GPUMemoryUtilization.Percent:2.5472005208333335|#Level:Host,DeviceId:0|#hostname:lvision-MS-7C60,timestamp:1728286705
2024-10-07T00:38:25,153 [INFO ] pool-3-thread-1 TS_METRICS - GPUMemoryUsed.Megabytes:626.0|#Level:Host,DeviceId:0|#hostname:lvision-MS-7C60,timestamp:1728286705
2024-10-07T00:38:25,154 [INFO ] pool-3-thread-1 TS_METRICS - GPUUtilization.Percent:0.0|#Level:Host,DeviceId:0|#hostname:lvision-MS-7C60,timestamp:1728286705
2024-10-07T00:38:25,154 [INFO ] pool-3-thread-1 TS_METRICS - MemoryAvailable.Megabytes:248234.90625|#Level:Host|#hostname:lvision-MS-7C60,timestamp:1728286705
2024-10-07T00:38:25,154 [INFO ] pool-3-thread-1 TS_METRICS - MemoryUsed.Megabytes:6890.875|#Level:Host|#hostname:lvision-MS-7C60,timestamp:1728286705
2024-10-07T00:38:25,154 [INFO ] pool-3-thread-1 TS_METRICS - MemoryUtilization.Percent:3.6|#Level:Host|#hostname:lvision-MS-7C60,timestamp:1728286705
2024-10-07T00:38:25,632 [INFO ] W-9000-densenet161_1.0-stdout MODEL_LOG - s_name_part0=/tmp/.ts.sock, s_name_part1=9000, pid=62344
2024-10-07T00:38:25,636 [INFO ] W-9000-densenet161_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9000
2024-10-07T00:38:25,637 [INFO ] W-9000-densenet161_1.0-stdout MODEL_LOG - Successfully loaded /home/lvision/.local/lib/python3.12/site-packages/ts/configs/metrics.yaml.
2024-10-07T00:38:25,637 [INFO ] W-9000-densenet161_1.0-stdout MODEL_LOG - [PID]62344
2024-10-07T00:38:25,637 [INFO ] W-9000-densenet161_1.0-stdout MODEL_LOG - Torch worker started.
2024-10-07T00:38:25,637 [INFO ] W-9000-densenet161_1.0-stdout MODEL_LOG - Python runtime: 3.12.3
2024-10-07T00:38:25,638 [DEBUG] W-9000-densenet161_1.0 org.pytorch.serve.wlm.WorkerThread - W-9000-densenet161_1.0 State change null -> WORKER_STARTED
2024-10-07T00:38:25,641 [INFO ] W-9000-densenet161_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9000
2024-10-07T00:38:25,646 [INFO ] W-9000-densenet161_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9000.
2024-10-07T00:38:25,648 [DEBUG] W-9000-densenet161_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd LOAD repeats 1 to backend at: 1728286705648
2024-10-07T00:38:25,649 [INFO ] W-9000-densenet161_1.0 org.pytorch.serve.wlm.WorkerThread - Looping backend response at: 1728286705649
2024-10-07T00:38:25,674 [INFO ] W-9000-densenet161_1.0-stdout MODEL_LOG - model_name: densenet161, batchSize: 1
2024-10-07T00:38:26,881 [INFO ] W-9000-densenet161_1.0-stdout MODEL_LOG - Enabled tensor cores
2024-10-07T00:38:26,882 [INFO ] W-9000-densenet161_1.0-stdout MODEL_LOG - OpenVINO is not enabled
2024-10-07T00:38:26,882 [INFO ] W-9000-densenet161_1.0-stdout MODEL_LOG - proceeding without onnxruntime
2024-10-07T00:38:26,882 [INFO ] W-9000-densenet161_1.0-stdout MODEL_LOG - Torch TensorRT not enabled
2024-10-07T00:38:27,263 [WARN ] W-9000-densenet161_1.0-stderr MODEL_LOG - /home/lvision/.local/lib/python3.12/site-packages/ts/torch_handler/base_handler.py:355: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
2024-10-07T00:38:27,264 [WARN ] W-9000-densenet161_1.0-stderr MODEL_LOG - state_dict = torch.load(model_pt_path, map_location=map_location)
2024-10-07T00:38:27,564 [INFO ] W-9000-densenet161_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 1915
2024-10-07T00:38:27,564 [DEBUG] W-9000-densenet161_1.0 org.pytorch.serve.wlm.WorkerThread - W-9000-densenet161_1.0 State change WORKER_STARTED -> WORKER_MODEL_LOADED
2024-10-07T00:38:27,565 [INFO ] W-9000-densenet161_1.0 TS_METRICS - WorkerLoadTime.Milliseconds:3041.0|#WorkerName:W-9000-densenet161_1.0,Level:Host|#hostname:lvision-MS-7C60,timestamp:1728286707
2024-10-07T00:38:27,565 [INFO ] W-9000-densenet161_1.0 TS_METRICS - WorkerThreadTime.Milliseconds:2.0|#Level:Host|#hostname:lvision-MS-7C60,timestamp:1728286707


1.3.3 key_file.json

By running the above command, a key_file.json file is generated under the current working directory (Please refer to TorchServe token authorization API):

➜  torchserve ll key_file.json
4.0K -rw------- 1 lvision lvision 243 Oct 7 00:38 key_file.json
➜ torchserve cat key_file.json
{
"management": {
"key": "c_7-MgUE",
"expiration time": "2024-10-07T08:38:23.343462778Z"
},
"inference": {
"key": "IMc5oeRf",
"expiration time": "2024-10-07T08:38:23.343456097Z"
},
"API": {
"key": "_tFv4L56"
}
}%

1.3.4 Is TorchServe Service Running?

➜  ~ curl -H "Authorization: Bearer c_7-MgUE" http://127.0.0.1:8081/models
{
"models": [
{
"modelName": "densenet161",
"modelUrl": "densenet161.mar"
}
]
}

1.4 Get Predictions

1.4.1 Using REST APIs

➜  torchserve curl -O https://raw.githubusercontent.com/pytorch/serve/master/docs/images/kitten_small.jpg
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 7341 100 7341 0 0 28980 0 --:--:-- --:--:-- --:--:-- 29015

Let's take a look:

kitten

➜  torchserve curl -H "Authorization: Bearer IMc5oeRf" http://127.0.0.1:8080/predictions/densenet161 -T kitten_small.jpg
{
"tabby": 0.47793325781822205,
"lynx": 0.20019005239009857,
"tiger_cat": 0.16827784478664398,
"tiger": 0.062009651213884354,
"Egyptian_cat": 0.05115227773785591
}%

1.4.2 Using gRPC APIs through Python Client

1.4.2.1 Install gRPC Python dependencies

pip install -U grpcio protobuf grpcio-tools googleapis-common-protos

1.4.2.2 Generate inference client using proto files

This must be run under the serve folder:

➜  torchserve cd serve

Then,

➜  serve git:(master) python -m grpc_tools.protoc --proto_path=frontend/server/src/main/resources/proto/ --python_out=ts_scripts --grpc_python_out=ts_scripts frontend/server/src/main/resources/proto/inference.proto frontend/server/src/main/resources/proto/management.proto
google/rpc/status.proto: File not found.
inference.proto:6:1: Import "google/rpc/status.proto" was not found or had errors.
inference.proto:32:14: "google.rpc.Status" is not defined.
Why????
😞😢😭

Solution:

  • Step 1: Clone the googleapis repository:
➜  serve git:(master) git clone https://github.com/googleapis/googleapis.git

Cloning into 'googleapis'...
remote: Enumerating objects: 233669, done.
remote: Counting objects: 100% (13457/13457), done.
remote: Compressing objects: 100% (410/410), done.
remote: Total 233669 (delta 13122), reused 13077 (delta 13042), pack-reused 220212 (from 1)
Receiving objects: 100% (233669/233669), 205.13 MiB | 21.65 MiB/s, done.
Resolving deltas: 100% (196982/196982), done.
  • Step 2: Generate the inference client using the necessary .proto files from Google APIs:
➜  serve git:(master) ✗ python -m grpc_tools.protoc \
--proto_path=frontend/server/src/main/resources/proto/ \
--proto_path=googleapis/ \
--python_out=ts_scripts \
--grpc_python_out=ts_scripts \
frontend/server/src/main/resources/proto/inference.proto \
frontend/server/src/main/resources/proto/management.proto
  • Step 3: Modify ts_scripts/torchserve_grpc_client.py as follows:
import argparse
import queue
import threading
from functools import partial

import grpc
import inference_pb2
import inference_pb2_grpc
import management_pb2
import management_pb2_grpc

# Function to get an inference stub for making gRPC calls to the inference service.
def get_inference_stub():
    channel = grpc.insecure_channel("localhost:7070")
    stub = inference_pb2_grpc.InferenceAPIsServiceStub(channel)
    return stub

# Function to get a management stub for making gRPC calls to the model management service.
def get_management_stub():
    channel = grpc.insecure_channel("localhost:7071")
    stub = management_pb2_grpc.ManagementAPIsServiceStub(channel)
    return stub

# Perform a single inference call.
def infer(stub, model_name, model_input, metadata):
    with open(model_input, "rb") as f:
        data = f.read()

    input_data = {"data": data}
    response = stub.Predictions(
        inference_pb2.PredictionsRequest(model_name=model_name, input=input_data),
        metadata=metadata,
    )

    try:
        prediction = response.prediction.decode("utf-8")
        print(prediction)
    except grpc.RpcError as e:
        print(f"gRPC error: {e.details()}")
        exit(1)

# Perform streaming inference.
def infer_stream(stub, model_name, model_input, metadata):
    with open(model_input, "rb") as f:
        data = f.read()

    input_data = {"data": data}
    responses = stub.StreamPredictions(
        inference_pb2.PredictionsRequest(model_name=model_name, input=input_data),
        metadata=metadata,
    )

    try:
        for resp in responses:
            prediction = resp.prediction.decode("utf-8")
            print(prediction)
    except grpc.RpcError as e:
        print(f"gRPC error: {e.details()}")
        exit(1)

# Perform an advanced streaming inference with multiple input files.
def infer_stream2(model_name, sequence_id, input_files, metadata):
    response_queue = queue.Queue()
    process_response_func = partial(
        InferStream2.default_process_response, response_queue
    )

    client = InferStream2SimpleClient()
    try:
        client.start_stream(
            model_name=model_name,
            sequence_id=sequence_id,
            process_response=process_response_func,
            metadata=metadata,
        )
        sequence = input_files.split(",")

        for input_file in sequence:
            client.async_send_infer(input_file.strip())

        for i in range(0, len(sequence)):
            response = response_queue.get()
            print(str(response))

        print("Sequence completed!")

    except grpc.RpcError as e:
        print("infer_stream2 received error", e)
        exit(1)
    finally:
        client.stop_stream()
        client.stop()

# Register a new model with TorchServe.
def register(stub, model_name, mar_set_str, metadata):
    mar_set = set()
    if mar_set_str:
        mar_set = set(mar_set_str.split(","))
    marfile = f"{model_name}.mar"
    print(f"## Check {marfile} in mar_set :", mar_set)
    if marfile not in mar_set:
        marfile = "https://torchserve.s3.amazonaws.com/mar_files/{}.mar".format(
            model_name
        )

    print(f"## Register marfile: {marfile}\n")
    params = {
        "url": marfile,
        "initial_workers": 1,
        "synchronous": True,
        "model_name": model_name,
    }
    try:
        response = stub.RegisterModel(
            management_pb2.RegisterModelRequest(**params), metadata=metadata
        )
        print(f"Model {model_name} registered successfully")
    except grpc.RpcError as e:
        print(f"Failed to register model {model_name}.")
        print(str(e.details()))
        exit(1)

# Unregister a model from TorchServe.
def unregister(stub, model_name, metadata):
    try:
        response = stub.UnregisterModel(
            management_pb2.UnregisterModelRequest(model_name=model_name),
            metadata=metadata,
        )
        print(f"Model {model_name} unregistered successfully")
    except grpc.RpcError as e:
        print(f"Failed to unregister model {model_name}.")
        print(str(e.details()))
        exit(1)

# The rest of the code defines the streaming classes and the command-line interface.

if __name__ == "__main__":
    # Argument parsing for the script
    parent_parser = argparse.ArgumentParser(add_help=False)
    parent_parser.add_argument(
        "model_name",
        type=str,
        default=None,
        help="Name of the model used.",
    )
    parent_parser.add_argument(
        "--auth-token",
        dest="auth_token",
        type=str,
        default=None,
        required=False,
        help="Authorization token",
    )

    parser = argparse.ArgumentParser(
        description="TorchServe gRPC client",
        formatter_class=argparse.RawTextHelpFormatter,
    )
    subparsers = parser.add_subparsers(help="Action", dest="action")

    infer_action_parser = subparsers.add_parser(
        "infer", parents=[parent_parser], add_help=False
    )
    infer_stream_action_parser = subparsers.add_parser(
        "infer_stream", parents=[parent_parser], add_help=False
    )
    infer_stream2_action_parser = subparsers.add_parser(
        "infer_stream2", parents=[parent_parser], add_help=False
    )
    register_action_parser = subparsers.add_parser(
        "register", parents=[parent_parser], add_help=False
    )
    unregister_action_parser = subparsers.add_parser(
        "unregister", parents=[parent_parser], add_help=False
    )

    # Arguments for different actions
    infer_action_parser.add_argument(
        "model_input", type=str, default=None, help="Input for model for inference."
    )
    infer_stream_action_parser.add_argument(
        "model_input",
        type=str,
        default=None,
        help="Input for model for stream inference.",
    )
    infer_stream2_action_parser.add_argument(
        "sequence_id",
        type=str,
        default=None,
        help="Input for sequence id for stream inference.",
    )
    infer_stream2_action_parser.add_argument(
        "input_files",
        type=str,
        default=None,
        help="Comma separated list of input files",
    )
    register_action_parser.add_argument(
        "mar_set",
        type=str,
        default=None,
        nargs="?",
        help="Comma separated list of mar models to be loaded using [model_name=]model_location format.",
    )

    # Parse command line arguments
    args = parser.parse_args()

    # Create metadata with or without the authorization token
    if args.auth_token:
        metadata = (
            ("protocol", "gRPC"),
            ("session_id", "12345"),
            ("authorization", f"Bearer {args.auth_token}"),
        )
    else:
        metadata = (("protocol", "gRPC"), ("session_id", "12345"))

    # Perform the selected action
    if args.action == "infer":
        infer(get_inference_stub(), args.model_name, args.model_input, metadata)
    elif args.action == "infer_stream":
        infer_stream(get_inference_stub(), args.model_name, args.model_input, metadata)
    elif args.action == "infer_stream2":
        infer_stream2(args.model_name, args.sequence_id, args.input_files, metadata)
    elif args.action == "register":
        register(get_management_stub(), args.model_name, args.mar_set, metadata)
    elif args.action == "unregister":
        unregister(get_management_stub(), args.model_name, metadata)
  • Step 4: Run inference using the sample gRPC Python client:
➜  serve git:(master) ✗ python ts_scripts/torchserve_grpc_client.py infer --auth-token IMc5oeRf densenet161 examples/image_classifier/kitten.jpg

/home/lvision/.local/lib/python3.12/site-packages/google/protobuf/runtime_version.py:112: UserWarning: Protobuf gencode version 5.27.2 is older than the runtime version 5.28.2 at inference.proto. Please avoid checked-in Protobuf gencode that can be obsolete.
warnings.warn(
/home/lvision/.local/lib/python3.12/site-packages/google/protobuf/runtime_version.py:112: UserWarning: Protobuf gencode version 5.27.2 is older than the runtime version 5.28.2 at management.proto. Please avoid checked-in Protobuf gencode that can be obsolete.
warnings.warn(
{
"tabby": 0.46603792905807495,
"tiger_cat": 0.4651001989841461,
"Egyptian_cat": 0.06611046195030212,
"lynx": 0.001293532201088965,
"plastic_bag": 0.000228719727601856
}
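
When you are done experimenting, the server can be shut down cleanly:

torchserve --stop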

China's 75th National Day

Today is China's 75th birthday. Let me join the celebration. And today, I'm going to build a customized Linux operating system by following Linux From Scratch, with the MOST up-to-date kernel, 6.11.1.

1. Preparation

1.1 Create a Disk Image File

➜  repo dd if=/dev/zero of=./lfs.img bs=1M count=65536 
65536+0 records in
65536+0 records out
68719476736 bytes (69 GB, 64 GiB) copied, 114.926 s, 598 MB/s
➜  repo ll lfs.img
65G -rw-rw-r-- 1 lvision lvision 64G Oct 1 00:01 lfs.img
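
As a side note, if you would rather not write 64 GiB of zeros up front, a sparse file works just as well for loop-mounting; a minimal alternative:

truncate -s 64G lfs.img   # allocates nothing up front; blocks are only consumed as they are written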

1.2 Format the Disk Image

➜  repo mkfs.ext4 lfs.img
mke2fs 1.47.0 (5-Feb-2023)
Discarding device blocks: done
Creating filesystem with 16777216 4k blocks and 4194304 inodes
Filesystem UUID: 7cffdaf4-2fa9-4688-8269-5a60e637b887
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424

Allocating group tables: done
Writing inode tables: done
Creating journal (131072 blocks): done
Writing superblocks and filesystem accounting information: done

1.3 Mount the Disk Image

➜  repo sudo mount -o loop lfs.img /mnt/lfs
➜ repo
➜  lfs df -h /mnt/lfs
Filesystem Size Used Avail Use% Mounted on
/dev/loop14 63G 27G 34G 45% /mnt/lfs

1.4 Login As Root

From now on, we need to log in as user root.

➜  repo su - root                          
Password:
root@lvision-MS-7C60:~#

1.5 Exports

root@lvision-MS-7C60:~# export LFS=/mnt/lfs
root@lvision-MS-7C60:~# export LFS_TGT=$(uname -m)-lfs-linux-gnu

1.6 Download Source

root@lvision-MS-7C60:~# mkdir -v $LFS/sources
mkdir: created directory '/sources'
root@lvision-MS-7C60:~# chmod -v a+wt $LFS/sources
mode of '/sources' changed from 0755 (rwxr-xr-x) to 1777 (rwxrwxrwt)
root@lvision-MS-7C60:/mnt/docker/repo# wget --input-file=wget-list-sysv --continue --directory-prefix=$LFS/sources
--2024-08-27 03:49:06-- https://download.savannah.gnu.org/releases/acl/acl-2.3.2.tar.xz
Resolving download.savannah.gnu.org (download.savannah.gnu.org)... 2001:470:142:5::200, 209.51.188.200
Connecting to download.savannah.gnu.org (download.savannah.gnu.org)|2001:470:142:5::200|:443... connected.
HTTP request sent, awaiting response... 302 Moved Temporarily
Location: https://mirrors.ocf.berkeley.edu/nongnu/acl/acl-2.3.2.tar.xz [following]
--2024-08-27 03:49:06-- https://mirrors.ocf.berkeley.edu/nongnu/acl/acl-2.3.2.tar.xz
Resolving mirrors.ocf.berkeley.edu (mirrors.ocf.berkeley.edu)... 2607:f140:0:32::70, 169.229.200.70
Connecting to mirrors.ocf.berkeley.edu (mirrors.ocf.berkeley.edu)|2607:f140:0:32::70|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 371680 (363K) [application/octet-stream]
Saving to: ‘/mnt/lfs/sources/acl-2.3.2.tar.xz’

acl-2.3.2.tar.xz 100%[====================================================================================================================================>] 362.97K --.-KB/s in 0.1s

2024-08-27 03:49:07 (3.07 MB/s) - ‘/mnt/lfs/sources/acl-2.3.2.tar.xz’ saved [371680/371680]

--2024-08-27 03:49:07-- https://download.savannah.gnu.org/releases/attr/attr-2.5.2.tar.gz
Connecting to download.savannah.gnu.org (download.savannah.gnu.org)|2001:470:142:5::200|:443... connected.
HTTP request sent, awaiting response... 302 Moved Temporarily
Location: https://nongnu.askapache.com/attr/attr-2.5.2.tar.gz [following]
--2024-08-27 03:49:07-- https://nongnu.askapache.com/attr/attr-2.5.2.tar.gz
Resolving nongnu.askapache.com (nongnu.askapache.com)... 50.87.145.190
Connecting to nongnu.askapache.com (nongnu.askapache.com)|50.87.145.190|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 492539 (481K) [application/x-gzip]
Saving to: ‘/mnt/lfs/sources/attr-2.5.2.tar.gz’

attr-2.5.2.tar.gz 100%[====================================================================================================================================>] 481.00K 2.13MB/s in 0.2s

......
sysvinit-3.10-consolidated-1.patch 100%[====================================================================================================================================>] 2.41K --.-KB/s in 0s

2024-08-27 03:51:25 (2.39 GB/s) - ‘/mnt/lfs/sources/sysvinit-3.10-consolidated-1.patch’ saved [2464/2464]

FINISHED --2024-08-27 03:51:25--
Total wall clock time: 2m 20s
Downloaded: 94 files, 519M in 1m 32s (5.67 MB/s)
You have new mail in /var/mail/root
root@lvision-MS-7C60:/mnt/docker/repo#

1.7 Creating a Limited Directory Layout in the LFS Filesystem

root@lvision-MS-7C60:/mnt/lfs# mkdir -pv $LFS/{etc,var} $LFS/usr/{bin,lib,sbin}

for i in bin lib sbin; do
ln -sv usr/$i $LFS/$i
done

case $(uname -m) in
x86_64) mkdir -pv $LFS/lib64 ;;
esac
mkdir: created directory '/mnt/lfs/etc'
mkdir: created directory '/mnt/lfs/var'
mkdir: created directory '/mnt/lfs/usr'
mkdir: created directory '/mnt/lfs/usr/bin'
mkdir: created directory '/mnt/lfs/usr/lib'
mkdir: created directory '/mnt/lfs/usr/sbin'
'/mnt/lfs/bin' -> 'usr/bin'
'/mnt/lfs/lib' -> 'usr/lib'
'/mnt/lfs/sbin' -> 'usr/sbin'
mkdir: created directory '/mnt/lfs/lib64'
root@lvision-MS-7C60:/mnt/lfs# mkdir -pv $LFS/tools
mkdir: created directory '/mnt/lfs/tools'
root@lvision-MS-7C60:/mnt/lfs#

1.8 Adding the LFS User

root@lvision-MS-7C60:/mnt/lfs# groupadd lfs
useradd -s /bin/bash -g lfs -m -k /dev/null lfs
groupadd: group 'lfs' already exists
useradd: user 'lfs' already exists
root@lvision-MS-7C60:/mnt/lfs# chown -v lfs $LFS/{usr{,/*},lib,var,etc,bin,sbin,tools}
case $(uname -m) in
x86_64) chown -v lfs $LFS/lib64 ;;
esac
changed ownership of '/mnt/lfs/usr' from root to lfs
changed ownership of '/mnt/lfs/usr/bin' from root to lfs
changed ownership of '/mnt/lfs/usr/lib' from root to lfs
changed ownership of '/mnt/lfs/usr/sbin' from root to lfs
ownership of '/mnt/lfs/lib' retained as lfs
changed ownership of '/mnt/lfs/var' from root to lfs
changed ownership of '/mnt/lfs/etc' from root to lfs
ownership of '/mnt/lfs/bin' retained as lfs
ownership of '/mnt/lfs/sbin' retained as lfs
changed ownership of '/mnt/lfs/tools' from root to lfs
changed ownership of '/mnt/lfs/lib64' from root to lfs
root@lvision-MS-7C60:/mnt/lfs# su - lfs
lfs@lvision-MS-7C60:~$

2. Building the LFS Cross Toolchain and Temporary Tools

Please just strictly follow:

[!Note:] One key point to be emphasized: Entering Chroot for Chapter 7

chown --from lfs -R root:root $LFS/{usr,lib,var,etc,bin,sbin,tools}
case $(uname -m) in
x86_64) chown --from lfs -R root:root $LFS/lib64 ;;
esac
root@lvision-MS-7C60:/mnt/lfs# mount -v --bind /dev $LFS/dev
mount -vt devpts devpts -o gid=5,mode=0620 $LFS/dev/pts
mount -vt proc proc $LFS/proc
mount -vt sysfs sysfs $LFS/sys
mount -vt tmpfs tmpfs $LFS/run
mount: /dev bound on /mnt/lfs/dev.
mount: devpts mounted on /mnt/lfs/dev/pts.
mount: proc mounted on /mnt/lfs/proc.
mount: sysfs mounted on /mnt/lfs/sys.
mount: tmpfs mounted on /mnt/lfs/run.
root@lvision-MS-7C60:/mnt/lfs# if [ -h $LFS/dev/shm ]; then
install -v -d -m 1777 $LFS$(realpath /dev/shm)
else
mount -vt tmpfs -o nosuid,nodev tmpfs $LFS/dev/shm
fi
mount: tmpfs mounted on /mnt/lfs/dev/shm.
root@lvision-MS-7C60:/mnt/lfs# chroot "$LFS" /usr/bin/env -i   \
HOME=/root \
TERM="$TERM" \
PS1='(lfs chroot) \u:\w\$ ' \
PATH=/usr/bin:/usr/sbin \
MAKEFLAGS="-j$(nproc)" \
TESTSUITEFLAGS="-j$(nproc)" \
/bin/bash --login
(lfs chroot) root:/#

3. Building the LFS System

[!Note:] Two bugs MUST be fixed:

3.1 fatal error: getopt-cdefs.h: No such file or directory in LFS Chapter 8 - 8.35. Grep-3.11

The Bug:

In file included from grep.c:43:
../lib/getopt.h:84:10: fatal error: getopt-cdefs.h: No such file or directory
84 | #include <getopt-cdefs.h>
| ^~~~~~~~~~~~~~~~
compilation terminated.

The Solution:

(lfs chroot) root:/sources/grep-3.11# cp ../coreutils-9.5/lib/getopt-cdefs.h ./lib/

3.2 i386-pc vs. x86_64-efi in LFS Chapter 8 - 8.64. GRUB-2.12

In order to build GRUB, please strictly follow BLFS Chapter 5 - GRUB-2.12 for EFI instead.

3.2.1. Build x86_64-efi Instead of i386-pc

Do NOT use the following for i386-pc:

./configure --prefix=/usr          \
--sysconfdir=/etc \
--disable-efiemu \
--disable-werror

Instead, use the following for x86_64-efi:

./configure --prefix=/usr        \
--sysconfdir=/etc \
--disable-efiemu \
--enable-grub-mkfont \
--with-platform=efi \
--target=x86_64 \
--disable-werror

After you obtain the following message:

*******************************************************
GRUB2 will be compiled with following components:
Platform: x86_64-efi
With devmapper support: No (need libdevmapper header)
With memory debugging: No
With disk cache statistics: No
With boot time statistics: No
efiemu runtime: No (not available on efi)
grub-mkfont: No (need freetype2 library)
grub-mount: No (need fuse or fuse3 libraries)
starfield theme: No (No build-time grub-mkfont)
With libzfs support: No (need zfs library)
Build-time grub-mkfont: No (need freetype2 library)
Without unifont (no build-time grub-mkfont)
With liblzma from -llzma (support for XZ-compressed mips images)
With stack smashing protector: No
*******************************************************

we do:

unset TARGET_CC &&
make
......
TARGET_OBJ2ELF= sh genmod.sh moddep.lst gcry_whirlpool.module build-grub-module-verifier gcry_whirlpool.mod
make[3]: Leaving directory '/sources/grub-2.12/grub-core'
make[2]: Leaving directory '/sources/grub-2.12/grub-core'
Making all in po
make[2]: Entering directory '/sources/grub-2.12/po'
make[2]: Nothing to be done for 'all'.
make[2]: Leaving directory '/sources/grub-2.12/po'
Making all in docs
make[2]: Entering directory '/sources/grub-2.12/docs'
make[2]: Nothing to be done for 'all'.
make[2]: Leaving directory '/sources/grub-2.12/docs'
Making all in util/bash-completion.d
make[2]: Entering directory '/sources/grub-2.12/util/bash-completion.d'
../../config.status --file=grub:grub-completion.bash.in
config.status: creating grub
make[2]: Leaving directory '/sources/grub-2.12/util/bash-completion.d'
make[1]: Leaving directory '/sources/grub-2.12'

3.2.2. efibootmgr and FreeType2 Preferably Enabled

From BLFS Chapter 5 - efibootmgr-18, I also recommend installing its required and recommended libraries.

From BLFS Chapter 10 - FreeType-2.13.3, I likewise recommend installing the recommended libraries.

4. GRUB and UEFI Configuration

4.1 Find UUID of File lfs.img

➜  lfs sudo blkid ./lfs.img

[sudo] password for lvision:
./lfs.img: UUID="174bc795-5e06-4c0b-979f-7eccfca1ab10" BLOCK_SIZE="4096" TYPE="ext4"

4.2 Configure /etc/fstab

(lfs chroot) root:/boot# cat /etc/fstab
# Begin /etc/fstab

# file system mount-point type options dump fsck
# order

UUID=174bc795-5e06-4c0b-979f-7eccfca1ab10 / ext4 defaults 1 1
proc /proc proc nosuid,noexec,nodev 0 0
sysfs /sys sysfs nosuid,noexec,nodev 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
tmpfs /run tmpfs defaults 0 0
devtmpfs /dev devtmpfs mode=0755,nosuid 0 0
tmpfs /dev/shm tmpfs nosuid,nodev 0 0
cgroup2 /sys/fs/cgroup cgroup2 nosuid,noexec,nodev 0 0

# End /etc/fstab

4.3 Generate Configuration File /boot/grub/grub.cfg for GRUB

(lfs chroot) root:/boot# grub-mkconfig -o /boot/grub/grub.cfg
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-6.11.1-lfs-12.2
Warning: os-prober will not be executed to detect other bootable partitions.
Systems on them will not be added to the GRUB boot configuration.
Check GRUB_DISABLE_OS_PROBER documentation entry.
Adding boot menu entry for UEFI Firmware Settings ...
done

Let's take a look at the generated /boot/grub/grub.cfg:

(lfs chroot) root:~# cat /boot/grub/grub.cfg
#
# DO NOT EDIT THIS FILE
#
# It is automatically generated by grub-mkconfig using templates
# from /etc/grub.d and settings from /etc/default/grub
#

### BEGIN /etc/grub.d/00_header ###
if [ -s $prefix/grubenv ]; then
load_env
fi
if [ "${next_entry}" ] ; then
set default="${next_entry}"
set next_entry=
save_env next_entry
set boot_once=true
else
set default="0"
fi

if [ x"${feature_menuentry_id}" = xy ]; then
menuentry_id_option="--id"
else
menuentry_id_option=""
fi

export menuentry_id_option

if [ "${prev_saved_entry}" ]; then
set saved_entry="${prev_saved_entry}"
save_env saved_entry
set prev_saved_entry=
save_env prev_saved_entry
set boot_once=true
fi

function savedefault {
if [ -z "${boot_once}" ]; then
saved_entry="${chosen}"
save_env saved_entry
fi
}

function load_video {
if [ x$feature_all_video_module = xy ]; then
insmod all_video
else
insmod efi_gop
insmod efi_uga
insmod ieee1275_fb
insmod vbe
insmod vga
insmod video_bochs
insmod video_cirrus
fi
}

if [ x$feature_default_font_path = xy ] ; then
font=unicode
else
insmod ext2
search --no-floppy --fs-uuid --set=root 174bc795-5e06-4c0b-979f-7eccfca1ab10
font="/usr/share/grub/unicode.pf2"
fi

if loadfont $font ; then
set gfxmode=auto
load_video
insmod gfxterm
fi
terminal_output gfxterm
if [ x$feature_timeout_style = xy ] ; then
set timeout_style=menu
set timeout=5
# Fallback normal timeout code in case the timeout_style feature is
# unavailable.
else
set timeout=5
fi
### END /etc/grub.d/00_header ###

### BEGIN /etc/grub.d/10_linux ###
menuentry 'GNU/Linux' --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-174bc795-5e06-4c0b-979f-7eccfca1ab10' {
load_video
insmod gzio
insmod ext2
search --no-floppy --fs-uuid --set=root 174bc795-5e06-4c0b-979f-7eccfca1ab10
echo 'Loading Linux 6.11.1-lfs-12.2 ...'
linux /boot/vmlinuz-6.11.1-lfs-12.2 root=/dev/loop14 ro
}
submenu 'Advanced options for GNU/Linux' $menuentry_id_option 'gnulinux-advanced-174bc795-5e06-4c0b-979f-7eccfca1ab10' {
menuentry 'GNU/Linux, with Linux 6.11.1-lfs-12.2' --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-6.11.1-lfs-12.2-advanced-174bc795-5e06-4c0b-979f-7eccfca1ab10' {
load_video
insmod gzio
insmod ext2
search --no-floppy --fs-uuid --set=root 174bc795-5e06-4c0b-979f-7eccfca1ab10
echo 'Loading Linux 6.11.1-lfs-12.2 ...'
linux /boot/vmlinuz-6.11.1-lfs-12.2 root=/dev/loop14 ro
}
menuentry 'GNU/Linux, with Linux 6.11.1-lfs-12.2 (recovery mode)' --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-6.11.1-lfs-12.2-recovery-174bc795-5e06-4c0b-979f-7eccfca1ab10' {
load_video
insmod gzio
insmod ext2
search --no-floppy --fs-uuid --set=root 174bc795-5e06-4c0b-979f-7eccfca1ab10
echo 'Loading Linux 6.11.1-lfs-12.2 ...'
linux /boot/vmlinuz-6.11.1-lfs-12.2 root=/dev/loop14 ro single
}
}

### END /etc/grub.d/10_linux ###

### BEGIN /etc/grub.d/20_linux_xen ###
### END /etc/grub.d/20_linux_xen ###

### BEGIN /etc/grub.d/25_bli ###
if [ "$grub_platform" = "efi" ]; then
insmod bli
fi
### END /etc/grub.d/25_bli ###

### BEGIN /etc/grub.d/30_os-prober ###
### END /etc/grub.d/30_os-prober ###

### BEGIN /etc/grub.d/30_uefi-firmware ###
if [ "$grub_platform" = "efi" ]; then
fwsetup --is-supported
if [ "$?" = 0 ]; then
menuentry 'UEFI Firmware Settings' $menuentry_id_option 'uefi-firmware' {
fwsetup
}
fi
fi
### END /etc/grub.d/30_uefi-firmware ###

### BEGIN /etc/grub.d/40_custom ###
# This file provides an easy way to add custom menu entries. Simply type the
# menu entries you want to add after this comment. Be careful not to change
# the 'exec tail' line above.
### END /etc/grub.d/40_custom ###

### BEGIN /etc/grub.d/41_custom ###
if [ -f ${config_directory}/custom.cfg ]; then
source ${config_directory}/custom.cfg
elif [ -z "${config_directory}" -a -f $prefix/custom.cfg ]; then
source $prefix/custom.cfg
fi
### END /etc/grub.d/41_custom ###
(lfs chroot) root:~#

4.4 Generate UEFI

(lfs chroot) root:~# # Create a FAT filesystem image
dd if=/dev/zero of=/boot/efi.img bs=1M count=10
mkfs.fat /boot/efi.img

# Create the mount point if it doesn't already exist
mkdir -p /boot/efi

# Mount the image to simulate an EFI System Partition
mount -o loop /boot/efi.img /boot/efi
10+0 records in
10+0 records out
10485760 bytes (10 MB, 10 MiB) copied, 0.0157083 s, 668 MB/s
mkfs.fat 4.2 (2021-01-31)
(lfs chroot) root:~#

and then:

(lfs chroot) root:/boot# grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=GRUB --no-nvram --removable
Installing for x86_64-efi platform.
Installation finished. No error reported.

Now, you can see the following file:

(lfs chroot) root:/# ls /boot/efi/EFI/BOOT/BOOTX64.EFI 
/boot/efi/EFI/BOOT/BOOTX64.EFI

4.5 Clean Up

  • Exit the chroot
  • Unmount efi.img (see the sketch below)
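
A minimal sketch of this cleanup, assuming the mount points used earlier in this post:

exit                              # leave the chroot
sudo umount /mnt/lfs/boot/efi     # release efi.img
# optionally, also unmount the virtual kernel filesystems if you are done inside $LFS:
# sudo umount -v /mnt/lfs/{dev/pts,dev/shm,dev,proc,sys,run}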

5. Demonstration

5.1 QEMU

➜  lfs sudo qemu-system-x86_64 -drive file=./lfs.img,format=raw,if=virtio -kernel /mnt/lfs/boot/vmlinuz-6.11.1-lfs-12.2 -append "root=/dev/vda rw rootdelay=5" -m 2048

LFS QEMU Logged In

1. Download the Official Release

  • Qt 6.7.3 Archive
  • Extract under /opt/qt
    [14:49:18] lvision /opt/qt 
    $ tar xvf qt-everywhere-src-6.7.3.tar.xz

2. CMake Configuration with Generator Ninja

[14:47:52] lvision /opt/qt/qt-everywhere-src-6.7.3/build 
$ cmake -G "Ninja" ../

3. Build with Trivial Modifications

[14:51:43] lvision /opt/qt/qt-everywhere-src-6.7.3/build 
$ ccmake ../

The MOST important change: include /usr/include/litehtml in the include path.

4. Installation

[14:51:43] lvision /opt/qt/qt-everywhere-src-6.7.3/build 
$ ninja

and you'll get:

......
In function ‘fxcrt::RetainPtr<T> pdfium::MakeRetain(Args&& ...) [with T = CFX_ReadOnlySpanStream; Args = {span<const unsigned char>}]’,
inlined from ‘std::vector<UnsupportedFeature> CPDF_Metadata::CheckForSharedForm() const’ at ../../../../../../qtwebengine/src/3rdparty/chromium/third_party/pdfium/core/fpdfdoc/cpdf_metadata.cpp:86:75:
../../../../../../qtwebengine/src/3rdparty/chromium/third_party/pdfium/core/fxcrt/retain_ptr.h:210:23: note: returned from ‘void* operator new(std::size_t)’
210 | return RetainPtr<T>(new T(std::forward<Args>(args)...));
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[1323/1323] STAMP QtPdf.stamp
[10785/10857] Automatic MOC for target Pdf
AutoMoc: /opt/qt/qt-everywhere-src-6.7.3/qtwebengine/src/pdf/qpdflinkmodel_p.h: note: No relevant classes found. No output generated.
[10857/10857] Linking CXX shared module qtbase/qml/QtQuick/Pdf/libpdfquickplugin.so

Good, ALL passed.

Finally, do ninja install.
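
A hedged sketch of that final step, assuming the install prefix is /opt/qt/6 (which is what the qmake output below reports):

sudo ninja install                 # install into the configured prefix
export PATH=/opt/qt/6/bin:$PATH    # so that qmake, moc, etc. are found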

➜  ~ qmake --version
QMake version 3.1
Using Qt version 6.7.3 in /opt/qt/6/lib

1. Commercial NAS vs. DIY NAS?

  • For me, the BIGGEST difference between a commercial NAS and a DIY NAS is the price 😂
  • Without considering their respective capabilities, DIY ALWAYS gives you more freedom. And freedom is so, so, so important as well

2. OpenMediaVault on Raspberry Pi 5

2.1 Use Lite Instead Of Desktop

On Raspberry Pi's official webpage Raspbian Operating Systems, find Raspberry Pi OS Lite under the category Raspberry Pi OS (64-bit). As of today, the most up-to-date release is raspios_lite_arm64-2024-07-04.

➜  RaspberryPi sudo dd if=./2024-07-04-raspios-bookworm-arm64-lite.img of=/dev/sda bs=1M status=progress conv=fsync 
2031091712 bytes (2.0 GB, 1.9 GiB) copied, 2 s, 1.0 GB/s2835349504 bytes (2.8 GB, 2.6 GiB) copied, 2.88047 s, 984 MB/s

2704+0 records in
2704+0 records out
2835349504 bytes (2.8 GB, 2.6 GiB) copied, 312.17 s, 9.1 MB/s

Don’t forget to enable SSH after installation.
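
One common way to do that before first boot, assuming the freshly flashed boot partition is auto-mounted on your desktop (Raspberry Pi OS Bookworm labels it bootfs):

touch /media/$USER/bootfs/ssh   # an empty file named "ssh" enables the SSH server at first boot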

2.2 Install OpenMediaVault Upon Raspberry Pi OS Lite

Now, ssh into Raspberry Pi 5:

OpenMediaVault ssh

Following OpenMediaVault's official documentation, OpenMediaVault Installation on Debian, do the following:

  • apt-get install --yes gnupg
  • wget --quiet --output-document=- https://packages.openmediavault.org/public/archive.key | sudo gpg --dearmor --yes --output "/usr/share/keyrings/openmediavault-archive-keyring.gpg"
  • nvim /etc/apt/sources.list.d/openmediavault.list and add
    deb [signed-by=/usr/share/keyrings/openmediavault-archive-keyring.gpg] https://packages.openmediavault.org/public sandworm main
    # deb [signed-by=/usr/share/keyrings/openmediavault-archive-keyring.gpg] https://downloads.sourceforge.net/project/openmediavault/packages sandworm main
    ## Uncomment the following line to add software from the proposed repository.
    # deb [signed-by=/usr/share/keyrings/openmediavault-archive-keyring.gpg] https://packages.openmediavault.org/public sandworm-proposed main
    # deb [signed-by=/usr/share/keyrings/openmediavault-archive-keyring.gpg] https://downloads.sourceforge.net/project/openmediavault/packages sandworm-proposed main
    ## This software is not part of OpenMediaVault, but is offered by third-party
    ## developers as a service to OpenMediaVault users.
    # deb [signed-by=/usr/share/keyrings/openmediavault-archive-keyring.gpg] https://packages.openmediavault.org/public sandworm partner
    # deb [signed-by=/usr/share/keyrings/openmediavault-archive-keyring.gpg] https://downloads.sourceforge.net/project/openmediavault/packages sandworm partner

By the way, as you can see from the OpenMediaVault SourceForge Distributions, sandworm is the MOST RECENT distribution.

  • Exports
    export LANG=C.UTF-8
    export DEBIAN_FRONTEND=noninteractive
    export APT_LISTCHANGES_FRONTEND=none
  • sudo apt-get --yes --auto-remove --show-upgraded --allow-downgrades --allow-change-held-packages --no-install-recommends --option DPkg::Options::="--force-confdef" --option DPkg::Options::="--force-confold" install openmediavault
  • omv-confdbadm populate

Now, ALL done.

2.3 OpenMediaVault Overview

OpenMediaVault Login Page | OpenMediaVault After Login
  • username: admin
  • password: openmediavault, all lower case

OpenMediaVault Dashboard

3. NAS Configuration on My Raspberry Pi 5

3.1 Abbreviation

NAS is the abbreviation of Network Attached Storage.

3.2 Geekbord X1005 with an M.2 to SATA 3.0 Extension Card

3.2.1 Enable NFS

➜  ~ sudo omv-salt deploy run nfs
raspberrypi4b8g:
----------
ID: remove_upgrade_debian12_conf
Function: file.absent
Name: /etc/nfs.conf.d/local.conf
Result: True
Comment: File /etc/nfs.conf.d/local.conf is not present
Started: 15:51:54.017518
Duration: 1.632 ms
Changes:
----------
ID: configure_nfs_conf
Function: file.managed
Name: /etc/nfs.conf.d/99-openmediavault.conf
Result: True
Comment: File /etc/nfs.conf.d/99-openmediavault.conf is in the correct state
Started: 15:51:54.019412
Duration: 407.334 ms
Changes:
----------
ID: configure_idmapd_conf
Function: file.managed
Name: /etc/idmapd.conf
Result: True
Comment: File /etc/idmapd.conf is in the correct state
Started: 15:51:54.427075
Duration: 345.438 ms
Changes:
----------
ID: divert_idmapd_conf
Function: omv_dpkg.divert_add
Name: /etc/idmapd.conf
Result: True
Comment: Leaving 'local diversion of /etc/idmapd.conf to /etc/idmapd.conf.distrib'
Started: 15:51:54.773922
Duration: 46.812 ms
Changes:
----------
ID: configure_nfs_exports
Function: file.managed
Name: /etc/exports
Result: True
Comment: File /etc/exports is in the correct state
Started: 15:51:54.821087
Duration: 411.736 ms
Changes:
----------
ID: divert_nfs_exports
Function: omv_dpkg.divert_add
Name: /etc/exports
Result: True
Comment: Leaving 'local diversion of /etc/exports to /etc/exports.distrib'
Started: 15:51:55.233143
Duration: 47.43 ms
Changes:
----------
ID: stop_nfs_blkmap_service
Function: service.dead
Name: nfs-blkmap
Result: True
Comment: The service nfs-blkmap is already dead
Started: 15:51:58.720879
Duration: 62.323 ms
Changes:
----------
ID: mask_nfs_blkmap_service
Function: service.masked
Name: nfs-blkmap
Result: True
Comment: Service nfs-blkmap is already masked
Started: 15:51:58.783606
Duration: 1.536 ms
Changes:
----------
ID: start_nfs_server_service
Function: service.running
Name: nfs-server
Result: True
Comment: The service nfs-server is already running
Started: 15:51:58.786135
Duration: 49.284 ms
Changes:
----------
ID: restart_nfs_utils_service
Function: service.running
Name: nfs-utils
Result: True
Comment: The service nfs-utils is already running
Started: 15:51:58.836046
Duration: 55.353 ms
Changes:

Summary for raspberrypi4b8g
-------------
Succeeded: 10
Failed: 0
-------------
Total states run: 10
Total run time: 1.429 s

and

➜  ~ sudo cat /proc/fs/nfsd/versions
+2 +3 +4 +4.1 +4.2
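
From any other machine on the LAN, the export can then be verified and mounted. A hedged example, where /export/nas is just a placeholder for whatever shared folder you actually configured in the OMV web UI:

showmount -e raspberrypi4b8g                             # list the exports published by the Pi
sudo mkdir -p /mnt/nas
sudo mount -t nfs raspberrypi4b8g:/export/nas /mnt/nas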

3.2.2 Hailo AI Accelerators On Geekbord X1005

I’ll give it a go with Hailo AI Accelerators some time soon.

As of 2024, it looks like NCSDK is NO longer maintained. Clarifying the relationships among OpenVINO, NCSDK master, and NCSDK ncsdk2 is kind of complicated.

1. OpenVINO Runtime NO Longer Supports Movidius VPU

1.1 OpenVINO Toolkit Release Notes 2023.0

OpenVINO NOT Support NCS2

And it looks like the Long Term Support (LTS) version of OpenVINO, which ends in December 2024, still supports the devices listed above.

Therefore, in order to get my old devices (1 NCS2 and 1 NCS1) working in 2024, I'll have to resort to the Intel® Distribution of OpenVINO™ Toolkit Long-Term Support (LTS).

1.2 Intel® Distribution of OpenVINO™ Toolkit Long-Term Support (LTS) 2022.3

OpenVINO LTS Supports Movidius

1.3 Intel® Distribution of OpenVINO™ Toolkit Long-Term Support (LTS) 2023.3

Ever since LTS 2023.3, OpenVINO started emphasizing GenAI. Nowhere in its release notes is NCS2, NCS1, or anything related to the Movidius VPU mentioned. Given the OpenVINO Toolkit Release Notes 2023.0 quoted above, I think that since 2023 OpenVINO has turned its direction toward GenAI and NO longer supports the Movidius VPU. Therefore, most probably, the Intel® Distribution of OpenVINO™ Toolkit Long-Term Support (LTS) 2023.3 does NOT support Movidius VPU devices either.

OpenVINO GenAI Since 2023

1.4 OpenVINO 2024.2 @ Intel® Distribution of OpenVINO™ Toolkit Release Notes 2024.2

Not to mention the MOST UP-TO-DATE version, 2024.2, which seems NOT to be an LTS. Of course, it does NOT support Movidius VPU devices.

1.5 Conclusion

Go for:

2. NCSDK Does NOT Support AArch64

2.1 get_mvcmd.sh is the key.

2.2 Firmware Is Actually Downloaded But Without AArch64 Support

➜  ncsdk ls NCSDK-2.10.01.01
ncsdk-armv7l ncsdk-x86_64 version.txt
➜ ncsdk ll NCSDK-2.10.01.01/ncsdk-x86_64/fw
Permissions Size User Date Modified Name
.rwxrwxrwx 1.1M root 26 Jan 2019 MvNCAPI-ma2450.mvcmd
➜ ncsdk ll NCSDK-2.10.01.01/ncsdk-armv7l/fw
Permissions Size User Date Modified Name
.rwxrwxrwx 1.1M root 26 Jan 2019 MvNCAPI-ma2450.mvcmd
➜  ncsdk ls NCSDK-1.12.01.01
install-ncsdk.sh ncsdk-armv7l ncsdk-x86_64 ncsdk.conf requirements.txt requirements_apt.txt tests uninstall-ncsdk.sh version.txt
➜ ncsdk ll NCSDK-1.12.01.01/ncsdk-x86_64/fw
Permissions Size User Date Modified Name
.rwxrwxrwx 866k root 5 Jan 2018 MvNCAPI.mvcmd
➜ ncsdk ll NCSDK-1.12.01.01/ncsdk-armv7l/fw
Permissions Size User Date Modified Name
.rwxrwxrwx 866k root 5 Jan 2018 MvNCAPI.mvcmd

2.3 ncsdk-aarch64 for NCS1

Fortunately, I found ncsdk-aarch64. However, it only provides the ncsdk-aarch64 firmware from NCSDK-1.12.00.01, which also seems to be okay for NCSDK-1.12.01.01.

Let’s try it out.

2.3.1 Find the Right Version

You will see a folder named NCSDK-1.12.01.01 (or NCSDK-1.12.00.01), under which there is the key folder ncsdk-aarch64, which actually contains the Aarch64 firmware for NCS1.

➜  ncsdk-aarch64 git:(master) ✗ ls
api docs examples install-opencv.sh install.sh LICENSE Makefile NCSDK-1.12.01.01 ncsdk.conf README.md uninstall-opencv.sh uninstall.sh
➜ ncsdk-aarch64 git:(master) ✗ cd NCSDK-1.12.01.01
➜ NCSDK-1.12.01.01 git:(master) ✗ ls
install-ncsdk.sh ncsdk-aarch64 ncsdk-armv7l ncsdk-x86_64 ncsdk.conf requirements.txt requirements_apt.txt tests uninstall-ncsdk.sh version.txt

2.3.2 Installation

➜  ncsdk-aarch64 git:(master) ✗ cd api/src
➜ src git:(master) ✗ sudo make basicinstall
mkdir -p /usr/local/include/
mkdir -p /usr/local/lib/
cp obj-aarch64/libmvnc.so.0 /usr/local/lib/
ln -fs libmvnc.so.0 /usr/local/lib/libmvnc.so
cp ../include/*.h /usr/local/include/
mkdir -p /usr/local/lib/mvnc
cp mvnc/MvNCAPI.mvcmd /usr/local/lib/mvnc/
mkdir -p /etc/udev/rules.d/
cp 97-usbboot.rules /etc/udev/rules.d/
➜ src git:(master) ✗ sudo make pythoninstall
mkdir -p /usr/local/lib/python3.11/dist-packages
cp -r ../python/mvnc /usr/local/lib/python3.11/dist-packages/
➜ src git:(master) ✗ sudo make postinstall
udevadm control --reload-rules
udevadm trigger
ldconfig

2.3.3 Demonstrate Examples on My Raspberry Pi 5

➜  examples git:(master) ✗ make
make -C apps/.
make[1]: Entering directory '/opt/intel/ncsdk-aarch64/examples/apps'
make -C hello_ncs_cpp/.
make[2]: Entering directory '/opt/intel/ncsdk-aarch64/examples/apps/hello_ncs_cpp'

making hello_ncs_cpp
g++ cpp/hello_ncs.cpp -o cpp/hello_ncs_cpp -lmvnc
Created cpp/hello_ncs_cpp executable
make[2]: Leaving directory '/opt/intel/ncsdk-aarch64/examples/apps/hello_ncs_cpp'
make -C hello_ncs_py/.
make[2]: Entering directory '/opt/intel/ncsdk-aarch64/examples/apps/hello_ncs_py'
nothing to make, use 'make run' to run.
make[2]: Leaving directory '/opt/intel/ncsdk-aarch64/examples/apps/hello_ncs_py'
make -C multistick_cpp/.
make[2]: Entering directory '/opt/intel/ncsdk-aarch64/examples/apps/multistick_cpp'

making googlenet
(cd ../../caffe/GoogLeNet; make compile; cd ../../apps/multistick_cpp; cp ../../caffe/GoogLeNet/graph ./googlenet.graph;)
make[3]: Entering directory '/opt/intel/ncsdk-aarch64/examples/caffe/GoogLeNet'

making prereqs
(cd ../../data/ilsvrc12; make)
make[4]: Entering directory '/opt/intel/ncsdk-aarch64/examples/data/ilsvrc12'
make[4]: Leaving directory '/opt/intel/ncsdk-aarch64/examples/data/ilsvrc12'

making prototxt
Prototxt file already exists

making caffemodel
caffemodel file already exists

making compile
mvNCCompile -w bvlc_googlenet.caffemodel -s 12 deploy.prototxt
mvNCCompile v02.00, Copyright @ Movidius Ltd 2016

Layer inception_3b/1x1 forced to im2col_v2, because its output is used in concat
Layer inception_3b/pool_proj forced to im2col_v2, because its output is used in concat
Layer inception_4a/1x1 forced to im2col_v2, because its output is used in concat
Layer inception_4a/pool_proj forced to im2col_v2, because its output is used in concat
Layer inception_4b/1x1 forced to im2col_v2, because its output is used in concat
Layer inception_4b/pool_proj forced to im2col_v2, because its output is used in concat
Layer inception_4c/1x1 forced to im2col_v2, because its output is used in concat
Layer inception_4c/pool_proj forced to im2col_v2, because its output is used in concat
Layer inception_4d/1x1 forced to im2col_v2, because its output is used in concat
Layer inception_4d/pool_proj forced to im2col_v2, because its output is used in concat
Layer inception_4e/1x1 forced to im2col_v2, because its output is used in concat
Layer inception_4e/pool_proj forced to im2col_v2, because its output is used in concat
Layer inception_5a/1x1 forced to im2col_v2, because its output is used in concat
Layer inception_5a/pool_proj forced to im2col_v2, because its output is used in concat
Layer inception_5b/1x1 forced to im2col_v2, because its output is used in concat
Layer inception_5b/pool_proj forced to im2col_v2, because its output is used in concat
/usr/local/bin/ncsdk/Controllers/FileIO.py:50: UserWarning: You are using a large type. Consider reducing your data sizes for best performance
warnings.warn(
make[3]: Leaving directory '/opt/intel/ncsdk-aarch64/examples/caffe/GoogLeNet'

making squeezenet
(cd ../../caffe/SqueezeNet; make compile; cd ../../apps/multistick_cpp; cp ../../caffe/SqueezeNet/graph ./squeezenet.graph;)
make[3]: Entering directory '/opt/intel/ncsdk-aarch64/examples/caffe/SqueezeNet'

making prereqs
(cd ../../data/ilsvrc12; make)
make[4]: Entering directory '/opt/intel/ncsdk-aarch64/examples/data/ilsvrc12'
make[4]: Leaving directory '/opt/intel/ncsdk-aarch64/examples/data/ilsvrc12'
LICENSE file already exists

making prototxt
Prototxt file already exists

making caffemodel
caffemodel file already exists

making compile
mvNCCompile -w squeezenet_v1.0.caffemodel -s 12 deploy.prototxt
mvNCCompile v02.00, Copyright @ Movidius Ltd 2016

/usr/local/bin/ncsdk/Controllers/FileIO.py:50: UserWarning: You are using a large type. Consider reducing your data sizes for best performance
warnings.warn(
make[3]: Leaving directory '/opt/intel/ncsdk-aarch64/examples/caffe/SqueezeNet'

making multistick_cpp
cp googlenet.graph cpp/googlenet.graph;
cp squeezenet.graph cpp/squeezenet.graph;
g++ cpp/multistick.cpp cpp/fp16.c -o cpp/multistick_cpp -lmvnc
cpp/multistick.cpp: In function ‘bool DoInferenceOnImageFile(void*, const char*, int, float*)’:
cpp/multistick.cpp:307:1: warning: control reaches end of non-void function [-Wreturn-type]
307 | }
| ^
Created cpp/multistick_cpp executable
make[2]: Leaving directory '/opt/intel/ncsdk-aarch64/examples/apps/multistick_cpp'
make[1]: Leaving directory '/opt/intel/ncsdk-aarch64/examples/apps'
make -C caffe/.
make[1]: Entering directory '/opt/intel/ncsdk-aarch64/examples/caffe'
make -C AlexNet/.
make[2]: Entering directory '/opt/intel/ncsdk-aarch64/examples/caffe/AlexNet'

making prereqs
(cd ../../data/ilsvrc12; make)
make[3]: Entering directory '/opt/intel/ncsdk-aarch64/examples/data/ilsvrc12'
make[3]: Leaving directory '/opt/intel/ncsdk-aarch64/examples/data/ilsvrc12'
chmod +x run.py

making prototxt
Prototxt file already exists

making profile
mvNCProfile deploy.prototxt -s 12
mvNCProfile v02.00, Copyright @ Movidius Ltd 2016

****** WARNING: using empty weights ******
/usr/local/bin/ncsdk/Controllers/FileIO.py:50: UserWarning: You are using a large type. Consider reducing your data sizes for best performance
warnings.warn(
USB: Transferring Data...
/usr/local/lib/python3.11/dist-packages/mvnc/mvncapi.py:244: DeprecationWarning: tostring() is deprecated. Use tobytes() instead.
tensor = tensor.tostring()
Time to Execute : 107.91 ms
USB: Myriad Execution Finished
Time to Execute : 103.26 ms
USB: Myriad Execution Finished
USB: Myriad Connection Closing.
USB: Myriad Connection Closed.
Network Summary

Detailed Per Layer Profile
Bandwidth time
# Name MFLOPs (MB/s) (ms)
======================================================================================================================================================================================================================================
0 data 0.0 126457.6 0.002
1 conv1 210.8 7434.1 4.811
2 norm1 0.0 721.2 0.768
3 pool1 0.6 1253.9 0.442
4 pool1_s0 0.0 1372.8 0.049
5 conv2_p0 223.9 596.2 6.099
6 pool1_s1 0.0 1455.7 0.046
7 conv2_p1 223.9 585.2 6.213
8 norm2 0.0 697.8 0.510
9 pool2 0.4 1308.0 0.272
10 conv3 299.0 302.0 8.048
11 conv3_s0 0.0 1582.4 0.039
12 conv4_p0 112.1 543.7 3.219
13 conv3_s1 0.0 1492.2 0.041
14 conv4_p1 112.1 560.1 3.125
15 conv4_p0_s0 0.0 1314.0 0.047
16 conv5_p0 74.8 578.5 2.659
17 conv4_p0_s1 0.0 1317.5 0.047
18 conv5_p1 74.8 580.0 2.652
19 pool5 0.1 918.4 0.090
20 fc6 75.5 2144.1 33.589
21 fc7 33.6 2132.6 15.008
22 fc8 8.2 2651.8 2.949
23 prob 0.0 7.6 0.253
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total inference time 90.98
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Generating Profile Report 'output_report.html'...

making caffemodel
caffemodel file already exists

making check
mvNCCheck -w bvlc_alexnet.caffemodel -i ../../data/images/cat.jpg -s 12 -id 281 deploy.prototxt -M 110 -S 255 -metric top1
mvNCCheck v02.00, Copyright @ Movidius Ltd 2016

/usr/local/bin/ncsdk/Controllers/FileIO.py:50: UserWarning: You are using a large type. Consider reducing your data sizes for best performance
warnings.warn(
USB: Transferring Data...
/usr/local/lib/python3.11/dist-packages/mvnc/mvncapi.py:244: DeprecationWarning: tostring() is deprecated. Use tobytes() instead.
tensor = tensor.tostring()
USB: Myriad Execution Finished
USB: Myriad Connection Closing.
USB: Myriad Connection Closed.

Result: Validation Pass

Result: (1000,)
1) 281 0.512
Expected: (1000,)
1) 281 0.522
------------------------------------------------------------
Obtained values
------------------------------------------------------------
Obtained Min Pixel Accuracy: 2.385406941175461% (max allowed=2%), Fail
Obtained Average Pixel Accuracy: 0.0052463514293776825% (max allowed=1%), Pass
Obtained Percentage of wrong values: 0.1% (max allowed=0%), Fail
Obtained Pixel-wise L2 error: 0.09619102055778048% (max allowed=1%), Pass
Obtained Global Sum Difference: 0.027384519577026367
------------------------------------------------------------

making compile
mvNCCompile -w bvlc_alexnet.caffemodel -s 12 deploy.prototxt
mvNCCompile v02.00, Copyright @ Movidius Ltd 2016

/usr/local/bin/ncsdk/Controllers/FileIO.py:50: UserWarning: You are using a large type. Consider reducing your data sizes for best performance
warnings.warn(

making cpp
g++ cpp/run.cpp cpp/fp16.c -o cpp/run_cpp -lmvnc
make[2]: Leaving directory '/opt/intel/ncsdk-aarch64/examples/caffe/AlexNet'
make -C GoogLeNet/.
make[2]: Entering directory '/opt/intel/ncsdk-aarch64/examples/caffe/GoogLeNet'

making prereqs
(cd ../../data/ilsvrc12; make)
make[3]: Entering directory '/opt/intel/ncsdk-aarch64/examples/data/ilsvrc12'
make[3]: Leaving directory '/opt/intel/ncsdk-aarch64/examples/data/ilsvrc12'

making prototxt
Prototxt file already exists

making profile
mvNCProfile deploy.prototxt -s 12
mvNCProfile v02.00, Copyright @ Movidius Ltd 2016

****** WARNING: using empty weights ******
Layer inception_3b/1x1 forced to im2col_v2, because its output is used in concat
Layer inception_3b/pool_proj forced to im2col_v2, because its output is used in concat
Layer inception_4a/1x1 forced to im2col_v2, because its output is used in concat
Layer inception_4a/pool_proj forced to im2col_v2, because its output is used in concat
Layer inception_4b/1x1 forced to im2col_v2, because its output is used in concat
Layer inception_4b/pool_proj forced to im2col_v2, because its output is used in concat
Layer inception_4c/1x1 forced to im2col_v2, because its output is used in concat
Layer inception_4c/pool_proj forced to im2col_v2, because its output is used in concat
Layer inception_4d/1x1 forced to im2col_v2, because its output is used in concat
Layer inception_4d/pool_proj forced to im2col_v2, because its output is used in concat
Layer inception_4e/1x1 forced to im2col_v2, because its output is used in concat
Layer inception_4e/pool_proj forced to im2col_v2, because its output is used in concat
Layer inception_5a/1x1 forced to im2col_v2, because its output is used in concat
Layer inception_5a/pool_proj forced to im2col_v2, because its output is used in concat
Layer inception_5b/1x1 forced to im2col_v2, because its output is used in concat
Layer inception_5b/pool_proj forced to im2col_v2, because its output is used in concat
/usr/local/bin/ncsdk/Controllers/FileIO.py:50: UserWarning: You are using a large type. Consider reducing your data sizes for best performance
warnings.warn(
USB: Transferring Data...
/usr/local/lib/python3.11/dist-packages/mvnc/mvncapi.py:244: DeprecationWarning: tostring() is deprecated. Use tobytes() instead.
tensor = tensor.tostring()
Time to Execute : 130.42 ms
USB: Myriad Execution Finished
Time to Execute : 108.35 ms
USB: Myriad Execution Finished
USB: Myriad Connection Closing.
USB: Myriad Connection Closed.
Network Summary

Detailed Per Layer Profile
Bandwidth time
# Name MFLOPs (MB/s) (ms)
======================================================================================================================================================================================================================================
0 data 0.0 46160.0 0.006
1 conv1/7x7_s2 236.0 2466.8 5.713
2 pool1/3x3_s2 1.8 1340.9 1.142
3 pool1/norm1 0.0 701.9 0.546
4 conv2/3x3_reduce 25.7 478.8 0.816
5 conv2/3x3 693.6 307.6 11.891
6 conv2/norm2 0.0 787.5 1.458
7 pool2/3x3_s2 1.4 1390.2 0.826
8 inception_3a/1x1 19.3 535.1 0.581
9 inception_3a/3x3_reduce 28.9 459.3 0.702
10 inception_3a/3x3 173.4 314.3 4.788
11 inception_3a/5x5_reduce 4.8 1025.6 0.286
12 inception_3a/5x5 20.1 678.0 0.921
13 inception_3a/pool 1.4 578.0 0.497
14 inception_3a/pool_proj 9.6 640.6 0.467
15 inception_3b/1x1 51.4 444.3 1.003
16 inception_3b/3x3_reduce 51.4 446.2 0.998
17 inception_3b/3x3 346.8 257.6 8.339
18 inception_3b/5x5_reduce 12.8 877.2 0.454
19 inception_3b/5x5 120.4 522.8 2.577
20 inception_3b/pool 1.8 660.6 0.580
21 inception_3b/pool_proj 25.7 618.7 0.669
22 pool3/3x3_s2 0.8 1215.8 0.590
23 inception_4a/1x1 36.1 358.6 0.991
24 inception_4a/3x3_reduce 18.1 498.7 0.536
25 inception_4a/3x3 70.4 296.2 2.259
26 inception_4a/5x5_reduce 3.0 756.2 0.257
27 inception_4a/5x5 7.5 412.6 0.457
28 inception_4a/pool 0.8 505.0 0.355
29 inception_4a/pool_proj 12.0 547.8 0.435
30 inception_4b/1x1 32.1 330.5 1.053
31 inception_4b/3x3_reduce 22.5 375.9 0.800
32 inception_4b/3x3 88.5 285.9 2.838
33 inception_4b/5x5_reduce 4.8 573.7 0.375
34 inception_4b/5x5 15.1 309.1 0.973
35 inception_4b/pool 0.9 532.7 0.359
36 inception_4b/pool_proj 12.8 537.0 0.473
37 inception_4c/1x1 25.7 402.8 0.786
38 inception_4c/3x3_reduce 25.7 389.8 0.812
39 inception_4c/3x3 115.6 260.4 3.831
40 inception_4c/5x5_reduce 4.8 584.9 0.367
41 inception_4c/5x5 15.1 308.1 0.976
42 inception_4c/pool 0.9 627.2 0.305
43 inception_4c/pool_proj 12.8 551.2 0.461
44 inception_4d/1x1 22.5 413.7 0.728
45 inception_4d/3x3_reduce 28.9 472.7 0.702
46 inception_4d/3x3 146.3 399.4 3.008
47 inception_4d/5x5_reduce 6.4 637.4 0.349
48 inception_4d/5x5 20.1 388.8 1.028
49 inception_4d/pool 0.9 610.5 0.314
50 inception_4d/pool_proj 12.8 566.9 0.448
51 inception_4e/1x1 53.0 297.6 1.531
52 inception_4e/3x3_reduce 33.1 258.2 1.389
53 inception_4e/3x3 180.6 280.8 5.067
54 inception_4e/5x5_reduce 6.6 493.5 0.465
55 inception_4e/5x5 40.1 356.2 1.405
56 inception_4e/pool 0.9 600.5 0.329
57 inception_4e/pool_proj 26.5 430.8 0.758
58 pool4/3x3_s2 0.4 1221.7 0.255
59 inception_5a/1x1 20.9 614.2 0.789
60 inception_5a/3x3_reduce 13.0 553.5 0.599
61 inception_5a/3x3 45.2 550.4 1.851
62 inception_5a/5x5_reduce 2.6 336.2 0.382
63 inception_5a/5x5 10.0 427.3 0.646
64 inception_5a/pool 0.4 405.1 0.192
65 inception_5a/pool_proj 10.4 484.7 0.580
66 inception_5b/1x1 31.3 611.9 1.124
67 inception_5b/3x3_reduce 15.7 608.1 0.629
68 inception_5b/3x3 65.0 569.9 2.516
69 inception_5b/5x5_reduce 3.9 362.9 0.424
70 inception_5b/5x5 15.1 438.7 0.937
71 inception_5b/pool 0.4 483.8 0.161
72 inception_5b/pool_proj 10.4 502.0 0.560
73 pool5/7x7_s1 0.1 460.8 0.208
74 loss3/classifier 2.0 2497.0 0.783
75 prob 0.0 9.5 0.200
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total inference time 95.91
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Generating Profile Report 'output_report.html'...

making caffemodel
caffemodel file already exists

making check
mvNCCheck -w bvlc_googlenet.caffemodel -i ../../data/images/nps_electric_guitar.png -s 12 -id 546 deploy.prototxt -S 255 -M 110 -metric top1
mvNCCheck v02.00, Copyright @ Movidius Ltd 2016

Layer inception_3b/1x1 forced to im2col_v2, because its output is used in concat
Layer inception_3b/pool_proj forced to im2col_v2, because its output is used in concat
Layer inception_4a/1x1 forced to im2col_v2, because its output is used in concat
Layer inception_4a/pool_proj forced to im2col_v2, because its output is used in concat
Layer inception_4b/1x1 forced to im2col_v2, because its output is used in concat
Layer inception_4b/pool_proj forced to im2col_v2, because its output is used in concat
Layer inception_4c/1x1 forced to im2col_v2, because its output is used in concat
Layer inception_4c/pool_proj forced to im2col_v2, because its output is used in concat
Layer inception_4d/1x1 forced to im2col_v2, because its output is used in concat
Layer inception_4d/pool_proj forced to im2col_v2, because its output is used in concat
Layer inception_4e/1x1 forced to im2col_v2, because its output is used in concat
Layer inception_4e/pool_proj forced to im2col_v2, because its output is used in concat
Layer inception_5a/1x1 forced to im2col_v2, because its output is used in concat
Layer inception_5a/pool_proj forced to im2col_v2, because its output is used in concat
Layer inception_5b/1x1 forced to im2col_v2, because its output is used in concat
Layer inception_5b/pool_proj forced to im2col_v2, because its output is used in concat
/usr/local/bin/ncsdk/Controllers/FileIO.py:50: UserWarning: You are using a large type. Consider reducing your data sizes for best performance
warnings.warn(
USB: Transferring Data...
/usr/local/lib/python3.11/dist-packages/mvnc/mvncapi.py:244: DeprecationWarning: tostring() is deprecated. Use tobytes() instead.
tensor = tensor.tostring()
USB: Myriad Execution Finished
USB: Myriad Connection Closing.
USB: Myriad Connection Closed.

Result: Validation Pass

Result: (1000,)
1) 546 0.995
Expected: (1000,)
1) 546 0.9946
------------------------------------------------------------
Obtained values
------------------------------------------------------------
Obtained Min Pixel Accuracy: 0.049091799883171916% (max allowed=2%), Pass
Obtained Average Pixel Accuracy: 7.989402774910559e-05% (max allowed=1%), Pass
Obtained Percentage of wrong values: 0.0% (max allowed=0%), Pass
Obtained Pixel-wise L2 error: 0.0017114860971367098% (max allowed=1%), Pass
Obtained Global Sum Difference: 0.0007946491241455078
------------------------------------------------------------

making compile
mvNCCompile -w bvlc_googlenet.caffemodel -s 12 deploy.prototxt
mvNCCompile v02.00, Copyright @ Movidius Ltd 2016

Layer inception_3b/1x1 forced to im2col_v2, because its output is used in concat
Layer inception_3b/pool_proj forced to im2col_v2, because its output is used in concat
Layer inception_4a/1x1 forced to im2col_v2, because its output is used in concat
Layer inception_4a/pool_proj forced to im2col_v2, because its output is used in concat
Layer inception_4b/1x1 forced to im2col_v2, because its output is used in concat
Layer inception_4b/pool_proj forced to im2col_v2, because its output is used in concat
Layer inception_4c/1x1 forced to im2col_v2, because its output is used in concat
Layer inception_4c/pool_proj forced to im2col_v2, because its output is used in concat
Layer inception_4d/1x1 forced to im2col_v2, because its output is used in concat
Layer inception_4d/pool_proj forced to im2col_v2, because its output is used in concat
Layer inception_4e/1x1 forced to im2col_v2, because its output is used in concat
Layer inception_4e/pool_proj forced to im2col_v2, because its output is used in concat
Layer inception_5a/1x1 forced to im2col_v2, because its output is used in concat
Layer inception_5a/pool_proj forced to im2col_v2, because its output is used in concat
Layer inception_5b/1x1 forced to im2col_v2, because its output is used in concat
Layer inception_5b/pool_proj forced to im2col_v2, because its output is used in concat
/usr/local/bin/ncsdk/Controllers/FileIO.py:50: UserWarning: You are using a large type. Consider reducing your data sizes for best performance
warnings.warn(

making cpp
g++ cpp/run.cpp cpp/fp16.c -o cpp/run_cpp -lmvnc
make[2]: Leaving directory '/opt/intel/ncsdk-aarch64/examples/caffe/GoogLeNet'
make -C SqueezeNet/.
make[2]: Entering directory '/opt/intel/ncsdk-aarch64/examples/caffe/SqueezeNet'

making prereqs
(cd ../../data/ilsvrc12; make)
make[3]: Entering directory '/opt/intel/ncsdk-aarch64/examples/data/ilsvrc12'
make[3]: Leaving directory '/opt/intel/ncsdk-aarch64/examples/data/ilsvrc12'
LICENSE file already exists

making prototxt
Prototxt file already exists

making profile
mvNCProfile deploy.prototxt -s 12
mvNCProfile v02.00, Copyright @ Movidius Ltd 2016

****** WARNING: using empty weights ******
/usr/local/bin/ncsdk/Controllers/FileIO.py:50: UserWarning: You are using a large type. Consider reducing your data sizes for best performance
warnings.warn(
USB: Transferring Data...
/usr/local/lib/python3.11/dist-packages/mvnc/mvncapi.py:244: DeprecationWarning: tostring() is deprecated. Use tobytes() instead.
tensor = tensor.tostring()
Time to Execute : 78.37 ms
USB: Myriad Execution Finished
Time to Execute : 60.05 ms
USB: Myriad Execution Finished
USB: Myriad Connection Closing.
USB: Myriad Connection Closed.
Network Summary

Detailed Per Layer Profile
Bandwidth time
# Name MFLOPs (MB/s) (ms)
======================================================================================================================================================================================================================================
0 data 0.0 96516.2 0.003
1 conv1 347.7 1634.5 8.861
2 pool1 2.6 1445.0 1.561
3 fire2/squeeze1x1 9.3 1222.9 0.455
4 fire2/expand1x1 6.2 152.1 0.620
5 fire2/expand3x3 55.8 475.0 1.789
6 fire3/squeeze1x1 12.4 1466.3 0.506
7 fire3/expand1x1 6.2 158.4 0.596
8 fire3/expand3x3 55.8 475.9 1.785
9 fire4/squeeze1x1 24.8 976.6 0.764
10 fire4/expand1x1 24.8 174.3 1.106
11 fire4/expand3x3 223.0 391.4 4.430
12 pool4 1.7 1268.5 1.164
13 fire5/squeeze1x1 11.9 788.7 0.471
14 fire5/expand1x1 6.0 151.4 0.347
15 fire5/expand3x3 53.7 376.4 1.257
16 fire6/squeeze1x1 17.9 699.1 0.543
17 fire6/expand1x1 13.4 159.6 0.531
18 fire6/expand3x3 120.9 256.4 2.973
19 fire7/squeeze1x1 26.9 827.4 0.688
20 fire7/expand1x1 13.4 159.9 0.530
21 fire7/expand3x3 120.9 259.7 2.935
22 fire8/squeeze1x1 35.8 735.0 0.790
23 fire8/expand1x1 23.9 155.7 0.775
24 fire8/expand3x3 215.0 201.3 5.396
25 pool8 0.8 1265.2 0.563
26 fire9/squeeze1x1 11.1 587.7 0.387
27 fire9/expand1x1 5.5 153.4 0.341
28 fire9/expand3x3 49.8 288.2 1.636
29 conv10 173.1 331.1 3.448
30 pool10 0.3 671.0 0.480
31 prob 0.0 9.5 0.202
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total inference time 47.94
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Generating Profile Report 'output_report.html'...

making caffemodel
caffemodel file already exists

making check
mvNCCheck -w squeezenet_v1.0.caffemodel -i ../../data/images/cat.jpg -s 12 -id 281 deploy.prototxt -S 255 -M 120 -metric top1
mvNCCheck v02.00, Copyright @ Movidius Ltd 2016

/usr/local/bin/ncsdk/Controllers/FileIO.py:50: UserWarning: You are using a large type. Consider reducing your data sizes for best performance
warnings.warn(
USB: Transferring Data...
/usr/local/lib/python3.11/dist-packages/mvnc/mvncapi.py:244: DeprecationWarning: tostring() is deprecated. Use tobytes() instead.
tensor = tensor.tostring()
USB: Myriad Execution Finished
USB: Myriad Connection Closing.
USB: Myriad Connection Closed.

Result: Validation Pass

Result: (1000, 1, 1)
1) 281 0.6177
Expected: (1000, 1, 1)
1) 281 0.616
------------------------------------------------------------
Obtained values
------------------------------------------------------------
Obtained Min Pixel Accuracy: 0.6933439057320356% (max allowed=2%), Pass
Obtained Average Pixel Accuracy: 0.0016120149666676298% (max allowed=1%), Pass
Obtained Percentage of wrong values: 0.0% (max allowed=0%), Pass
Obtained Pixel-wise L2 error: 0.029018872980330863% (max allowed=1%), Pass
Obtained Global Sum Difference: 0.009933412075042725
------------------------------------------------------------

making compile
mvNCCompile -w squeezenet_v1.0.caffemodel -s 12 deploy.prototxt
mvNCCompile v02.00, Copyright @ Movidius Ltd 2016

/usr/local/bin/ncsdk/Controllers/FileIO.py:50: UserWarning: You are using a large type. Consider reducing your data sizes for best performance
warnings.warn(

making cpp
g++ cpp/run.cpp cpp/fp16.c -o cpp/run_cpp -lmvnc
make[2]: Leaving directory '/opt/intel/ncsdk-aarch64/examples/caffe/SqueezeNet'
make[1]: Leaving directory '/opt/intel/ncsdk-aarch64/examples/caffe'
make -C data/.
make[1]: Entering directory '/opt/intel/ncsdk-aarch64/examples/data'
Possible Make targets
make help - shows this message
make clean - Removes all temp files from all directories
make[1]: Leaving directory '/opt/intel/ncsdk-aarch64/examples/data'
make -C tensorflow/.
make[1]: Entering directory '/opt/intel/ncsdk-aarch64/examples/tensorflow'
make -C inception_v1/.
make[2]: Entering directory '/opt/intel/ncsdk-aarch64/examples/tensorflow/inception_v1'
test -f output/inception-v1.meta || ((wget http://download.tensorflow.org/models/inception_v1_2016_08_28.tar.gz && tar zxf inception_v1_2016_08_28.tar.gz && rm inception_v1_2016_08_28.tar.gz) && ./inception-v1.py)
--2024-07-29 13:06:28-- http://download.tensorflow.org/models/inception_v1_2016_08_28.tar.gz
Resolving download.tensorflow.org (download.tensorflow.org)... 2607:f8b0:400a:803::201b, 2607:f8b0:400a:805::201b, 2607:f8b0:400a:807::201b, ...
Connecting to download.tensorflow.org (download.tensorflow.org)|2607:f8b0:400a:803::201b|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 24642554 (24M) [application/x-tar]
Saving to: ‘inception_v1_2016_08_28.tar.gz’

inception_v1_2016_08_28.tar.gz 100%[====================================================================================================================================>] 23.50M 98.9MB/s in 0.2s

2024-07-29 13:06:28 (98.9 MB/s) - ‘inception_v1_2016_08_28.tar.gz’ saved [24642554/24642554]

Traceback (most recent call last):
File "/opt/intel/ncsdk-aarch64/examples/tensorflow/inception_v1/./inception-v1.py", line 6, in <module>
from tensorflow.contrib.slim.nets import inception
ModuleNotFoundError: No module named 'tensorflow.contrib'
make[2]: *** [Makefile:47: weights] Error 1
make[2]: Leaving directory '/opt/intel/ncsdk-aarch64/examples/tensorflow/inception_v1'
make[1]: *** [Makefile:12: inception_v1/.] Error 2
make[1]: Leaving directory '/opt/intel/ncsdk-aarch64/examples/tensorflow'
make: *** [Makefile:12: tensorflow/.] Error 2

I'm going to give up on the TensorFlow version incompatibility, since TensorFlow 1.x has NOT been maintained for years. The examples inception_v1 and inception_v3 are only compatible with TensorFlow 1.x, not 2.x.

➜  examples git:(master) ✗ ls tensorflow 
inception_v1 inception_v3 Makefile readme.md

2.4 ncsdk2-aarch64 for NCS2

2.4.1 Download the Right NCSDK-2.10.01.01.tar.gz

  • git clone https://github.com/LE0xUL/ncsdk2_aarch64.git won’t provide the correct NCSDK-2.10.01.01.tar.gz.
➜  ncsdk2_aarch64 git:(aarch64) ✗ ll
Permissions Size User Date Modified Name
drwxr-xr-x - lvision 29 Jul 15:34 .git
.rw-r--r-- 564 lvision 29 Jul 12:16 .gitattributes
.rw-r--r-- 212 lvision 29 Jul 12:16 .gitignore
......
.rw-r--r-- 132 lvision 29 Jul 12:16 NCSDK-2.10.01.01.tar.gz
......

As you can see, the git-cloned NCSDK-2.10.01.01.tar.gz is ONLY 132 bytes in size.
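
By the way, a 132-byte file in a repository that also carries a .gitattributes entry usually means the tarball is tracked with Git LFS and only the pointer file was cloned. If that is the case here (my assumption, not something the upstream README confirms), something like the following should pull the real 2.4M archive shown below:

sudo apt install git-lfs   # or your distro's equivalent
git lfs install
git lfs pull               # replaces LFS pointer files with the actual binaries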

➜  ncsdk2_aarch64 git:(aarch64) ✗ ll NCSDK-2.10.01.01.tar.gz 
Permissions Size User Date Modified Name
.rw-r--r-- 2.4M lvision 29 Jul 16:12 NCSDK-2.10.01.01.tar.gz

After extraction, you'll also see the most important folder named ncsdk-aarch64, which actually contains the AArch64 firmware for NCS2.

➜  ncsdk2_aarch64 git:(aarch64) ✗ ls
api docs extras install-utilities.sh LICENSE NCSDK-2.10.01.01 ncsdk.conf requirements.txt requirements_apt_raspbian.txt uninstall-opencv.sh version.txt
ATTRIBUTIONS examples install-opencv.sh install.sh Makefile NCSDK-2.10.01.01.tar.gz README.md requirements_apt.txt tensorflow-1.11.0-cp35-none-linux_aarch64.whl uninstall.sh
➜ ncsdk2_aarch64 git:(aarch64) ✗ cd NCSDK-2.10.01.01
➜ NCSDK-2.10.01.01 git:(aarch64) ✗ ls
ncsdk-aarch64 ncsdk-armv7l ncsdk-x86_64 version.txt

2.4.2 Installation

➜  ncsdk2_aarch64 git:(aarch64) ✗ cd api/src 
➜ src git:(aarch64) ✗ sudo make basicinstall
NCSDK FW successfully installed
mkdir -p /usr/local/include/
mkdir -p /usr/local/include/mvnc2
mkdir -p /usr/local/lib/
cp obj-aarch64/libmvnc.so.0 /usr/local/lib/
ln -fs libmvnc.so.0 /usr/local/lib/libmvnc.so
cp ../include/mvnc.h /usr/local/include/mvnc2
ln -fs /usr/local/include/mvnc2/mvnc.h /usr/local/include/mvnc.h
mkdir -p /usr/local/lib/mvnc
cp mvnc/MvNCAPI-*.mvcmd /usr/local/lib/mvnc/
mkdir -p /etc/udev/rules.d/
cp 97-ncs2.rules /etc/udev/rules.d/
➜ src git:(aarch64) ✗ sudo make pythoninstall
mkdir -p /usr/local/lib/python3.11/dist-packages
cp -r ../python/mvnc /usr/local/lib/python3.11/dist-packages/
➜ src git:(aarch64) ✗ sudo make postinstall
udevadm control --reload-rules
udevadm trigger
ldconfig
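
Before running the examples, a quick hedged check (my own addition) that the NCS2 firmware, udev rules and Python bindings landed where the make targets above put them. enumerate_devices() is the NCSDK 2.x Python API, so this assumes the bindings import cleanly under Python 3.11.

ls /usr/local/lib/mvnc/                  # should list the MvNCAPI-*.mvcmd firmware files
ls /etc/udev/rules.d/97-ncs2.rules       # udev rules installed by basicinstall
# hedged: list attached sticks via the NCSDK 2.x Python bindings
python3 -c "from mvnc import mvncapi; print(mvncapi.enumerate_devices())"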

2.4.3 Demonstrate Examples on My Raspberry Pi 5

Let’s just try hello_ncs.py.

➜  hello_ncs_py git:(aarch64) ✗ python hello_ncs.py 
D: [ 0] ncDeviceCreate:307 ncDeviceCreate index 0

D: [ 0] ncDeviceCreate:307 ncDeviceCreate index 1

D: [ 0] ncDeviceOpen:523 File path /usr/local/lib/mvnc/MvNCAPI-ma2480.mvcmd

W: [ 0] ncDeviceOpen:527 ncDeviceOpen() XLinkBootRemote returned error 3

Error - Could not open NCS device.

It looks like I dealt with this issue in my old blog OpenVINO on Raspberry Pi 4 with Movidius Neural Compute Stick II, where I'd already provided the solution:

OpenVINO should be used instead of NCSDK 2.

3. Intel® Distribution of OpenVINO™ Toolkit Long-Term Support (LTS) 2022.3 for NCS2

3.1 Fix an Error

In file /opt/intel/OpenVINO/openvino/thirdparty/ocv/opencv_hal_neon.hpp, the function v_deinterleave_expand is clearly buggy: -1 should NEVER appear in a uchar initializer list (modern compilers reject it as a narrowing conversion, and in an unsigned char the bit pattern of -1 is 255 anyway). As a temporary solution, just change -1 to 255, namely:

Change

constexpr int nlanes = static_cast<int>(v_uint8x16::nlanes);
uchar mask_e[nlanes] = { 0, -1, 2, -1, 4, -1, 6, -1,
8, -1, 10, -1, 12, -1, 14, -1 };

uchar mask_o[nlanes] = { 1, -1, 3, -1, 5, -1, 7, -1,
9, -1, 11, -1, 13, -1, 15, -1 };

to:

constexpr int nlanes = static_cast<int>(v_uint8x16::nlanes);
uchar mask_e[nlanes] = { 0, 255, 2, 255, 4, 255, 6, 255,
8, 255, 10, 255, 12, 255, 14, 255 };

uchar mask_o[nlanes] = { 1, 255, 3, 255, 5, 255, 7, 255,
9, 255, 11, 255, 13, 255, 15, 255 };

3.2 Two More Things After Installation

3.2.1 OpenVINO Python Wrapper Installation

Under folder /opt/intel/OpenVINO/openvino/build/wheels, there are the following 2 files:

➜  wheels git:(2022.3.2) ✗ ll
Permissions Size User Date Modified Name
.rw-r--r-- 13M lvision 30 Jul 14:09 openvino-2022.3.2-9279-cp311-cp311-manylinux_2_36_aarch64.whl
.rw-r--r-- 24k lvision 30 Jul 13:08 openvino_dev-2022.3.2-9279-py3-none-any.whl

Use the key parameter --break-system-packages:

sudo pip install --break-system-packages openvino-2022.3.2-9279-cp311-cp311-manylinux_2_36_aarch64.whl

and

sudo pip install --break-system-packages openvino_dev-2022.3.2-9279-py3-none-any.whl
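
A minimal sanity check that the wheels went in (a hedged sketch against the 2022.3 Python API; get_version() lives in openvino.runtime in this release):

python3 -c "from openvino.runtime import get_version; print(get_version())"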

3.2.2 Copy plugins.xml

It is kind of weird that:

  • Under build/lib, there is ONLY 1 file libade.a:
➜  lib git:(2022.3.2) ✗ pwd
/opt/intel/OpenVINO/openvino/build/lib
➜ lib git:(2022.3.2) ✗ ls
libade.a
  • Everything is built under bin/aarch64/Release:
➜  Release git:(2022.3.2) ✗ pwd                      
/opt/intel/OpenVINO/openvino/bin/aarch64/Release
➜ Release git:(2022.3.2) ✗ ll
Permissions Size User Date Modified Name
.rwxr-xr-x 723k lvision 30 Jul 14:08 benchmark_app
.rwxr-xr-x 461k lvision 30 Jul 14:08 benchmark_app_legacy
.rwxr-xr-x 264k lvision 30 Jul 14:08 classification_sample_async
.rwxr-xr-x 264k lvision 30 Jul 14:08 compile_tool
.rwxr-xr-x 199k lvision 30 Jul 14:08 hello_classification
.rwxr-xr-x 67k lvision 30 Jul 14:08 hello_classification_c
.rwxr-xr-x 199k lvision 30 Jul 14:08 hello_nv12_input_classification
.rwxr-xr-x 67k lvision 30 Jul 14:08 hello_nv12_input_classification_c
.rwxr-xr-x 68k lvision 30 Jul 14:08 hello_query_device
.rwxr-xr-x 199k lvision 30 Jul 14:08 hello_reshape_ssd
.rw-r--r-- 367k lvision 30 Jul 12:04 libcnpy.a
.rw-r--r-- 7.5M lvision 30 Jul 14:08 libfluid.a
.rwxr-xr-x 67k lvision 30 Jul 14:08 libformat_reader.so
.rw-r--r-- 3.4M lvision 30 Jul 12:08 libie_docs_snippets.a
.rw-r--r-- 166k lvision 30 Jul 12:34 libie_samples_utils.a
.rw-r--r-- 16M lvision 30 Jul 12:31 libinference_engine_legacy.a
.rw-r--r-- 2.9M lvision 30 Jul 12:36 libinference_engine_snippets.a
.rw-r--r-- 16M lvision 30 Jul 12:56 libinterpreter_backend.a
.rw-r--r-- 3.0k lvision 30 Jul 12:04 libitt.a
.rw-r--r-- 296k lvision 30 Jul 12:05 libmvnc.a
.rw-r--r-- 1.6M lvision 30 Jul 12:04 libngraph_builders.a
.rw-r--r-- 1.6M lvision 30 Jul 14:07 libngraph_reference.a
.rw-r--r-- 2.3M lvision 30 Jul 12:33 liboffline_transformations.a
.rw-r--r-- 11M lvision 30 Jul 12:17 libonnx.a
.rw-r--r-- 54k lvision 30 Jul 12:31 libonnx_common.a
.rw-r--r-- 676k lvision 30 Jul 12:06 libonnx_proto.a
.rwxr-xr-x 67k lvision 30 Jul 12:04 libopencv_c_wrapper.so
lrwxrwxrwx 19 lvision 30 Jul 14:08 libopenvino.so -> libopenvino.so.2232
.rwxr-xr-x 17M lvision 30 Jul 14:08 libopenvino.so.2022.3.2
lrwxrwxrwx 23 lvision 30 Jul 14:08 libopenvino.so.2232 -> libopenvino.so.2022.3.2
.rwxr-xr-x 330k lvision 30 Jul 14:08 libopenvino_auto_batch_plugin.so
.rwxr-xr-x 592k lvision 30 Jul 14:08 libopenvino_auto_plugin.so
lrwxrwxrwx 21 lvision 30 Jul 14:08 libopenvino_c.so -> libopenvino_c.so.2232
.rwxr-xr-x 461k lvision 30 Jul 14:08 libopenvino_c.so.2022.3.2
lrwxrwxrwx 25 lvision 30 Jul 14:08 libopenvino_c.so.2232 -> libopenvino_c.so.2022.3.2
.rwxr-xr-x 1.2M lvision 30 Jul 14:08 libopenvino_gapi_preproc.so
.rwxr-xr-x 395k lvision 30 Jul 14:08 libopenvino_hetero_plugin.so
.rwxr-xr-x 9.3M lvision 30 Jul 14:08 libopenvino_intel_myriad_plugin.so
lrwxrwxrwx 31 lvision 30 Jul 14:08 libopenvino_ir_frontend.so -> libopenvino_ir_frontend.so.2232
.rwxr-xr-x 528k lvision 30 Jul 14:08 libopenvino_ir_frontend.so.2022.3.2
lrwxrwxrwx 35 lvision 30 Jul 14:08 libopenvino_ir_frontend.so.2232 -> libopenvino_ir_frontend.so.2022.3.2
lrwxrwxrwx 33 lvision 30 Jul 14:08 libopenvino_onnx_frontend.so -> libopenvino_onnx_frontend.so.2232
.rwxr-xr-x 4.7M lvision 30 Jul 14:08 libopenvino_onnx_frontend.so.2022.3.2
lrwxrwxrwx 37 lvision 30 Jul 14:08 libopenvino_onnx_frontend.so.2232 -> libopenvino_onnx_frontend.so.2022.3.2
lrwxrwxrwx 35 lvision 30 Jul 14:08 libopenvino_paddle_frontend.so -> libopenvino_paddle_frontend.so.2232
.rwxr-xr-x 1.9M lvision 30 Jul 14:08 libopenvino_paddle_frontend.so.2022.3.2
lrwxrwxrwx 39 lvision 30 Jul 14:08 libopenvino_paddle_frontend.so.2232 -> libopenvino_paddle_frontend.so.2022.3.2
.rwxr-xr-x 68k lvision 30 Jul 14:08 libopenvino_template_extension.so
.rwxr-xr-x 8.0M lvision 30 Jul 14:08 libopenvino_template_plugin.so
lrwxrwxrwx 39 lvision 30 Jul 14:08 libopenvino_tensorflow_frontend.so -> libopenvino_tensorflow_frontend.so.2232
.rwxr-xr-x 3.5M lvision 30 Jul 14:08 libopenvino_tensorflow_frontend.so.2022.3.2
lrwxrwxrwx 43 lvision 30 Jul 14:08 libopenvino_tensorflow_frontend.so.2232 -> libopenvino_tensorflow_frontend.so.2022.3.2
.rw-r--r-- 3.2k lvision 30 Jul 12:04 libov_protobuf_shutdown.a
.rw-r--r-- 1.2k lvision 30 Jul 12:02 libov_shape_inference.a
.rw-r--r-- 1.8M lvision 29 Jul 20:03 libprotobuf-lite.a
.rw-r--r-- 7.8M lvision 29 Jul 20:04 libprotobuf.a
.rw-r--r-- 8.3M lvision 29 Jul 21:07 libprotoc.a
.rw-r--r-- 419k lvision 29 Jul 20:01 libpugixml.a
.rwxr-xr-x 133k lvision 30 Jul 14:08 libtemplate_extension.so
.rw-r--r-- 165k lvision 30 Jul 12:03 libutil.a
.rw-r--r-- 9.9M lvision 30 Jul 12:52 libvpu_common_lib.a
.rw-r--r-- 30M lvision 30 Jul 13:07 libvpu_graph_transformer.a
.rw-r--r-- 296k lvision 30 Jul 12:04 libXLink.a
.rwxr-xr-x 265k lvision 30 Jul 14:08 model_creation_sample
.rwxr-xr-x 68k lvision 30 Jul 14:08 ov_integration_snippet
.rwxr-xr-x 67k lvision 30 Jul 14:08 ov_integration_snippet_c
.rw-r--r-- 2.1M lvision 29 Jul 20:02 pcie-ma2x8x.mvcmd
.rw-r--r-- 486 lvision 30 Jul 14:08 plugins.xml
lrwxrwxrwx 15 lvision 30 Jul 13:56 protoc -> /usr/bin/protoc
.rwxr-xr-x 3.7M lvision 29 Jul 21:07 protoc-3.20.3.0.bak
drwxr-xr-x - lvision 30 Jul 14:08 python_api
.rwxr-xr-x 462k lvision 30 Jul 14:08 speech_sample
.rwxr-xr-x 68k lvision 30 Jul 14:08 sync_benchmark
.rwxr-xr-x 68k lvision 30 Jul 14:08 throughput_benchmark
.rw-r--r-- 2.4M lvision 29 Jul 20:02 usb-ma2x8x.mvcmd

Under this folder, there is a file plugins.xml. VERY IMPORTANT: put plugins.xml under /usr/local/lib.
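
One way to do it (a sketch; the source path is the Release folder listed above):

sudo cp /opt/intel/OpenVINO/openvino/bin/aarch64/Release/plugins.xml /usr/local/lib/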

3.3 Demonstration

3.3.1 hello_query_device.py

Go to /opt/intel/OpenVINO/openvino/samples/python/hello_query_device, and run Python script hello_query_device.py:

OpenVINO Sample hello_query_device.py
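
For reference, roughly what the sample boils down to (a hedged sketch against the 2022.3 Python API; it assumes plugins.xml is already under /usr/local/lib as described in 3.2.2):

cd /opt/intel/OpenVINO/openvino/samples/python/hello_query_device
python3 hello_query_device.py
# or, roughly equivalent:
python3 -c "from openvino.runtime import Core; print(Core().available_devices)"
# With the NCS2 plugged in, MYRIAD should show up among the available devices.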

3.3.2 AI Samples

Please refer to my NEXT blog Raspberry Pi 5 + Intel NCS2 + OpenVINO In 2024:

Try a media center on a Raspberry Pi.

1. Overview

1.1 What is LibreELEC? What is Kodi?

  • What is LibreELEC? Directly cited from LibreELEC Wiki: LibreELEC is a minimalist 'Just enough OS' Linux distribution for running Kodi.
  • What is Kodi? Directly cited from About Kodi: Kodi is an award-winning free and open source (GPL) software media player and entertainment hub that can be installed on Linux, OSX, Windows, iOS, tvOS and Android. It is designed around a "10-foot user interface" for use with televisions and remote controls.

1.2 Flash LibreELEC Onto Raspberry Pi

$ sudo dd if=./LibreELEC-RPi4.aarch64-12.0.0.img of=/dev/sdb bs=1M status=progress conv=fsync
[sudo] password for lvision:
549+0 records in
549+0 records out
575668224 bytes (576 MB, 549 MiB) copied, 68.3918 s, 8.4 MB/s

2. ssh Into LibreELEC

2.1 Overview

You'll be automatically led to the following two pages after a direct reboot. The ONLY thing you need to do is connect your Raspberry Pi (with LibreELEC already installed) to your smart TV with an HDMI cable.

LibreELEC Welcome LibreELEC Kodi
LibreELEC Welcome LibreELEC Kodi

2.2 Enable SSH and Allow HTTP

Same as with OctoPi, discussed in my previous blog OctoPrint on A Raspberry Pi, LibreELEC's Wi-Fi is NOT enabled by default. Therefore, we'll have to access LibreELEC via a wired connection for the FIRST time.

In fact, you'll even have to enable SSH from within the LibreELEC configuration on the smart TV before you're able to SSH into LibreELEC on the Raspberry Pi. Better to allow HTTP at the same time, as follows:

LibreELEC Enable SSH LibreELEC Allow HTTP
LibreELEC Enable SSH LibreELEC Allow HTTP

2.3 ssh Into LibreELEC via Wired Connection

$ ssh root@192.168.1.75
root@192.168.1.75's password:
##############################################
# LibreELEC #
# https://libreelec.tv #
##############################################

LibreELEC (official): 12.0.0 (RPi4.aarch64)

Now, you're able to SSH into LibreELEC through the wired connection.

2.4 View LibreELEC Over HTTP

LibreELEC Web

And you're able to browse the content of LibreELEC remotely via HTTP.

2.5 Configure Wi-Fi

You'll have to use the command connmanctl; a typical session is sketched after the screenshots below.

connmanctl How? connmanctl Outcome
connmanctl How? connmanctl Outcome
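
For reference, a typical interactive session looks roughly like this (a sketch; wifi_<my_network_service_id> is a placeholder for the service name printed by connmanctl services):

connmanctl
connmanctl> enable wifi
connmanctl> scan wifi
connmanctl> services                              # lists wifi_<...>_managed_psk entries
connmanctl> agent on                              # needed so connmanctl can prompt for the passphrase
connmanctl> connect wifi_<my_network_service_id>
connmanctl> quit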

2.5.1 Passphrase is Just Your Wifi Password!!

connmanctl> connect wifi_<my_network_service_id>
Agent RequestInput wifi_<my_network_service_id>
Passphrase = [ Type=psk, Requirement=mandatory ]
Passphrase?

When you see the above prompt (Passphrase?), just input your Wifi password.

2.5.2 Important Commands

You may use the following commands from time to time during Wifi configuration.

  • systemctl status connman
  • systemctl restart connman
  • journalctl -u connman
  • connmanctl state
  • connmanctl services
  • etc.

2.5.3 LibreELEC Configuration

And you should be able to double-check your Wifi configuration via LibreELEC Configuration, as shown:

LibreELEC Configuration Connections Wifi
LibreELEC Configuration Connections Wifi

Now you're good to go. You can put your videos, pictures, etc. under the respective folders of LibreELEC and use it as a real media center.
